Apr 28 00:18:06.241659 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026 Apr 28 00:18:06.241679 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:18:06.241688 kernel: BIOS-provided physical RAM map: Apr 28 00:18:06.241694 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 28 00:18:06.241699 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 28 00:18:06.241705 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 28 00:18:06.241711 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 28 00:18:06.241716 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 28 00:18:06.241721 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 28 00:18:06.241729 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 28 00:18:06.241734 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 28 00:18:06.241739 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 28 00:18:06.241744 kernel: NX (Execute Disable) protection: active Apr 28 00:18:06.241749 kernel: APIC: Static calls initialized Apr 28 00:18:06.241756 kernel: SMBIOS 2.8 present. Apr 28 00:18:06.241763 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 28 00:18:06.241769 kernel: Hypervisor detected: KVM Apr 28 00:18:06.241774 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 28 00:18:06.241780 kernel: kvm-clock: using sched offset of 10208018784 cycles Apr 28 00:18:06.241786 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 28 00:18:06.241791 kernel: tsc: Detected 2793.438 MHz processor Apr 28 00:18:06.241797 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 28 00:18:06.241803 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 28 00:18:06.241809 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 28 00:18:06.241816 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 28 00:18:06.241822 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 28 00:18:06.241827 kernel: Using GB pages for direct mapping Apr 28 00:18:06.241833 kernel: ACPI: Early table checksum verification disabled Apr 28 00:18:06.241839 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 28 00:18:06.241844 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241850 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241855 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241861 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 28 00:18:06.241868 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241874 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241879 kernel: 
ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241944 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:18:06.241951 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 28 00:18:06.241957 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 28 00:18:06.241961 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 28 00:18:06.241969 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 28 00:18:06.241976 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 28 00:18:06.241981 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 28 00:18:06.241986 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 28 00:18:06.241991 kernel: No NUMA configuration found Apr 28 00:18:06.241996 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 28 00:18:06.242000 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 28 00:18:06.242007 kernel: Zone ranges: Apr 28 00:18:06.242011 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 28 00:18:06.242016 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 28 00:18:06.242021 kernel: Normal empty Apr 28 00:18:06.242026 kernel: Movable zone start for each node Apr 28 00:18:06.242031 kernel: Early memory node ranges Apr 28 00:18:06.242036 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 28 00:18:06.242041 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 28 00:18:06.242046 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 28 00:18:06.242051 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 00:18:06.242057 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 28 00:18:06.242062 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 28 00:18:06.242067 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 28 00:18:06.242072 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 28 00:18:06.242077 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 28 00:18:06.242084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 28 00:18:06.242092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 28 00:18:06.242100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 28 00:18:06.242108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 28 00:18:06.242117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 28 00:18:06.242124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 28 00:18:06.242132 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 28 00:18:06.242139 kernel: TSC deadline timer available Apr 28 00:18:06.242148 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 28 00:18:06.242156 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 28 00:18:06.242163 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 28 00:18:06.242171 kernel: kvm-guest: setup PV sched yield Apr 28 00:18:06.242178 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 28 00:18:06.242189 kernel: Booting paravirtualized kernel on KVM Apr 28 00:18:06.242198 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 28 00:18:06.242205 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 
nr_cpu_ids:4 nr_node_ids:1 Apr 28 00:18:06.242214 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 28 00:18:06.242223 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 28 00:18:06.242236 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 28 00:18:06.242241 kernel: kvm-guest: PV spinlocks enabled Apr 28 00:18:06.242246 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 28 00:18:06.242252 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:18:06.242260 kernel: random: crng init done Apr 28 00:18:06.242265 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 28 00:18:06.242270 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 28 00:18:06.242275 kernel: Fallback order for Node 0: 0 Apr 28 00:18:06.242280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 28 00:18:06.242285 kernel: Policy zone: DMA32 Apr 28 00:18:06.242290 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 28 00:18:06.242295 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137896K reserved, 0K cma-reserved) Apr 28 00:18:06.242301 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 28 00:18:06.242306 kernel: ftrace: allocating 37996 entries in 149 pages Apr 28 00:18:06.242311 kernel: ftrace: allocated 149 pages with 4 groups Apr 28 00:18:06.242356 kernel: Dynamic Preempt: voluntary Apr 28 00:18:06.242362 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 28 00:18:06.242367 kernel: rcu: RCU event tracing is enabled. Apr 28 00:18:06.242372 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 28 00:18:06.242378 kernel: Trampoline variant of Tasks RCU enabled. Apr 28 00:18:06.242383 kernel: Rude variant of Tasks RCU enabled. Apr 28 00:18:06.242390 kernel: Tracing variant of Tasks RCU enabled. Apr 28 00:18:06.242395 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 28 00:18:06.242400 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 28 00:18:06.242405 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 28 00:18:06.242410 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 28 00:18:06.242415 kernel: Console: colour VGA+ 80x25 Apr 28 00:18:06.242420 kernel: printk: console [ttyS0] enabled Apr 28 00:18:06.242425 kernel: ACPI: Core revision 20230628 Apr 28 00:18:06.242430 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 28 00:18:06.242436 kernel: APIC: Switch to symmetric I/O mode setup Apr 28 00:18:06.242441 kernel: x2apic enabled Apr 28 00:18:06.242446 kernel: APIC: Switched APIC routing to: physical x2apic Apr 28 00:18:06.242452 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 28 00:18:06.242457 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 28 00:18:06.242462 kernel: kvm-guest: setup PV IPIs Apr 28 00:18:06.242467 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 28 00:18:06.242472 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 00:18:06.242484 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 28 00:18:06.242490 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 28 00:18:06.242495 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 28 00:18:06.242501 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 28 00:18:06.242508 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 28 00:18:06.242513 kernel: Spectre V2 : Mitigation: Retpolines Apr 28 00:18:06.242519 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 28 00:18:06.242525 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 28 00:18:06.242532 kernel: RETBleed: Vulnerable Apr 28 00:18:06.242538 kernel: Speculative Store Bypass: Vulnerable Apr 28 00:18:06.242543 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 28 00:18:06.242549 kernel: GDS: Unknown: Dependent on hypervisor status Apr 28 00:18:06.242555 kernel: active return thunk: its_return_thunk Apr 28 00:18:06.242560 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 28 00:18:06.242566 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 28 00:18:06.242571 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 28 00:18:06.242577 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 28 00:18:06.242584 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 28 00:18:06.242589 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 28 00:18:06.242595 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 28 00:18:06.242600 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 28 00:18:06.242606 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 28 00:18:06.242611 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 28 00:18:06.242617 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 28 00:18:06.242622 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 28 00:18:06.242628 kernel: Freeing SMP alternatives memory: 32K Apr 28 00:18:06.242636 kernel: pid_max: default: 32768 minimum: 301 Apr 28 00:18:06.242642 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 28 00:18:06.242647 kernel: landlock: Up and running. 
Apr 28 00:18:06.242653 kernel: SELinux: Initializing. Apr 28 00:18:06.242658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 00:18:06.242663 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 00:18:06.242669 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 28 00:18:06.242675 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:18:06.242681 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:18:06.242688 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:18:06.242693 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 28 00:18:06.242699 kernel: signal: max sigframe size: 3632 Apr 28 00:18:06.242704 kernel: rcu: Hierarchical SRCU implementation. Apr 28 00:18:06.242710 kernel: rcu: Max phase no-delay instances is 400. Apr 28 00:18:06.242715 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 28 00:18:06.242721 kernel: smp: Bringing up secondary CPUs ... Apr 28 00:18:06.242727 kernel: smpboot: x86: Booting SMP configuration: Apr 28 00:18:06.242732 kernel: .... node #0, CPUs: #1 #2 #3 Apr 28 00:18:06.242739 kernel: smp: Brought up 1 node, 4 CPUs Apr 28 00:18:06.242744 kernel: smpboot: Max logical packages: 1 Apr 28 00:18:06.242750 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 28 00:18:06.242755 kernel: devtmpfs: initialized Apr 28 00:18:06.242761 kernel: x86/mm: Memory block size: 128MB Apr 28 00:18:06.242766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 28 00:18:06.242772 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 28 00:18:06.242778 kernel: pinctrl core: initialized pinctrl subsystem Apr 28 00:18:06.242783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 28 00:18:06.242790 kernel: audit: initializing netlink subsys (disabled) Apr 28 00:18:06.242796 kernel: audit: type=2000 audit(1777335481.811:1): state=initialized audit_enabled=0 res=1 Apr 28 00:18:06.242801 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 28 00:18:06.242807 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 28 00:18:06.242812 kernel: cpuidle: using governor menu Apr 28 00:18:06.242817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 28 00:18:06.242823 kernel: dca service started, version 1.12.1 Apr 28 00:18:06.242829 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 28 00:18:06.242834 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 28 00:18:06.242841 kernel: PCI: Using configuration type 1 for base access Apr 28 00:18:06.242847 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 28 00:18:06.242852 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 28 00:18:06.242858 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 28 00:18:06.242864 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 28 00:18:06.242869 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 28 00:18:06.242874 kernel: ACPI: Added _OSI(Module Device) Apr 28 00:18:06.242880 kernel: ACPI: Added _OSI(Processor Device) Apr 28 00:18:06.243230 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 28 00:18:06.243243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 28 00:18:06.243249 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 28 00:18:06.243254 kernel: ACPI: Interpreter enabled Apr 28 00:18:06.243260 kernel: ACPI: PM: (supports S0 S3 S5) Apr 28 00:18:06.243265 kernel: ACPI: Using IOAPIC for interrupt routing Apr 28 00:18:06.243271 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 28 00:18:06.243276 kernel: PCI: Using E820 reservations for host bridge windows Apr 28 00:18:06.243281 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 28 00:18:06.243287 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 28 00:18:06.243515 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 28 00:18:06.243578 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 28 00:18:06.243633 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 28 00:18:06.243640 kernel: PCI host bridge to bus 0000:00 Apr 28 00:18:06.243701 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 28 00:18:06.243751 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 28 00:18:06.243802 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 28 00:18:06.243851 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 28 00:18:06.243972 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 28 00:18:06.244043 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 28 00:18:06.244299 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 28 00:18:06.244559 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 28 00:18:06.244629 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 28 00:18:06.244690 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 28 00:18:06.244795 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 28 00:18:06.244850 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 28 00:18:06.245056 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 28 00:18:06.245143 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 28 00:18:06.245224 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 28 00:18:06.245306 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 28 00:18:06.245415 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 28 00:18:06.245476 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 28 00:18:06.245531 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 28 00:18:06.245585 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 28 00:18:06.245639 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Apr 28 00:18:06.245699 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 28 00:18:06.245757 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 28 00:18:06.245811 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 28 00:18:06.245865 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 28 00:18:06.245985 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 28 00:18:06.246058 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 28 00:18:06.246241 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 28 00:18:06.246307 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 28 00:18:06.246748 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 28 00:18:06.246805 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 28 00:18:06.246868 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 28 00:18:06.247005 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 28 00:18:06.247013 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 28 00:18:06.247019 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 28 00:18:06.247025 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 28 00:18:06.247030 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 28 00:18:06.247041 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 28 00:18:06.247046 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 28 00:18:06.247052 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 28 00:18:06.247057 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 28 00:18:06.247063 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 28 00:18:06.247068 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 28 00:18:06.247074 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 28 00:18:06.247079 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 28 00:18:06.247085 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 28 00:18:06.247092 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 28 00:18:06.247097 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 28 00:18:06.247102 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 28 00:18:06.247108 kernel: iommu: Default domain type: Translated Apr 28 00:18:06.247113 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 28 00:18:06.247119 kernel: PCI: Using ACPI for IRQ routing Apr 28 00:18:06.247124 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 28 00:18:06.247129 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 28 00:18:06.247135 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 28 00:18:06.247228 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 28 00:18:06.247286 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 28 00:18:06.247381 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 28 00:18:06.247389 kernel: vgaarb: loaded Apr 28 00:18:06.247394 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 28 00:18:06.247400 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 28 00:18:06.247405 kernel: clocksource: Switched to clocksource kvm-clock Apr 28 00:18:06.247411 kernel: VFS: Disk quotas dquot_6.6.0 Apr 28 00:18:06.247419 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Apr 28 00:18:06.247425 kernel: pnp: PnP ACPI init Apr 28 00:18:06.247596 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 28 00:18:06.247605 kernel: pnp: PnP ACPI: found 6 devices Apr 28 00:18:06.247611 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 28 00:18:06.247617 kernel: NET: Registered PF_INET protocol family Apr 28 00:18:06.247622 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 28 00:18:06.247628 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 28 00:18:06.247635 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 28 00:18:06.247641 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 28 00:18:06.247646 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 00:18:06.247652 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 00:18:06.247657 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:18:06.247663 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:18:06.247668 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 00:18:06.247674 kernel: NET: Registered PF_XDP protocol family Apr 28 00:18:06.247724 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 28 00:18:06.247775 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 28 00:18:06.247824 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 28 00:18:06.247874 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 28 00:18:06.248228 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 28 00:18:06.248303 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 28 00:18:06.248314 kernel: PCI: CLS 0 bytes, default 64 Apr 28 00:18:06.248378 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 28 00:18:06.248389 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 00:18:06.248405 kernel: Initialise system trusted keyrings Apr 28 00:18:06.248415 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 00:18:06.248425 kernel: Key type asymmetric registered Apr 28 00:18:06.248434 kernel: Asymmetric key parser 'x509' registered Apr 28 00:18:06.248442 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 28 00:18:06.248452 kernel: io scheduler mq-deadline registered Apr 28 00:18:06.248460 kernel: io scheduler kyber registered Apr 28 00:18:06.248469 kernel: io scheduler bfq registered Apr 28 00:18:06.248478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 28 00:18:06.248490 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 28 00:18:06.248498 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 28 00:18:06.248506 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 28 00:18:06.248515 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 00:18:06.248524 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 28 00:18:06.248533 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 28 00:18:06.248543 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 28 00:18:06.248552 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 28 00:18:06.248670 
kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 28 00:18:06.248680 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 28 00:18:06.248731 kernel: rtc_cmos 00:04: registered as rtc0 Apr 28 00:18:06.248782 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T00:18:05 UTC (1777335485) Apr 28 00:18:06.248833 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 28 00:18:06.248840 kernel: intel_pstate: CPU model not supported Apr 28 00:18:06.248846 kernel: NET: Registered PF_INET6 protocol family Apr 28 00:18:06.248851 kernel: Segment Routing with IPv6 Apr 28 00:18:06.248856 kernel: In-situ OAM (IOAM) with IPv6 Apr 28 00:18:06.248864 kernel: NET: Registered PF_PACKET protocol family Apr 28 00:18:06.248869 kernel: Key type dns_resolver registered Apr 28 00:18:06.248874 kernel: IPI shorthand broadcast: enabled Apr 28 00:18:06.248880 kernel: sched_clock: Marking stable (2502044922, 979734592)->(4000039077, -518259563) Apr 28 00:18:06.248958 kernel: registered taskstats version 1 Apr 28 00:18:06.248964 kernel: Loading compiled-in X.509 certificates Apr 28 00:18:06.248970 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18' Apr 28 00:18:06.248976 kernel: Key type .fscrypt registered Apr 28 00:18:06.248981 kernel: Key type fscrypt-provisioning registered Apr 28 00:18:06.248988 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 00:18:06.248994 kernel: ima: Allocated hash algorithm: sha1 Apr 28 00:18:06.248999 kernel: ima: No architecture policies found Apr 28 00:18:06.249005 kernel: clk: Disabling unused clocks Apr 28 00:18:06.249010 kernel: Freeing unused kernel image (initmem) memory: 42884K Apr 28 00:18:06.249016 kernel: Write protecting the kernel read-only data: 36864k Apr 28 00:18:06.249021 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 28 00:18:06.249027 kernel: Run /init as init process Apr 28 00:18:06.249032 kernel: with arguments: Apr 28 00:18:06.249038 kernel: /init Apr 28 00:18:06.249045 kernel: with environment: Apr 28 00:18:06.249050 kernel: HOME=/ Apr 28 00:18:06.249055 kernel: TERM=linux Apr 28 00:18:06.249063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:18:06.249070 systemd[1]: Detected virtualization kvm. Apr 28 00:18:06.249076 systemd[1]: Detected architecture x86-64. Apr 28 00:18:06.249082 systemd[1]: Running in initrd. Apr 28 00:18:06.249089 systemd[1]: No hostname configured, using default hostname. Apr 28 00:18:06.249095 systemd[1]: Hostname set to . Apr 28 00:18:06.249101 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:18:06.249107 systemd[1]: Queued start job for default target initrd.target. Apr 28 00:18:06.249113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:18:06.249119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:18:06.249125 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 28 00:18:06.249131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:18:06.249139 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 00:18:06.249146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 28 00:18:06.249163 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 28 00:18:06.249169 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 28 00:18:06.249175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:18:06.249183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:18:06.249189 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:18:06.249195 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:18:06.249201 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:18:06.249207 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:18:06.249213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:18:06.249220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:18:06.249226 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 00:18:06.249232 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 00:18:06.249239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:18:06.249245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:18:06.249251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:18:06.249257 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:18:06.249264 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 00:18:06.249270 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:18:06.249276 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 00:18:06.249281 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 00:18:06.249289 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:18:06.249295 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:18:06.249301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:06.249599 systemd-journald[195]: Collecting audit messages is disabled. Apr 28 00:18:06.249630 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 00:18:06.249637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:18:06.249644 systemd-journald[195]: Journal started Apr 28 00:18:06.249664 systemd-journald[195]: Runtime Journal (/run/log/journal/32cd837a03624804813d64b92ed979dd) is 6.0M, max 48.4M, 42.3M free. Apr 28 00:18:06.255039 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:18:06.255356 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 00:18:06.260735 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:18:06.266098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 28 00:18:06.274074 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:18:06.276153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:18:06.308591 systemd-modules-load[196]: Inserted module 'overlay' Apr 28 00:18:06.315448 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:18:06.315760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:18:06.396248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 28 00:18:06.398622 systemd-modules-load[196]: Inserted module 'br_netfilter' Apr 28 00:18:06.666846 kernel: Bridge firewalling registered Apr 28 00:18:06.400607 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:18:06.672193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:18:06.672535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:06.688293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:18:06.704501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:18:06.708957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:18:06.749308 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:18:06.782479 systemd-resolved[222]: Positive Trust Anchors: Apr 28 00:18:06.782782 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:18:06.783221 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:18:06.788236 systemd-resolved[222]: Defaulting to hostname 'linux'. Apr 28 00:18:06.823600 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 00:18:06.831003 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:18:06.838792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:18:06.849663 dracut-cmdline[232]: dracut-dracut-053 Apr 28 00:18:06.856720 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:18:06.967495 kernel: SCSI subsystem initialized Apr 28 00:18:06.980304 kernel: Loading iSCSI transport class v2.0-870. 
Apr 28 00:18:06.997136 kernel: iscsi: registered transport (tcp) Apr 28 00:18:07.027633 kernel: iscsi: registered transport (qla4xxx) Apr 28 00:18:07.027698 kernel: QLogic iSCSI HBA Driver Apr 28 00:18:07.078118 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 00:18:07.106722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 00:18:07.154717 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 00:18:07.162654 kernel: device-mapper: uevent: version 1.0.3 Apr 28 00:18:07.171675 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 00:18:07.235711 kernel: raid6: avx512x4 gen() 34431 MB/s Apr 28 00:18:07.251311 kernel: raid6: avx512x2 gen() 27662 MB/s Apr 28 00:18:07.270433 kernel: raid6: avx512x1 gen() 33338 MB/s Apr 28 00:18:07.289549 kernel: raid6: avx2x4 gen() 27618 MB/s Apr 28 00:18:07.307432 kernel: raid6: avx2x2 gen() 30708 MB/s Apr 28 00:18:07.330662 kernel: raid6: avx2x1 gen() 15851 MB/s Apr 28 00:18:07.330826 kernel: raid6: using algorithm avx512x4 gen() 34431 MB/s Apr 28 00:18:07.379671 kernel: raid6: .... xor() 7030 MB/s, rmw enabled Apr 28 00:18:07.379792 kernel: raid6: using avx512x2 recovery algorithm Apr 28 00:18:07.407402 kernel: xor: automatically using best checksumming function avx Apr 28 00:18:07.636634 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 00:18:07.651065 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:18:07.676561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:18:07.695677 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 28 00:18:07.701047 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:18:07.725726 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 00:18:07.750864 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Apr 28 00:18:07.793565 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:18:07.815696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:18:07.865290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:18:07.889136 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 00:18:07.914069 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 28 00:18:07.918013 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 00:18:07.924764 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 00:18:07.929379 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 28 00:18:07.940099 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 00:18:07.940145 kernel: GPT:9289727 != 19775487 Apr 28 00:18:07.940154 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 00:18:07.949662 kernel: GPT:9289727 != 19775487 Apr 28 00:18:07.949735 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 00:18:07.949744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:18:07.948869 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:18:07.954770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:18:07.955114 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Apr 28 00:18:07.998159 kernel: libata version 3.00 loaded. Apr 28 00:18:07.995373 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:18:07.999706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:18:07.999951 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:18:08.023607 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:18:08.036322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:18:08.044971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:08.053029 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 00:18:08.057608 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 00:18:08.057955 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:08.075205 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 28 00:18:08.075400 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 00:18:08.075488 kernel: AVX2 version of gcm_enc/dec engaged. Apr 28 00:18:08.079005 kernel: AES CTR mode by8 optimization enabled Apr 28 00:18:08.082742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:08.090985 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468) Apr 28 00:18:08.091022 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (475) Apr 28 00:18:08.100287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:18:08.115076 kernel: scsi host0: ahci Apr 28 00:18:08.115951 kernel: scsi host1: ahci Apr 28 00:18:08.121147 kernel: scsi host2: ahci Apr 28 00:18:08.121407 kernel: scsi host3: ahci Apr 28 00:18:08.124196 kernel: scsi host4: ahci Apr 28 00:18:08.133454 kernel: scsi host5: ahci Apr 28 00:18:08.133595 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 28 00:18:08.133604 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 28 00:18:08.139780 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 28 00:18:08.139835 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 28 00:18:08.143124 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 28 00:18:08.149423 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 28 00:18:08.150324 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 28 00:18:08.161108 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 00:18:08.463984 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:08.452616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 28 00:18:08.492445 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:08.492471 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 28 00:18:08.492479 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:08.492486 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:08.492493 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 28 00:18:08.492501 kernel: ata3.00: applying bridge limits Apr 28 00:18:08.492508 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:08.492515 kernel: ata3.00: configured for UDMA/100 Apr 28 00:18:08.492521 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 00:18:08.473295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:18:08.496488 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 28 00:18:08.505159 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 00:18:08.528263 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 00:18:08.537316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:18:08.549425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:18:08.549513 disk-uuid[554]: Primary Header is updated. Apr 28 00:18:08.549513 disk-uuid[554]: Secondary Entries is updated. Apr 28 00:18:08.549513 disk-uuid[554]: Secondary Header is updated. Apr 28 00:18:08.564087 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:18:08.572023 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:18:08.610605 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 28 00:18:08.610784 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 00:18:08.635005 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 28 00:18:09.574161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:18:09.575036 disk-uuid[560]: The operation has completed successfully. Apr 28 00:18:09.615290 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 00:18:09.616796 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 00:18:09.653463 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 00:18:09.668121 sh[593]: Success Apr 28 00:18:09.698160 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 28 00:18:09.772414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 00:18:09.797608 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 00:18:09.801617 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 28 00:18:09.827516 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93 Apr 28 00:18:09.827647 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:09.837454 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 00:18:09.837522 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 00:18:09.843614 kernel: BTRFS info (device dm-0): using free space tree Apr 28 00:18:09.855643 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 28 00:18:09.866851 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 00:18:09.886873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 00:18:09.894066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 00:18:09.949461 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:18:09.949532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:09.949548 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:18:09.964850 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:18:09.980055 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 00:18:09.989392 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:18:09.998286 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:18:10.023241 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 28 00:18:10.269274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:18:10.305166 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:18:10.407156 systemd-networkd[779]: lo: Link UP Apr 28 00:18:10.411773 kernel: hrtimer: interrupt took 17382730 ns Apr 28 00:18:10.407162 systemd-networkd[779]: lo: Gained carrier Apr 28 00:18:10.411476 systemd-networkd[779]: Enumeration completed Apr 28 00:18:10.425472 ignition[689]: Ignition 2.19.0 Apr 28 00:18:10.411805 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:18:10.425611 ignition[689]: Stage: fetch-offline Apr 28 00:18:10.413754 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:18:10.426060 ignition[689]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:10.413757 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:18:10.426070 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:10.418997 systemd-networkd[779]: eth0: Link UP Apr 28 00:18:10.426413 ignition[689]: parsed url from cmdline: "" Apr 28 00:18:10.419000 systemd-networkd[779]: eth0: Gained carrier Apr 28 00:18:10.426417 ignition[689]: no config URL provided Apr 28 00:18:10.419008 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:18:10.426424 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:18:10.419049 systemd[1]: Reached target network.target - Network. 
Apr 28 00:18:10.426432 ignition[689]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:18:10.445274 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:18:10.426561 ignition[689]: op(1): [started] loading QEMU firmware config module Apr 28 00:18:10.426567 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 00:18:10.450384 ignition[689]: op(1): [finished] loading QEMU firmware config module Apr 28 00:18:10.608384 ignition[689]: parsing config with SHA512: b33c5127d56f788ba813f2996ff93460446fa79501b7adfb83a399017efeaa4a9807e5cbf26e8feb31c562bbb4d602ada34acd758753f621414d4a27ce3b67c8 Apr 28 00:18:10.616481 unknown[689]: fetched base config from "system" Apr 28 00:18:10.617190 unknown[689]: fetched user config from "qemu" Apr 28 00:18:10.621075 ignition[689]: fetch-offline: fetch-offline passed Apr 28 00:18:10.625374 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:18:10.623421 ignition[689]: Ignition finished successfully Apr 28 00:18:10.634303 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 28 00:18:10.651798 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 00:18:10.746035 ignition[785]: Ignition 2.19.0 Apr 28 00:18:10.746133 ignition[785]: Stage: kargs Apr 28 00:18:10.746410 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:10.746418 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:10.747053 ignition[785]: kargs: kargs passed Apr 28 00:18:10.760627 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 00:18:10.747087 ignition[785]: Ignition finished successfully Apr 28 00:18:10.776302 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 28 00:18:10.904440 ignition[793]: Ignition 2.19.0 Apr 28 00:18:10.904561 ignition[793]: Stage: disks Apr 28 00:18:10.904845 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:10.904856 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:10.923599 ignition[793]: disks: disks passed Apr 28 00:18:10.923784 ignition[793]: Ignition finished successfully Apr 28 00:18:10.930030 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 00:18:10.937310 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 00:18:10.945402 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:18:10.960742 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:18:10.966001 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:18:10.978858 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:18:11.007232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 00:18:11.060623 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 28 00:18:11.067508 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 00:18:11.086768 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 00:18:11.266102 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none. Apr 28 00:18:11.267865 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 28 00:18:11.277121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 00:18:11.300162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:18:11.308854 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 00:18:11.314509 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 28 00:18:11.347716 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Apr 28 00:18:11.347741 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:18:11.347749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:11.347756 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:18:11.347764 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:18:11.314623 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 00:18:11.314647 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:18:11.326085 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 00:18:11.333004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 00:18:11.379192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 00:18:11.414100 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Apr 28 00:18:11.428661 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Apr 28 00:18:11.434275 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Apr 28 00:18:11.439258 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Apr 28 00:18:11.662215 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 00:18:11.689544 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 00:18:11.695253 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 28 00:18:11.723656 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 28 00:18:11.729422 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:18:11.744074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 00:18:11.761765 systemd-networkd[779]: eth0: Gained IPv6LL Apr 28 00:18:11.862457 ignition[927]: INFO : Ignition 2.19.0 Apr 28 00:18:11.862457 ignition[927]: INFO : Stage: mount Apr 28 00:18:11.862457 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:11.862457 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:11.880125 ignition[927]: INFO : mount: mount passed Apr 28 00:18:11.880125 ignition[927]: INFO : Ignition finished successfully Apr 28 00:18:11.875797 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 00:18:11.910761 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 00:18:12.283478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 28 00:18:12.309294 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Apr 28 00:18:12.320213 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:18:12.320322 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:12.320417 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:18:12.339330 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:18:12.343422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 00:18:12.411638 ignition[956]: INFO : Ignition 2.19.0 Apr 28 00:18:12.411638 ignition[956]: INFO : Stage: files Apr 28 00:18:12.411638 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:12.411638 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:12.429735 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 28 00:18:12.429735 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 00:18:12.429735 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 00:18:12.429735 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 00:18:12.429735 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 00:18:12.429735 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 00:18:12.429735 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:18:12.429735 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 00:18:12.423750 unknown[956]: wrote ssh authorized keys file for user: core Apr 28 00:18:12.514323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 28 00:18:13.313297 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:18:13.322511 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 28 00:18:13.322511 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 28 00:18:13.322511 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:18:13.322511 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:18:13.322511 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:13.394063 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 28 00:18:13.819780 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 28 00:18:19.142147 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 28 00:18:19.151734 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 00:18:19.241544 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:18:19.251805 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:18:19.260041 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 28 00:18:19.260041 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 28 00:18:19.260041 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 00:18:19.260041 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:18:19.260041 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 28 
00:18:19.260041 ignition[956]: INFO : files: files passed Apr 28 00:18:19.260041 ignition[956]: INFO : Ignition finished successfully Apr 28 00:18:19.269222 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 28 00:18:19.310227 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 28 00:18:19.316833 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 28 00:18:19.336236 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 28 00:18:19.336358 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 28 00:18:19.349354 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Apr 28 00:18:19.354688 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:18:19.354688 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:18:19.370552 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:18:19.380368 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:18:19.380863 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 28 00:18:19.412345 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 28 00:18:19.457832 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 28 00:18:19.458042 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 28 00:18:19.458381 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 00:18:19.467774 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 00:18:19.479705 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 00:18:19.497370 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 00:18:19.537732 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:18:19.569036 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 00:18:19.595710 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:18:19.602275 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:18:19.615336 systemd[1]: Stopped target timers.target - Timer Units. Apr 28 00:18:19.626136 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 28 00:18:19.626557 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:18:19.669322 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 00:18:19.686562 systemd[1]: Stopped target basic.target - Basic System. Apr 28 00:18:19.692874 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 00:18:19.696580 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:18:19.706732 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 28 00:18:19.723636 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 28 00:18:19.733521 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:18:19.747957 systemd[1]: Stopped target sysinit.target - System Initialization. 
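The files stage above records every artifact it wrote under /sysroot: the Helm tarball, the manifests in /home/core, the kubernetes sysext link, and the unit presets. After switch-root those paths appear under /, so a quick post-boot check might look like the sketch below (commands are assumptions for illustration, not taken from the log):

```bash
# Artifacts reported by the Ignition files stage (paths taken from the log above).
ls -l /opt/helm-v3.17.3-linux-amd64.tar.gz /home/core/install.sh /home/core/nginx.yaml
readlink /etc/extensions/kubernetes.raw   # -> /opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw
# Presets applied by op(f)/op(11): coreos-metadata disabled, prepare-helm enabled.
systemctl is-enabled prepare-helm.service
systemctl is-enabled coreos-metadata.service || true
```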
Apr 28 00:18:19.753711 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 28 00:18:19.764316 systemd[1]: Stopped target swap.target - Swaps. Apr 28 00:18:19.769114 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 28 00:18:19.769379 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:18:19.790241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:18:19.796110 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:18:19.802750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 28 00:18:19.803030 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:18:19.825570 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 28 00:18:19.826021 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 28 00:18:19.853155 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 28 00:18:19.854001 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:18:19.873280 systemd[1]: Stopped target paths.target - Path Units. Apr 28 00:18:19.873827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 28 00:18:19.879460 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:18:19.915779 systemd[1]: Stopped target slices.target - Slice Units. Apr 28 00:18:19.928959 systemd[1]: Stopped target sockets.target - Socket Units. Apr 28 00:18:19.936585 systemd[1]: iscsid.socket: Deactivated successfully. Apr 28 00:18:19.936737 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:18:19.944170 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 28 00:18:19.944282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:18:19.951571 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 28 00:18:19.951823 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:18:19.959698 systemd[1]: ignition-files.service: Deactivated successfully. Apr 28 00:18:19.960149 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 28 00:18:19.996488 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 28 00:18:20.002466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 28 00:18:20.002688 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:18:20.017795 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 28 00:18:20.021144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 28 00:18:20.021298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:18:20.044555 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 28 00:18:20.044788 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:18:20.117883 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 28 00:18:20.118606 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 28 00:18:20.133283 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 28 00:18:20.160361 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 28 00:18:20.161720 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 28 00:18:20.176527 ignition[1010]: INFO : Ignition 2.19.0 Apr 28 00:18:20.176527 ignition[1010]: INFO : Stage: umount Apr 28 00:18:20.183872 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:20.183872 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:20.183872 ignition[1010]: INFO : umount: umount passed Apr 28 00:18:20.183872 ignition[1010]: INFO : Ignition finished successfully Apr 28 00:18:20.184355 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 28 00:18:20.184571 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 28 00:18:20.201749 systemd[1]: Stopped target network.target - Network. Apr 28 00:18:20.206749 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 28 00:18:20.207532 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 28 00:18:20.221818 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 28 00:18:20.223970 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 28 00:18:20.231991 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 28 00:18:20.233222 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 28 00:18:20.240671 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 28 00:18:20.240812 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 28 00:18:20.247287 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 28 00:18:20.247703 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 28 00:18:20.264833 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 28 00:18:20.279625 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 28 00:18:20.293718 systemd-networkd[779]: eth0: DHCPv6 lease lost Apr 28 00:18:20.315881 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 28 00:18:20.316137 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 28 00:18:20.331468 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 28 00:18:20.331664 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 28 00:18:20.349254 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 28 00:18:20.349360 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:18:20.378542 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 28 00:18:20.381686 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 28 00:18:20.381820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:18:20.391639 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 00:18:20.391726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:18:20.403221 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 28 00:18:20.403598 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 28 00:18:20.416247 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 28 00:18:20.416872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:18:20.431289 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:18:20.502814 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Apr 28 00:18:20.505841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:18:20.517827 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 28 00:18:20.517999 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 28 00:18:20.525098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 28 00:18:20.525155 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:18:20.541474 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 28 00:18:20.541587 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:18:20.562250 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 28 00:18:20.562472 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 28 00:18:20.571068 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:18:20.571295 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:18:20.605300 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 28 00:18:20.615822 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 28 00:18:20.616020 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:18:20.622006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:18:20.622709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:20.634986 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 28 00:18:20.635196 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 28 00:18:20.644784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 28 00:18:20.647321 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 28 00:18:20.658056 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 28 00:18:20.680588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 28 00:18:20.709873 systemd[1]: Switching root. Apr 28 00:18:20.756367 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Apr 28 00:18:20.756501 systemd-journald[195]: Journal stopped Apr 28 00:18:23.963257 kernel: SELinux: policy capability network_peer_controls=1 Apr 28 00:18:23.964171 kernel: SELinux: policy capability open_perms=1 Apr 28 00:18:23.964196 kernel: SELinux: policy capability extended_socket_class=1 Apr 28 00:18:23.964217 kernel: SELinux: policy capability always_check_network=0 Apr 28 00:18:23.964229 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 28 00:18:23.964241 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 28 00:18:23.964251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 28 00:18:23.964263 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 28 00:18:23.964274 kernel: audit: type=1403 audit(1777335501.079:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 28 00:18:23.964288 systemd[1]: Successfully loaded SELinux policy in 93.135ms. Apr 28 00:18:23.964307 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 135.598ms. 
Apr 28 00:18:23.964323 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:18:23.964338 systemd[1]: Detected virtualization kvm. Apr 28 00:18:23.964351 systemd[1]: Detected architecture x86-64. Apr 28 00:18:23.964363 systemd[1]: Detected first boot. Apr 28 00:18:23.964376 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:18:23.964393 zram_generator::config[1054]: No configuration found. Apr 28 00:18:23.964409 systemd[1]: Populated /etc with preset unit settings. Apr 28 00:18:23.965840 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 28 00:18:23.966249 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 28 00:18:23.966269 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 28 00:18:23.966283 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 28 00:18:23.966296 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 28 00:18:23.966310 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 28 00:18:23.966322 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 28 00:18:23.966336 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 28 00:18:23.966353 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 28 00:18:23.966365 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 28 00:18:23.966381 systemd[1]: Created slice user.slice - User and Session Slice. Apr 28 00:18:23.966394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:18:23.966409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:18:23.966781 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 28 00:18:23.967144 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 28 00:18:23.967158 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 28 00:18:23.967171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:18:23.967184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 28 00:18:23.967197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:18:23.967210 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 28 00:18:23.967218 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 28 00:18:23.967226 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 28 00:18:23.967238 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 00:18:23.967245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:18:23.967254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:18:23.967264 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:18:23.967272 systemd[1]: Reached target swap.target - Swaps. 
Apr 28 00:18:23.967281 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 00:18:23.967289 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 00:18:23.967297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:18:23.967304 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:18:23.967312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:18:23.967320 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 28 00:18:23.967327 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 00:18:23.967335 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 00:18:23.967342 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 00:18:23.967355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:23.967363 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 00:18:23.967371 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 00:18:23.967379 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 28 00:18:23.967387 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 00:18:23.967395 systemd[1]: Reached target machines.target - Containers. Apr 28 00:18:23.967402 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 00:18:23.967410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:18:23.967419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:18:23.967473 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 00:18:23.967482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:18:23.967492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:18:23.967500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:18:23.967508 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 00:18:23.967516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:18:23.967524 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 00:18:23.967531 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 28 00:18:23.967541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 28 00:18:23.967548 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 28 00:18:23.967556 systemd[1]: Stopped systemd-fsck-usr.service. Apr 28 00:18:23.967565 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:18:23.967572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:18:23.967580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 28 00:18:23.967587 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Apr 28 00:18:23.967595 kernel: fuse: init (API version 7.39) Apr 28 00:18:23.967605 kernel: loop: module loaded Apr 28 00:18:23.967612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:18:23.967620 kernel: ACPI: bus type drm_connector registered Apr 28 00:18:23.967627 systemd[1]: verity-setup.service: Deactivated successfully. Apr 28 00:18:23.967634 systemd[1]: Stopped verity-setup.service. Apr 28 00:18:23.967642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:23.968518 systemd-journald[1135]: Collecting audit messages is disabled. Apr 28 00:18:23.968604 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 00:18:23.968614 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 00:18:23.968624 systemd-journald[1135]: Journal started Apr 28 00:18:23.968646 systemd-journald[1135]: Runtime Journal (/run/log/journal/32cd837a03624804813d64b92ed979dd) is 6.0M, max 48.4M, 42.3M free. Apr 28 00:18:22.427955 systemd[1]: Queued start job for default target multi-user.target. Apr 28 00:18:22.473178 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 28 00:18:22.473763 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 28 00:18:23.982141 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:18:23.988668 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 00:18:23.994608 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 28 00:18:24.001312 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 28 00:18:24.007713 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 00:18:24.013846 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 00:18:24.021747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:18:24.034733 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 00:18:24.039058 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 00:18:24.050405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:18:24.050691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:18:24.058653 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:18:24.060349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:18:24.068343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:18:24.069572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:18:24.076403 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 00:18:24.077592 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 28 00:18:24.083604 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:18:24.083853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:18:24.093704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:18:24.102752 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 28 00:18:24.113309 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
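The modprobe@ template units above pull in fuse, loop, dm_mod, drm, efi_pstore and configfs on demand, and the kernel lines confirm fuse and loop registering. A short sketch for checking the same from userspace:

```bash
lsmod | grep -Ew 'fuse|loop|dm_mod'                  # modules loaded via the modprobe@ units
systemctl status 'modprobe@fuse.service' --no-pager  # the oneshot unit that loaded fuse
```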
Apr 28 00:18:24.170689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:18:24.221705 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 00:18:24.252863 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 00:18:24.259195 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 00:18:24.266856 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 00:18:24.267038 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:18:24.275105 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 28 00:18:24.284324 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 28 00:18:24.296594 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 00:18:24.300102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:18:24.303384 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 00:18:24.323138 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 00:18:24.333805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:18:24.395708 systemd-journald[1135]: Time spent on flushing to /var/log/journal/32cd837a03624804813d64b92ed979dd is 67.917ms for 947 entries. Apr 28 00:18:24.395708 systemd-journald[1135]: System Journal (/var/log/journal/32cd837a03624804813d64b92ed979dd) is 8.0M, max 195.6M, 187.6M free. Apr 28 00:18:24.539665 systemd-journald[1135]: Received client request to flush runtime journal. Apr 28 00:18:24.407011 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 28 00:18:24.415530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:18:24.420609 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:18:24.452837 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 28 00:18:24.479630 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 00:18:24.508992 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 28 00:18:24.527690 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 00:18:24.540973 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 00:18:24.572343 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 28 00:18:24.582815 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 28 00:18:24.617784 kernel: loop0: detected capacity change from 0 to 219192 Apr 28 00:18:24.592722 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 28 00:18:24.663480 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 28 00:18:24.687278 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 28 00:18:24.711138 udevadm[1171]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 28 00:18:24.782295 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 00:18:24.823116 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:18:24.885291 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 28 00:18:24.872779 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 28 00:18:24.874494 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 28 00:18:24.940048 kernel: loop1: detected capacity change from 0 to 140768 Apr 28 00:18:24.940838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:18:25.159804 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Apr 28 00:18:25.160583 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Apr 28 00:18:25.320798 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:18:25.344155 kernel: loop2: detected capacity change from 0 to 142488 Apr 28 00:18:25.445062 kernel: loop3: detected capacity change from 0 to 219192 Apr 28 00:18:25.569504 kernel: loop4: detected capacity change from 0 to 140768 Apr 28 00:18:26.048492 kernel: loop5: detected capacity change from 0 to 142488 Apr 28 00:18:26.101519 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 28 00:18:26.102347 (sd-merge)[1193]: Merged extensions into '/usr'. Apr 28 00:18:26.265003 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1078499159 wd_nsec: 1078498888 Apr 28 00:18:26.277065 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Apr 28 00:18:26.277591 systemd[1]: Reloading... Apr 28 00:18:26.493015 zram_generator::config[1219]: No configuration found. Apr 28 00:18:27.305094 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 28 00:18:27.326511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:18:27.380829 systemd[1]: Reloading finished in 1102 ms. Apr 28 00:18:27.435774 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 28 00:18:27.442265 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 28 00:18:27.471434 systemd[1]: Starting ensure-sysext.service... Apr 28 00:18:27.478584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:18:27.500278 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Apr 28 00:18:27.500337 systemd[1]: Reloading... Apr 28 00:18:28.198664 zram_generator::config[1292]: No configuration found. Apr 28 00:18:28.222803 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 00:18:28.223242 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 28 00:18:28.224530 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 00:18:28.225494 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. 
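The (sd-merge) lines above are systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, which is consistent with the loop-device messages just before. To see the merged hierarchy on a running system (a sketch, not output from this log):

```bash
systemd-sysext status   # lists the extension images currently merged into /usr
ls /etc/extensions      # kubernetes.raw symlink written by the Ignition files stage earlier
```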
Apr 28 00:18:28.225536 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Apr 28 00:18:28.227648 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:18:28.227691 systemd-tmpfiles[1257]: Skipping /boot Apr 28 00:18:28.253390 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:18:28.253405 systemd-tmpfiles[1257]: Skipping /boot Apr 28 00:18:28.273150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:18:28.332671 systemd[1]: Reloading finished in 831 ms. Apr 28 00:18:28.353005 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:18:28.401394 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 00:18:28.415120 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 28 00:18:28.435631 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 28 00:18:28.476639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:18:28.485580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 28 00:18:28.493731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:28.493871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:18:28.502393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:18:28.513338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:18:28.528264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:18:28.533136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:18:28.533351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:28.534298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:18:28.534521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:18:28.546610 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 00:18:28.552076 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:18:28.552224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:18:28.562056 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 28 00:18:28.569264 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 00:18:28.576536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:18:28.576682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:18:28.593612 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:28.593771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 28 00:18:28.603883 augenrules[1351]: No rules Apr 28 00:18:28.604624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:18:28.615293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:18:28.622856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:18:28.627327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:18:28.627549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:28.631114 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 28 00:18:28.638211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 00:18:28.653555 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 00:18:28.661696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:18:28.662179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:18:28.667744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:18:28.668007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:18:28.673764 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:18:28.674271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:18:28.679845 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 00:18:28.707567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:28.707750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:18:28.720204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:18:28.720417 systemd-resolved[1333]: Positive Trust Anchors: Apr 28 00:18:28.720424 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:18:28.720449 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:18:28.728770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:18:28.734242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:18:28.734272 systemd-resolved[1333]: Defaulting to hostname 'linux'. Apr 28 00:18:28.742318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:18:28.747552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:18:28.749173 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
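systemd-resolved above loads the built-in DNSSEC root trust anchor (. IN DS 20326 8 2 ...) and the usual negative trust anchors for private ranges, then defaults the hostname to 'linux'. A brief sketch for inspecting the resolver state, assuming resolvectl is available:

```bash
resolvectl status --no-pager   # per-link DNS servers and DNSSEC configuration
resolvectl query flatcar.org   # hypothetical lookup through the stub resolver
```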
Apr 28 00:18:28.756256 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 00:18:28.760374 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:18:28.760428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:18:28.761188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:18:28.767766 systemd[1]: Finished ensure-sysext.service. Apr 28 00:18:28.772419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:18:28.772595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:18:28.779798 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:18:28.780029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:18:28.788346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:18:28.788608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:18:28.794441 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:18:28.804876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:18:28.807726 systemd-udevd[1374]: Using default interface naming scheme 'v255'. Apr 28 00:18:28.817602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:18:28.823505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:18:28.823621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:18:28.835091 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 00:18:28.840095 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 00:18:28.859756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:18:28.877958 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:18:28.931202 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 00:18:28.940201 systemd[1]: Reached target time-set.target - System Time Set. Apr 28 00:18:29.022057 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 28 00:18:29.036323 systemd-networkd[1387]: lo: Link UP Apr 28 00:18:29.036359 systemd-networkd[1387]: lo: Gained carrier Apr 28 00:18:29.039540 systemd-networkd[1387]: Enumeration completed Apr 28 00:18:29.039678 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:18:29.043951 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:18:29.044971 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:18:29.044997 systemd[1]: Reached target network.target - Network. 
Apr 28 00:18:29.051588 systemd-networkd[1387]: eth0: Link UP Apr 28 00:18:29.051621 systemd-networkd[1387]: eth0: Gained carrier Apr 28 00:18:29.051636 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:18:29.061596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 00:18:29.075451 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:18:29.085976 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1399) Apr 28 00:18:29.090427 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:18:29.090959 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Apr 28 00:18:29.092689 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 28 00:18:29.092768 systemd-timesyncd[1382]: Initial clock synchronization to Tue 2026-04-28 00:18:29.072276 UTC. Apr 28 00:18:29.678005 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 28 00:18:29.693289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:18:29.709595 kernel: ACPI: button: Power Button [PWRF] Apr 28 00:18:29.710297 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 28 00:18:29.772993 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 28 00:18:29.780100 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 00:18:29.806016 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 28 00:18:29.815212 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 28 00:18:29.816710 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 28 00:18:30.292562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:30.452977 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 00:18:30.779540 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 28 00:18:30.799154 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 00:18:30.836186 systemd-networkd[1387]: eth0: Gained IPv6LL Apr 28 00:18:30.837238 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:18:30.841426 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 00:18:31.198722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:31.210853 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 00:18:31.241377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:18:31.248297 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 00:18:31.258419 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:18:31.272117 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
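systemd-networkd above brings up eth0 with the default zz-default.network and acquires 10.0.0.11/16 via DHCP from 10.0.0.1, after which systemd-timesyncd synchronizes against 10.0.0.1:123. A small sketch to read the same state back at runtime (commands are illustrative assumptions):

```bash
networkctl status eth0 --no-pager   # shows the DHCPv4 lease 10.0.0.11/16 and gateway 10.0.0.1
timedatectl timesync-status         # NTP peer should be 10.0.0.1, as logged by timesyncd
```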
Apr 28 00:18:31.284600 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 28 00:18:31.299736 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 00:18:31.308594 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 00:18:31.324723 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 00:18:31.348390 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 00:18:31.348841 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:18:31.383674 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:18:31.398344 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 00:18:31.416368 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 28 00:18:31.441839 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 00:18:31.468550 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 28 00:18:31.477860 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 00:18:31.493028 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:18:31.505030 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:18:31.514480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:18:31.515052 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:18:31.515217 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:18:31.527793 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 00:18:31.541374 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 00:18:31.552037 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 28 00:18:31.562339 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 00:18:31.583654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 00:18:31.589266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 28 00:18:31.595620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:18:31.615639 jq[1436]: false Apr 28 00:18:31.624228 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Apr 28 00:18:31.644223 extend-filesystems[1437]: Found loop3 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found loop4 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found loop5 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found sr0 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda1 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda2 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda3 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found usr Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda4 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda6 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda7 Apr 28 00:18:31.644223 extend-filesystems[1437]: Found vda9 Apr 28 00:18:31.644223 extend-filesystems[1437]: Checking size of /dev/vda9 Apr 28 00:18:31.838392 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 28 00:18:31.761853 dbus-daemon[1435]: [system] SELinux support is enabled Apr 28 00:18:31.852109 extend-filesystems[1437]: Resized partition /dev/vda9 Apr 28 00:18:31.702992 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 00:18:31.858554 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Apr 28 00:18:31.869661 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 28 00:18:31.710233 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 28 00:18:31.732222 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 00:18:31.762497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 28 00:18:31.971707 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 00:18:31.971707 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 28 00:18:31.971707 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 28 00:18:32.041077 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1392) Apr 28 00:18:31.824223 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 28 00:18:32.041837 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Apr 28 00:18:31.858658 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 28 00:18:31.869744 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 28 00:18:31.901494 systemd[1]: Starting update-engine.service - Update Engine... Apr 28 00:18:31.984179 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 28 00:18:32.009410 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 28 00:18:32.034519 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 28 00:18:32.100719 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 28 00:18:32.104354 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 28 00:18:32.105641 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 00:18:32.105854 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 00:18:32.203519 jq[1466]: true Apr 28 00:18:32.218199 systemd[1]: motdgen.service: Deactivated successfully. 
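extend-filesystems above grows the root filesystem on /dev/vda9 online: resize2fs takes it from 553472 to 1864699 blocks of 4 KiB, roughly 2.1 GiB to 7.1 GiB. The effect is equivalent to the following sketch run against a mounted ext4 root (not a transcript of what the service executed):

```bash
# Online ext4 growth; the numbers correspond to the resize2fs output logged above.
sudo resize2fs /dev/vda9   # 553472 -> 1864699 blocks of 4 KiB (~2.1 GiB -> ~7.1 GiB)
df -h /                    # confirm the enlarged root filesystem
```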
Apr 28 00:18:32.218809 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 28 00:18:32.244838 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 00:18:32.284361 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 28 00:18:32.297865 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 28 00:18:32.382175 update_engine[1464]: I20260428 00:18:32.378247 1464 main.cc:92] Flatcar Update Engine starting Apr 28 00:18:32.389804 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 28 00:18:32.397667 update_engine[1464]: I20260428 00:18:32.391280 1464 update_check_scheduler.cc:74] Next update check in 11m28s Apr 28 00:18:32.398754 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Apr 28 00:18:32.398803 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 28 00:18:32.415589 systemd-logind[1457]: New seat seat0. Apr 28 00:18:32.497196 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 28 00:18:32.497260 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 28 00:18:32.522781 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 28 00:18:32.524477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 28 00:18:32.541753 systemd[1]: Started systemd-logind.service - User Login Management. Apr 28 00:18:32.579181 systemd[1]: Started update-engine.service - Update Engine. Apr 28 00:18:32.640793 tar[1471]: linux-amd64/LICENSE Apr 28 00:18:32.640793 tar[1471]: linux-amd64/helm Apr 28 00:18:32.640260 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 28 00:18:32.649800 jq[1472]: true Apr 28 00:18:32.678318 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 00:18:32.810419 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 00:18:32.813698 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 00:18:32.826818 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 00:18:32.857587 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 00:18:33.523774 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 00:18:33.641202 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 00:18:33.649384 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 00:18:33.849502 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 00:18:33.876111 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 00:18:33.928001 bash[1522]: Updated "/home/core/.ssh/authorized_keys" Apr 28 00:18:33.963291 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 00:18:33.975509 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
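update_engine above schedules its first update check in 11m28s and locksmithd starts with the reboot strategy "reboot". On Flatcar these are normally queried with the client tools sketched below (tool names are assumptions, not taken from the log):

```bash
update_engine_client -status   # current update-engine operation and version information
locksmithctl status            # reboot-manager strategy and any held locks
```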
Apr 28 00:18:34.588539 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 00:18:34.650983 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 00:18:34.677792 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 00:18:34.690694 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 00:18:36.668409 containerd[1473]: time="2026-04-28T00:18:36.665432424Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 00:18:37.289583 containerd[1473]: time="2026-04-28T00:18:37.287238700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:18:37.316685 containerd[1473]: time="2026-04-28T00:18:37.315963566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:18:37.319643 containerd[1473]: time="2026-04-28T00:18:37.319438755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 28 00:18:37.319866 containerd[1473]: time="2026-04-28T00:18:37.319810984Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 00:18:37.320469 containerd[1473]: time="2026-04-28T00:18:37.320410662Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 28 00:18:37.320592 containerd[1473]: time="2026-04-28T00:18:37.320542366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 00:18:37.320997 containerd[1473]: time="2026-04-28T00:18:37.320871776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:18:37.320997 containerd[1473]: time="2026-04-28T00:18:37.320987377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:18:37.321584 containerd[1473]: time="2026-04-28T00:18:37.321498792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:18:37.321584 containerd[1473]: time="2026-04-28T00:18:37.321558071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 28 00:18:37.321584 containerd[1473]: time="2026-04-28T00:18:37.321574917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:18:37.321630 containerd[1473]: time="2026-04-28T00:18:37.321587580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 28 00:18:37.321967 containerd[1473]: time="2026-04-28T00:18:37.321815738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:18:37.322776 containerd[1473]: time="2026-04-28T00:18:37.322637819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 28 00:18:37.323046 containerd[1473]: time="2026-04-28T00:18:37.322965029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:18:37.323046 containerd[1473]: time="2026-04-28T00:18:37.323024004Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 28 00:18:37.323223 containerd[1473]: time="2026-04-28T00:18:37.323161275Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 28 00:18:37.323385 containerd[1473]: time="2026-04-28T00:18:37.323277967Z" level=info msg="metadata content store policy set" policy=shared Apr 28 00:18:37.368437 containerd[1473]: time="2026-04-28T00:18:37.368058937Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 00:18:37.369074 containerd[1473]: time="2026-04-28T00:18:37.368640071Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 28 00:18:37.369074 containerd[1473]: time="2026-04-28T00:18:37.368669958Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 28 00:18:37.369074 containerd[1473]: time="2026-04-28T00:18:37.368681652Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 00:18:37.369074 containerd[1473]: time="2026-04-28T00:18:37.368711544Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 00:18:37.371459 containerd[1473]: time="2026-04-28T00:18:37.369464460Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 28 00:18:37.372564 containerd[1473]: time="2026-04-28T00:18:37.372516556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373051606Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373070563Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373094395Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373107815Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373160701Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373177544Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373192944Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373210035Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373224981Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373238946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373252523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 00:18:37.373534 containerd[1473]: time="2026-04-28T00:18:37.373532254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373553958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373570171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373585651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373666621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373684594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373700978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373716935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373727398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373739363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373748335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373757378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373767597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373821973Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373955826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 28 00:18:37.373998 containerd[1473]: time="2026-04-28T00:18:37.373973338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.373988522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374178463Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374242186Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374252883Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374262882Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374269922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374278584Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374431586Z" level=info msg="NRI interface is disabled by configuration." Apr 28 00:18:37.374605 containerd[1473]: time="2026-04-28T00:18:37.374466917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 28 00:18:37.375572 containerd[1473]: time="2026-04-28T00:18:37.375450594Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 00:18:37.376391 containerd[1473]: time="2026-04-28T00:18:37.375589347Z" level=info msg="Connect containerd service" Apr 28 00:18:37.376391 containerd[1473]: time="2026-04-28T00:18:37.375792148Z" level=info msg="using legacy CRI server" Apr 28 00:18:37.376391 containerd[1473]: time="2026-04-28T00:18:37.375801495Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 00:18:37.376840 containerd[1473]: time="2026-04-28T00:18:37.376765775Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 00:18:37.381246 containerd[1473]: time="2026-04-28T00:18:37.381062423Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 00:18:37.382643 
containerd[1473]: time="2026-04-28T00:18:37.382548174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 00:18:37.382761 containerd[1473]: time="2026-04-28T00:18:37.382694117Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 00:18:37.387029 containerd[1473]: time="2026-04-28T00:18:37.386448566Z" level=info msg="Start subscribing containerd event" Apr 28 00:18:37.388055 containerd[1473]: time="2026-04-28T00:18:37.388030705Z" level=info msg="Start recovering state" Apr 28 00:18:37.389978 containerd[1473]: time="2026-04-28T00:18:37.388604627Z" level=info msg="Start event monitor" Apr 28 00:18:37.389978 containerd[1473]: time="2026-04-28T00:18:37.388723478Z" level=info msg="Start snapshots syncer" Apr 28 00:18:37.389978 containerd[1473]: time="2026-04-28T00:18:37.388770109Z" level=info msg="Start cni network conf syncer for default" Apr 28 00:18:37.389978 containerd[1473]: time="2026-04-28T00:18:37.388814473Z" level=info msg="Start streaming server" Apr 28 00:18:37.389978 containerd[1473]: time="2026-04-28T00:18:37.389187952Z" level=info msg="containerd successfully booted in 0.754088s" Apr 28 00:18:37.389831 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 00:18:37.876754 tar[1471]: linux-amd64/README.md Apr 28 00:18:37.898808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 00:18:40.369222 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 00:18:40.387808 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:54276.service - OpenSSH per-connection server daemon (10.0.0.1:54276). Apr 28 00:18:40.777055 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 54276 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:18:40.787425 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:40.880879 systemd-logind[1457]: New session 1 of user core. Apr 28 00:18:40.886260 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 00:18:40.908478 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 00:18:41.119245 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 00:18:41.625032 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 00:18:41.656670 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 00:18:41.852533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:18:41.852680 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:18:41.853633 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 00:18:41.979593 systemd[1551]: Queued start job for default target default.target. Apr 28 00:18:41.998547 systemd[1551]: Created slice app.slice - User Application Slice. Apr 28 00:18:41.998637 systemd[1551]: Reached target paths.target - Paths. Apr 28 00:18:41.998652 systemd[1551]: Reached target timers.target - Timers. Apr 28 00:18:42.019705 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 00:18:42.270470 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 00:18:42.270601 systemd[1551]: Reached target sockets.target - Sockets. Apr 28 00:18:42.270612 systemd[1551]: Reached target basic.target - Basic System. 
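The CRI plugin config dumped a few entries back also accounts for the "failed to load cni during init" error above: the plugin looks for network config in /etc/cni/net.d, which is still empty this early in boot, and containerd keeps its conf syncer running until something populates that directory. Below is a small illustrative summary of the settings most relevant to the kubelet lines later in this log; the dict is a paraphrase for readability, not containerd's config.toml format.

# Settings picked out of the CRI plugin config dump above, paraphrased as a
# plain dict for readability (containerd itself is configured via config.toml).
cri_summary = {
    "snapshotter": "overlayfs",
    "default_runtime": "runc",
    "runc_options": {"SystemdCgroup": True},  # pairs with the systemd cgroup driver kubelet reports later
    "sandbox_image": "registry.k8s.io/pause:3.8",
    "cni_conf_dir": "/etc/cni/net.d",   # empty at this point -> "cni plugin not initialized"
    "cni_bin_dir": "/opt/cni/bin",
}

for key, value in cri_summary.items():
    print(f"{key}: {value}")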
Apr 28 00:18:42.270641 systemd[1551]: Reached target default.target - Main User Target. Apr 28 00:18:42.270692 systemd[1551]: Startup finished in 578ms. Apr 28 00:18:42.271195 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 00:18:42.284301 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 00:18:42.289537 systemd[1]: Startup finished in 2.689s (kernel) + 15.213s (initrd) + 21.295s (userspace) = 39.198s. Apr 28 00:18:42.428660 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:54288.service - OpenSSH per-connection server daemon (10.0.0.1:54288). Apr 28 00:18:42.593557 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:18:42.601706 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:42.640146 systemd-logind[1457]: New session 2 of user core. Apr 28 00:18:42.686190 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 00:18:42.803047 sshd[1573]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:42.841610 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:54288.service: Deactivated successfully. Apr 28 00:18:42.852295 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 00:18:42.865234 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Apr 28 00:18:42.881870 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:54302.service - OpenSSH per-connection server daemon (10.0.0.1:54302). Apr 28 00:18:42.889224 systemd-logind[1457]: Removed session 2. Apr 28 00:18:43.161410 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 54302 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:18:43.292336 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:43.345005 systemd-logind[1457]: New session 3 of user core. Apr 28 00:18:43.363583 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 00:18:43.494749 sshd[1580]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:43.997669 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:54302.service: Deactivated successfully. Apr 28 00:18:44.008880 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 00:18:44.015740 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Apr 28 00:18:44.025978 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:54316.service - OpenSSH per-connection server daemon (10.0.0.1:54316). Apr 28 00:18:44.027463 systemd-logind[1457]: Removed session 3. Apr 28 00:18:44.242043 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 54316 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:18:44.247318 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:44.275870 systemd-logind[1457]: New session 4 of user core. Apr 28 00:18:44.292015 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 00:18:44.713141 sshd[1588]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:44.741472 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:54316.service: Deactivated successfully. Apr 28 00:18:44.746556 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 00:18:44.748626 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Apr 28 00:18:44.766508 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:54320.service - OpenSSH per-connection server daemon (10.0.0.1:54320). 
Apr 28 00:18:44.770506 systemd-logind[1457]: Removed session 4. Apr 28 00:18:44.852423 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 54320 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:18:44.859705 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:44.910863 systemd-logind[1457]: New session 5 of user core. Apr 28 00:18:44.943668 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 00:18:45.125838 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 00:18:45.126505 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:18:45.188631 kubelet[1559]: E0428 00:18:45.188091 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:18:45.191533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:18:45.191739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:18:45.192245 systemd[1]: kubelet.service: Consumed 8.928s CPU time. Apr 28 00:18:49.884306 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 00:18:49.912124 (dockerd)[1619]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 00:18:53.483263 dockerd[1619]: time="2026-04-28T00:18:53.482428497Z" level=info msg="Starting up" Apr 28 00:18:53.728666 dockerd[1619]: time="2026-04-28T00:18:53.727600459Z" level=info msg="Loading containers: start." Apr 28 00:18:54.128244 kernel: Initializing XFRM netlink socket Apr 28 00:18:54.330733 systemd-networkd[1387]: docker0: Link UP Apr 28 00:18:54.424376 dockerd[1619]: time="2026-04-28T00:18:54.423515410Z" level=info msg="Loading containers: done." Apr 28 00:18:54.456768 dockerd[1619]: time="2026-04-28T00:18:54.456334836Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 00:18:54.457548 dockerd[1619]: time="2026-04-28T00:18:54.457088114Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 00:18:54.457548 dockerd[1619]: time="2026-04-28T00:18:54.457438080Z" level=info msg="Daemon has completed initialization" Apr 28 00:18:54.542641 dockerd[1619]: time="2026-04-28T00:18:54.542362784Z" level=info msg="API listen on /run/docker.sock" Apr 28 00:18:54.546833 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 00:18:55.242164 containerd[1473]: time="2026-04-28T00:18:55.241467112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 28 00:18:55.371412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 00:18:55.383134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:18:55.951745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49888008.mount: Deactivated successfully. Apr 28 00:18:56.218715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
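dockerd's "Not using native diff for overlay2" warning above is driven purely by a kernel build option. If the running kernel exposes its config (the /proc/config.gz path below is an assumption; it depends on how the kernel was built), the option can be confirmed directly:

# Confirm the kernel option behind dockerd's "Not using native diff" warning.
# Assumes the kernel config is exposed at /proc/config.gz, which depends on
# how the kernel was built.
import gzip

with gzip.open("/proc/config.gz", "rt") as cfg:
    for line in cfg:
        if line.startswith("CONFIG_OVERLAY_FS_REDIRECT_DIR"):
            print(line.strip())  # expect "=y" given the warning above
            break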
Apr 28 00:18:56.232004 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:18:56.373105 kubelet[1789]: E0428 00:18:56.371119 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:18:56.375782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:18:56.376115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:18:57.867620 containerd[1473]: time="2026-04-28T00:18:57.867281172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:18:57.870617 containerd[1473]: time="2026-04-28T00:18:57.870492357Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 28 00:18:57.872431 containerd[1473]: time="2026-04-28T00:18:57.872349170Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:18:57.897955 containerd[1473]: time="2026-04-28T00:18:57.897505624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:18:57.907745 containerd[1473]: time="2026-04-28T00:18:57.907395129Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.665508931s" Apr 28 00:18:57.907745 containerd[1473]: time="2026-04-28T00:18:57.907557151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 28 00:18:57.909333 containerd[1473]: time="2026-04-28T00:18:57.909221517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 28 00:19:00.985983 containerd[1473]: time="2026-04-28T00:19:00.983458807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:00.989084 containerd[1473]: time="2026-04-28T00:19:00.988205058Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 28 00:19:01.005621 containerd[1473]: time="2026-04-28T00:19:01.005189777Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:01.400438 containerd[1473]: time="2026-04-28T00:19:01.399722961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 
00:19:01.401681 containerd[1473]: time="2026-04-28T00:19:01.401614756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 3.492339296s" Apr 28 00:19:01.401827 containerd[1473]: time="2026-04-28T00:19:01.401692516Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 28 00:19:01.404197 containerd[1473]: time="2026-04-28T00:19:01.404086132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 28 00:19:03.130677 containerd[1473]: time="2026-04-28T00:19:03.130314776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:03.133944 containerd[1473]: time="2026-04-28T00:19:03.133750884Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 28 00:19:03.140054 containerd[1473]: time="2026-04-28T00:19:03.139794255Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:03.147616 containerd[1473]: time="2026-04-28T00:19:03.147370390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:03.149592 containerd[1473]: time="2026-04-28T00:19:03.149511371Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.745375329s" Apr 28 00:19:03.149714 containerd[1473]: time="2026-04-28T00:19:03.149636058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 28 00:19:03.151425 containerd[1473]: time="2026-04-28T00:19:03.151359797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 28 00:19:05.581110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173116459.mount: Deactivated successfully. 
Apr 28 00:19:06.326440 containerd[1473]: time="2026-04-28T00:19:06.326118147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:06.330624 containerd[1473]: time="2026-04-28T00:19:06.328702530Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 28 00:19:06.331754 containerd[1473]: time="2026-04-28T00:19:06.331682134Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:06.334227 containerd[1473]: time="2026-04-28T00:19:06.334171217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:06.335805 containerd[1473]: time="2026-04-28T00:19:06.335082278Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 3.18366853s" Apr 28 00:19:06.335805 containerd[1473]: time="2026-04-28T00:19:06.335143221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 28 00:19:06.339125 containerd[1473]: time="2026-04-28T00:19:06.338793757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 28 00:19:06.666314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 00:19:06.674274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:19:06.963875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:19:06.997720 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:19:07.201376 kubelet[1867]: E0428 00:19:07.200857 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:19:07.205384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:19:07.205544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:19:07.243091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404919957.mount: Deactivated successfully. 
Apr 28 00:19:11.100113 containerd[1473]: time="2026-04-28T00:19:11.099607362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:11.108115 containerd[1473]: time="2026-04-28T00:19:11.105154320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 28 00:19:11.120437 containerd[1473]: time="2026-04-28T00:19:11.119869720Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:11.127833 containerd[1473]: time="2026-04-28T00:19:11.127723888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:11.128976 containerd[1473]: time="2026-04-28T00:19:11.128833656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.789848787s" Apr 28 00:19:11.128976 containerd[1473]: time="2026-04-28T00:19:11.128937771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 28 00:19:11.134970 containerd[1473]: time="2026-04-28T00:19:11.133859332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 28 00:19:12.913505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999151885.mount: Deactivated successfully. 
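The pull messages above report both a duration and a "bytes read" figure, which gives a rough effective transfer rate per image. Treating "bytes read" as the bytes actually transferred and the quoted duration as pure transfer time (both assumptions; unpacking overlaps with the download), two of the pulls work out as follows:

# Rough effective pull rates from the containerd messages above.
# Assumes "bytes read" is the transferred size and the quoted duration is
# all transfer time; durations are rounded from the log.
pulls = {
    "kube-proxy:v1.34.7":      (25_972_848, 3.184),  # bytes read, seconds
    "coredns/coredns:v1.12.1": (22_387_483, 4.790),
}

for image, (nbytes, seconds) in pulls.items():
    print(f"{image}: {nbytes / seconds / 2**20:.1f} MiB/s")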
Apr 28 00:19:12.935690 containerd[1473]: time="2026-04-28T00:19:12.934470232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:12.937303 containerd[1473]: time="2026-04-28T00:19:12.936627008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 28 00:19:12.983670 containerd[1473]: time="2026-04-28T00:19:12.977499281Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:13.102191 containerd[1473]: time="2026-04-28T00:19:13.101774444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:13.106075 containerd[1473]: time="2026-04-28T00:19:13.105614827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.970727796s" Apr 28 00:19:13.106075 containerd[1473]: time="2026-04-28T00:19:13.105831193Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 28 00:19:13.108051 containerd[1473]: time="2026-04-28T00:19:13.107474153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 28 00:19:15.046786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430867738.mount: Deactivated successfully. Apr 28 00:19:17.378467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 28 00:19:17.397748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:19:18.112510 update_engine[1464]: I20260428 00:19:18.110790 1464 update_attempter.cc:509] Updating boot flags... Apr 28 00:19:18.166195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:19:18.170308 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:19:18.348046 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1999) Apr 28 00:19:18.440212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1995) Apr 28 00:19:18.716202 kubelet[1997]: E0428 00:19:18.715477 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:19:18.728229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:19:18.728460 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
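The kubelet failures at restart counters 1 through 3 above are all the same condition: /var/lib/kubelet/config.yaml does not exist yet, so each start exits with status 1 and systemd schedules the next attempt. The expectation that a later bootstrap step writes the file (plausibly the /home/core/install.sh run under sudo earlier in this log) is an assumption about this provisioning flow, not something the log has shown yet. A minimal sketch of the failing check:

# The condition kubelet keeps exiting on in the entries above: its config
# file has not been written yet. Path taken from the log; the assumption is
# that a later bootstrap step creates it.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    # Matches the run.go:72 error and the status=1/FAILURE exit in the log.
    raise SystemExit(f"kubelet config missing: {KUBELET_CONFIG}")
print("kubelet config present")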
Apr 28 00:19:21.153853 containerd[1473]: time="2026-04-28T00:19:21.151863991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:21.174437 containerd[1473]: time="2026-04-28T00:19:21.170218808Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 28 00:19:21.280070 containerd[1473]: time="2026-04-28T00:19:21.278456444Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:21.350041 containerd[1473]: time="2026-04-28T00:19:21.349433830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:19:21.422241 containerd[1473]: time="2026-04-28T00:19:21.420850268Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 8.31330219s" Apr 28 00:19:21.422241 containerd[1473]: time="2026-04-28T00:19:21.421413121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 28 00:19:28.616608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:19:28.656196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:19:29.055645 systemd[1]: Reloading requested from client PID 2058 ('systemctl') (unit session-5.scope)... Apr 28 00:19:29.055839 systemd[1]: Reloading... Apr 28 00:19:29.848133 zram_generator::config[2093]: No configuration found. Apr 28 00:19:30.731482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:19:30.839806 systemd[1]: Reloading finished in 1783 ms. Apr 28 00:19:31.029944 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:19:31.030259 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:19:31.030713 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:19:31.031065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:19:31.034161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:19:32.249119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:19:32.288716 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:19:32.954144 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:19:32.954144 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 28 00:19:32.956268 kubelet[2147]: I0428 00:19:32.955194 2147 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:19:33.386008 kubelet[2147]: I0428 00:19:33.384664 2147 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 00:19:33.387968 kubelet[2147]: I0428 00:19:33.386411 2147 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:19:33.387968 kubelet[2147]: I0428 00:19:33.387273 2147 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:19:33.387968 kubelet[2147]: I0428 00:19:33.387335 2147 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 00:19:33.388637 kubelet[2147]: I0428 00:19:33.388524 2147 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:19:33.465568 kubelet[2147]: E0428 00:19:33.465127 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:19:33.466711 kubelet[2147]: I0428 00:19:33.465747 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:19:33.550116 kubelet[2147]: E0428 00:19:33.548170 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 00:19:33.551830 kubelet[2147]: I0428 00:19:33.550512 2147 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 00:19:33.558256 kubelet[2147]: I0428 00:19:33.558121 2147 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 00:19:33.559721 kubelet[2147]: I0428 00:19:33.559588 2147 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:19:33.560135 kubelet[2147]: I0428 00:19:33.559651 2147 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:19:33.560474 kubelet[2147]: I0428 00:19:33.560215 2147 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 00:19:33.560474 kubelet[2147]: I0428 00:19:33.560224 2147 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 00:19:33.560535 kubelet[2147]: I0428 00:19:33.560505 2147 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:19:33.566765 kubelet[2147]: I0428 00:19:33.566425 2147 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:19:33.567996 kubelet[2147]: I0428 00:19:33.567385 2147 kubelet.go:475] "Attempting to sync node with API server" Apr 28 00:19:33.567996 kubelet[2147]: I0428 00:19:33.567420 2147 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:19:33.567996 kubelet[2147]: I0428 00:19:33.567700 2147 kubelet.go:387] "Adding apiserver pod source" Apr 28 00:19:33.567996 kubelet[2147]: I0428 00:19:33.567823 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:19:33.576052 kubelet[2147]: E0428 00:19:33.572566 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:19:33.576052 kubelet[2147]: E0428 00:19:33.572796 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:19:33.577090 kubelet[2147]: I0428 00:19:33.577065 2147 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 00:19:33.581565 kubelet[2147]: I0428 00:19:33.581023 2147 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:19:33.581565 kubelet[2147]: I0428 00:19:33.581237 2147 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:19:33.582557 kubelet[2147]: W0428 00:19:33.582417 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 00:19:33.596708 kubelet[2147]: I0428 00:19:33.596306 2147 server.go:1262] "Started kubelet" Apr 28 00:19:33.600971 kubelet[2147]: I0428 00:19:33.599694 2147 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:19:33.600971 kubelet[2147]: I0428 00:19:33.599935 2147 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:19:33.600971 kubelet[2147]: I0428 00:19:33.599734 2147 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:19:33.600971 kubelet[2147]: I0428 00:19:33.600554 2147 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:19:33.600971 kubelet[2147]: I0428 00:19:33.600833 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:19:33.605179 kubelet[2147]: I0428 00:19:33.604624 2147 server.go:310] "Adding debug handlers to kubelet server" Apr 28 00:19:33.605179 kubelet[2147]: I0428 00:19:33.604724 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:19:33.605622 kubelet[2147]: I0428 00:19:33.605569 2147 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 00:19:33.605809 kubelet[2147]: I0428 00:19:33.605762 2147 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:19:33.605809 kubelet[2147]: E0428 00:19:33.604280 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.11:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5d4996cf8648 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:19:33.595260488 +0000 UTC m=+1.214082116,LastTimestamp:2026-04-28 00:19:33.595260488 +0000 UTC m=+1.214082116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:19:33.612825 kubelet[2147]: I0428 00:19:33.608682 2147 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:19:33.612825 kubelet[2147]: E0428 00:19:33.609306 2147 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not 
found" Apr 28 00:19:33.613439 kubelet[2147]: E0428 00:19:33.613310 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:19:33.622037 kubelet[2147]: E0428 00:19:33.620606 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="200ms" Apr 28 00:19:33.622037 kubelet[2147]: I0428 00:19:33.620709 2147 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:19:33.622037 kubelet[2147]: I0428 00:19:33.621347 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:19:33.629697 kubelet[2147]: I0428 00:19:33.629574 2147 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:19:33.634610 kubelet[2147]: E0428 00:19:33.634136 2147 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:19:33.661402 kubelet[2147]: I0428 00:19:33.661082 2147 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 00:19:33.668387 kubelet[2147]: I0428 00:19:33.668149 2147 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 00:19:33.668387 kubelet[2147]: I0428 00:19:33.668330 2147 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:19:33.670120 kubelet[2147]: I0428 00:19:33.668606 2147 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:19:33.671674 kubelet[2147]: E0428 00:19:33.668732 2147 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:19:33.672947 kubelet[2147]: E0428 00:19:33.672340 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:19:33.686270 kubelet[2147]: I0428 00:19:33.685817 2147 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:19:33.686270 kubelet[2147]: I0428 00:19:33.685834 2147 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:19:33.686270 kubelet[2147]: I0428 00:19:33.685856 2147 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:19:33.690057 kubelet[2147]: I0428 00:19:33.689819 2147 policy_none.go:49] "None policy: Start" Apr 28 00:19:33.690057 kubelet[2147]: I0428 00:19:33.690033 2147 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:19:33.690057 kubelet[2147]: I0428 00:19:33.690108 2147 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:19:33.696001 kubelet[2147]: I0428 00:19:33.695622 2147 policy_none.go:47] "Start" Apr 28 00:19:33.705212 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Apr 28 00:19:33.718098 kubelet[2147]: E0428 00:19:33.711657 2147 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:19:33.726043 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 28 00:19:33.735103 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 28 00:19:33.755659 kubelet[2147]: E0428 00:19:33.755413 2147 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:19:33.756222 kubelet[2147]: I0428 00:19:33.756043 2147 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:19:33.756222 kubelet[2147]: I0428 00:19:33.756076 2147 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:19:33.756798 kubelet[2147]: I0428 00:19:33.756740 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:19:33.759517 kubelet[2147]: E0428 00:19:33.759494 2147 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:19:33.759690 kubelet[2147]: E0428 00:19:33.759651 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:19:33.810402 systemd[1]: Created slice kubepods-burstable-poda646d41a511ed3aa4e8f9816f82de57d.slice - libcontainer container kubepods-burstable-poda646d41a511ed3aa4e8f9816f82de57d.slice. Apr 28 00:19:33.811476 kubelet[2147]: I0428 00:19:33.811417 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:33.811476 kubelet[2147]: I0428 00:19:33.811460 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:33.811618 kubelet[2147]: I0428 00:19:33.811491 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:33.811618 kubelet[2147]: I0428 00:19:33.811565 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:33.811696 kubelet[2147]: I0428 00:19:33.811608 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:33.812435 kubelet[2147]: I0428 00:19:33.811956 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a646d41a511ed3aa4e8f9816f82de57d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a646d41a511ed3aa4e8f9816f82de57d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:19:33.812435 kubelet[2147]: I0428 00:19:33.812258 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a646d41a511ed3aa4e8f9816f82de57d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a646d41a511ed3aa4e8f9816f82de57d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:19:33.812725 kubelet[2147]: I0428 00:19:33.812519 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a646d41a511ed3aa4e8f9816f82de57d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a646d41a511ed3aa4e8f9816f82de57d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:19:33.812725 kubelet[2147]: I0428 00:19:33.812577 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:19:33.821744 kubelet[2147]: E0428 00:19:33.821589 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="400ms" Apr 28 00:19:33.829840 kubelet[2147]: E0428 00:19:33.829660 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:33.832567 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 28 00:19:33.834513 kubelet[2147]: E0428 00:19:33.834461 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:33.838668 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 28 00:19:33.843400 kubelet[2147]: E0428 00:19:33.843116 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:33.868663 kubelet[2147]: I0428 00:19:33.868493 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:19:33.869407 kubelet[2147]: E0428 00:19:33.869217 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Apr 28 00:19:33.884182 kubelet[2147]: E0428 00:19:33.883655 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.11:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5d4996cf8648 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:19:33.595260488 +0000 UTC m=+1.214082116,LastTimestamp:2026-04-28 00:19:33.595260488 +0000 UTC m=+1.214082116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:19:34.079772 kubelet[2147]: I0428 00:19:34.079055 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:19:34.079772 kubelet[2147]: E0428 00:19:34.079684 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Apr 28 00:19:34.139725 kubelet[2147]: E0428 00:19:34.139444 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:34.147372 kubelet[2147]: E0428 00:19:34.147185 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:34.149562 kubelet[2147]: E0428 00:19:34.149519 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:34.150835 containerd[1473]: time="2026-04-28T00:19:34.150811503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 28 00:19:34.152035 containerd[1473]: time="2026-04-28T00:19:34.150816324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a646d41a511ed3aa4e8f9816f82de57d,Namespace:kube-system,Attempt:0,}" Apr 28 00:19:34.152035 containerd[1473]: time="2026-04-28T00:19:34.150843208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 28 00:19:34.225193 kubelet[2147]: E0428 00:19:34.225026 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="800ms" Apr 28 00:19:34.505643 kubelet[2147]: I0428 00:19:34.505263 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:19:34.508248 kubelet[2147]: E0428 00:19:34.508043 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Apr 28 00:19:34.671036 kubelet[2147]: E0428 00:19:34.670668 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:19:34.764049 kubelet[2147]: E0428 00:19:34.763446 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:19:34.782705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427046749.mount: Deactivated successfully. Apr 28 00:19:34.796467 kubelet[2147]: E0428 00:19:34.796243 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:19:34.798771 containerd[1473]: time="2026-04-28T00:19:34.798683425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:19:34.800023 containerd[1473]: time="2026-04-28T00:19:34.799868221Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:19:34.800791 containerd[1473]: time="2026-04-28T00:19:34.800744290Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:19:34.801728 containerd[1473]: time="2026-04-28T00:19:34.801672760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 00:19:34.802590 containerd[1473]: time="2026-04-28T00:19:34.802554077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:19:34.803403 containerd[1473]: time="2026-04-28T00:19:34.803345587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:19:34.804186 containerd[1473]: time="2026-04-28T00:19:34.804142047Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:19:34.815351 
containerd[1473]: time="2026-04-28T00:19:34.815009139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:19:34.816115 containerd[1473]: time="2026-04-28T00:19:34.815674963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 663.961395ms" Apr 28 00:19:34.816316 containerd[1473]: time="2026-04-28T00:19:34.816224257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.667934ms" Apr 28 00:19:34.821229 containerd[1473]: time="2026-04-28T00:19:34.821059687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.524421ms" Apr 28 00:19:35.031149 kubelet[2147]: E0428 00:19:35.030217 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="1.6s" Apr 28 00:19:35.108750 kubelet[2147]: E0428 00:19:35.108486 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:19:35.322592 kubelet[2147]: I0428 00:19:35.321774 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:19:35.328527 kubelet[2147]: E0428 00:19:35.322664 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Apr 28 00:19:35.546815 containerd[1473]: time="2026-04-28T00:19:35.545165914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:19:35.546815 containerd[1473]: time="2026-04-28T00:19:35.545693824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:19:35.546815 containerd[1473]: time="2026-04-28T00:19:35.545708218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:19:35.546815 containerd[1473]: time="2026-04-28T00:19:35.546158192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:19:35.579608 containerd[1473]: time="2026-04-28T00:19:35.576469435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:19:35.583688 containerd[1473]: time="2026-04-28T00:19:35.576787287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:19:35.583688 containerd[1473]: time="2026-04-28T00:19:35.576869462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:19:35.583688 containerd[1473]: time="2026-04-28T00:19:35.576960958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:19:35.583688 containerd[1473]: time="2026-04-28T00:19:35.582757109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:19:35.583688 containerd[1473]: time="2026-04-28T00:19:35.583443745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:19:35.589658 containerd[1473]: time="2026-04-28T00:19:35.584461591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:19:35.595258 containerd[1473]: time="2026-04-28T00:19:35.593782098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:19:35.687451 kubelet[2147]: E0428 00:19:35.686852 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:19:35.786382 systemd[1]: Started cri-containerd-670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449.scope - libcontainer container 670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449. Apr 28 00:19:35.863290 systemd[1]: Started cri-containerd-1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62.scope - libcontainer container 1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62. Apr 28 00:19:35.987063 systemd[1]: Started cri-containerd-5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb.scope - libcontainer container 5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb. 
Apr 28 00:19:36.110839 containerd[1473]: time="2026-04-28T00:19:36.104564004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\"" Apr 28 00:19:36.135332 containerd[1473]: time="2026-04-28T00:19:36.135154540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a646d41a511ed3aa4e8f9816f82de57d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\"" Apr 28 00:19:36.164673 kubelet[2147]: E0428 00:19:36.163733 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:36.171766 containerd[1473]: time="2026-04-28T00:19:36.171463488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\"" Apr 28 00:19:36.173334 kubelet[2147]: E0428 00:19:36.173283 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:36.175529 kubelet[2147]: E0428 00:19:36.175510 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:36.195288 containerd[1473]: time="2026-04-28T00:19:36.194980720Z" level=info msg="CreateContainer within sandbox \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 00:19:36.196013 containerd[1473]: time="2026-04-28T00:19:36.195008573Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 00:19:36.209332 containerd[1473]: time="2026-04-28T00:19:36.208803333Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 00:19:36.294524 containerd[1473]: time="2026-04-28T00:19:36.294144489Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42\"" Apr 28 00:19:36.295346 containerd[1473]: time="2026-04-28T00:19:36.295039412Z" level=info msg="CreateContainer within sandbox \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\"" Apr 28 00:19:36.301048 containerd[1473]: time="2026-04-28T00:19:36.300809404Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d\"" Apr 28 00:19:36.306119 containerd[1473]: 
time="2026-04-28T00:19:36.301142981Z" level=info msg="StartContainer for \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\"" Apr 28 00:19:36.306119 containerd[1473]: time="2026-04-28T00:19:36.301119018Z" level=info msg="StartContainer for \"08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42\"" Apr 28 00:19:36.361067 containerd[1473]: time="2026-04-28T00:19:36.360568061Z" level=info msg="StartContainer for \"926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d\"" Apr 28 00:19:36.411287 kubelet[2147]: E0428 00:19:36.408569 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:19:36.599645 systemd[1]: Started cri-containerd-08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42.scope - libcontainer container 08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42. Apr 28 00:19:36.633563 kubelet[2147]: E0428 00:19:36.633490 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="3.2s" Apr 28 00:19:36.637793 systemd[1]: Started cri-containerd-926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d.scope - libcontainer container 926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d. Apr 28 00:19:36.642519 systemd[1]: Started cri-containerd-c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17.scope - libcontainer container c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17. 
Apr 28 00:19:36.950871 kubelet[2147]: I0428 00:19:36.950725 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:19:36.953053 kubelet[2147]: E0428 00:19:36.952984 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Apr 28 00:19:37.014366 kubelet[2147]: E0428 00:19:37.007455 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:19:37.067785 containerd[1473]: time="2026-04-28T00:19:37.067561977Z" level=info msg="StartContainer for \"926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d\" returns successfully" Apr 28 00:19:37.074316 containerd[1473]: time="2026-04-28T00:19:37.074031697Z" level=info msg="StartContainer for \"08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42\" returns successfully" Apr 28 00:19:37.160408 containerd[1473]: time="2026-04-28T00:19:37.158655259Z" level=info msg="StartContainer for \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\" returns successfully" Apr 28 00:19:38.254092 kubelet[2147]: E0428 00:19:38.251538 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:38.254092 kubelet[2147]: E0428 00:19:38.251859 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:38.255427 kubelet[2147]: E0428 00:19:38.255095 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:38.255427 kubelet[2147]: E0428 00:19:38.255217 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:38.256512 kubelet[2147]: E0428 00:19:38.255808 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:38.256512 kubelet[2147]: E0428 00:19:38.256000 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:39.340230 kubelet[2147]: E0428 00:19:39.339605 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:39.340230 kubelet[2147]: E0428 00:19:39.339666 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:39.340230 kubelet[2147]: E0428 00:19:39.340357 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:39.341335 kubelet[2147]: E0428 00:19:39.340357 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:39.341335 kubelet[2147]: E0428 00:19:39.340701 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:39.341335 kubelet[2147]: E0428 00:19:39.341133 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:40.180032 kubelet[2147]: I0428 00:19:40.176179 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:19:40.596650 kubelet[2147]: E0428 00:19:40.595263 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:40.596650 kubelet[2147]: E0428 00:19:40.596025 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:40.596650 kubelet[2147]: E0428 00:19:40.595387 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:40.596650 kubelet[2147]: E0428 00:19:40.596410 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:40.596650 kubelet[2147]: E0428 00:19:40.596501 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:40.597856 kubelet[2147]: E0428 00:19:40.597536 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:43.243295 kubelet[2147]: E0428 00:19:43.238841 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:19:43.250531 kubelet[2147]: E0428 00:19:43.250044 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:43.803574 kubelet[2147]: E0428 00:19:43.790016 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:19:45.809522 kubelet[2147]: E0428 00:19:45.804974 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 28 00:19:46.085308 kubelet[2147]: I0428 00:19:46.081872 2147 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 00:19:46.086534 kubelet[2147]: E0428 00:19:46.085435 2147 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa5d4996cf8648 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:19:33.595260488 +0000 UTC m=+1.214082116,LastTimestamp:2026-04-28 00:19:33.595260488 +0000 UTC m=+1.214082116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:19:46.086534 kubelet[2147]: I0428 00:19:46.085327 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:19:46.607089 kubelet[2147]: I0428 00:19:46.606086 2147 apiserver.go:52] "Watching apiserver" Apr 28 00:19:46.702769 kubelet[2147]: E0428 00:19:46.697770 2147 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa5d49991b26c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:19:33.633771207 +0000 UTC m=+1.252592847,LastTimestamp:2026-04-28 00:19:33.633771207 +0000 UTC m=+1.252592847,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:19:46.767183 kubelet[2147]: E0428 00:19:46.764787 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 28 00:19:46.767183 kubelet[2147]: I0428 00:19:46.765000 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:46.786647 kubelet[2147]: E0428 00:19:46.784047 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:46.786647 kubelet[2147]: I0428 00:19:46.784077 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:19:46.820484 kubelet[2147]: E0428 00:19:46.819063 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 28 00:19:46.931201 kubelet[2147]: I0428 00:19:46.928605 2147 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:19:48.139991 kubelet[2147]: I0428 00:19:48.138878 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:19:48.543078 kubelet[2147]: E0428 00:19:48.532488 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:49.771641 kubelet[2147]: E0428 00:19:49.700064 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:50.400173 kubelet[2147]: I0428 00:19:50.396583 2147 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Apr 28 00:19:50.567970 kubelet[2147]: E0428 00:19:50.566172 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:50.793657 kubelet[2147]: E0428 00:19:50.791157 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:19:50.982260 kubelet[2147]: I0428 00:19:50.982008 2147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.981856507 podStartE2EDuration="981.856507ms" podCreationTimestamp="2026-04-28 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:19:50.980966296 +0000 UTC m=+18.599787932" watchObservedRunningTime="2026-04-28 00:19:50.981856507 +0000 UTC m=+18.600678141" Apr 28 00:19:50.982260 kubelet[2147]: I0428 00:19:50.982176 2147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.982169475 podStartE2EDuration="2.982169475s" podCreationTimestamp="2026-04-28 00:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:19:50.740884452 +0000 UTC m=+18.359706091" watchObservedRunningTime="2026-04-28 00:19:50.982169475 +0000 UTC m=+18.600991115" Apr 28 00:20:07.425623 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-5.scope)... Apr 28 00:20:07.425643 systemd[1]: Reloading... Apr 28 00:20:08.147392 zram_generator::config[2477]: No configuration found. Apr 28 00:20:09.140957 kubelet[2147]: E0428 00:20:09.134452 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:10.298554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:20:11.234683 systemd[1]: Reloading finished in 3808 ms. Apr 28 00:20:12.203531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:20:12.250590 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:20:12.251560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:20:12.252003 systemd[1]: kubelet.service: Consumed 20.846s CPU time, 133.5M memory peak, 0B memory swap peak. Apr 28 00:20:12.284351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:20:14.308186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:20:14.404634 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:20:15.983393 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:20:15.983393 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 00:20:15.998271 kubelet[2526]: I0428 00:20:15.983596 2526 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:20:16.195764 kubelet[2526]: I0428 00:20:16.189203 2526 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 00:20:16.206001 kubelet[2526]: I0428 00:20:16.196133 2526 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:20:16.206001 kubelet[2526]: I0428 00:20:16.196823 2526 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:20:16.206001 kubelet[2526]: I0428 00:20:16.196839 2526 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 00:20:16.225264 kubelet[2526]: I0428 00:20:16.221094 2526 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:20:16.254182 kubelet[2526]: I0428 00:20:16.249530 2526 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 00:20:16.378003 kubelet[2526]: I0428 00:20:16.374626 2526 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:20:16.585770 kubelet[2526]: E0428 00:20:16.583969 2526 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 00:20:16.585770 kubelet[2526]: I0428 00:20:16.584224 2526 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 00:20:16.825746 kubelet[2526]: I0428 00:20:16.824995 2526 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 00:20:16.829422 kubelet[2526]: I0428 00:20:16.828804 2526 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:20:16.836453 kubelet[2526]: I0428 00:20:16.829482 2526 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:20:16.836453 kubelet[2526]: I0428 00:20:16.833847 2526 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 00:20:16.836453 kubelet[2526]: I0428 00:20:16.834425 2526 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 00:20:16.836453 kubelet[2526]: I0428 00:20:16.834601 2526 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:20:16.839852 kubelet[2526]: I0428 00:20:16.838220 2526 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:20:16.839852 kubelet[2526]: I0428 00:20:16.839193 2526 kubelet.go:475] "Attempting to sync node with API server" Apr 28 00:20:16.839852 kubelet[2526]: I0428 00:20:16.839209 2526 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:20:16.839852 kubelet[2526]: I0428 00:20:16.839326 2526 kubelet.go:387] "Adding apiserver pod source" Apr 28 00:20:16.839852 kubelet[2526]: I0428 00:20:16.839338 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:20:16.948142 kubelet[2526]: I0428 00:20:16.947463 2526 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 00:20:16.949577 kubelet[2526]: I0428 00:20:16.948661 2526 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:20:16.949577 kubelet[2526]: I0428 00:20:16.948686 2526 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:20:17.042834 
kubelet[2526]: I0428 00:20:17.038100 2526 server.go:1262] "Started kubelet" Apr 28 00:20:17.042834 kubelet[2526]: I0428 00:20:17.040102 2526 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:20:17.196287 kubelet[2526]: I0428 00:20:17.189007 2526 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:20:17.359681 kubelet[2526]: I0428 00:20:17.347829 2526 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:20:17.375609 kubelet[2526]: I0428 00:20:17.373420 2526 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:20:17.390313 kubelet[2526]: I0428 00:20:17.388462 2526 server.go:310] "Adding debug handlers to kubelet server" Apr 28 00:20:17.582016 kubelet[2526]: I0428 00:20:17.555452 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:20:17.599340 kubelet[2526]: I0428 00:20:17.598142 2526 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:20:17.667526 kubelet[2526]: I0428 00:20:17.647170 2526 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 00:20:17.686496 kubelet[2526]: I0428 00:20:17.647443 2526 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:20:17.711484 kubelet[2526]: I0428 00:20:17.705799 2526 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:20:17.859727 kubelet[2526]: I0428 00:20:17.851132 2526 apiserver.go:52] "Watching apiserver" Apr 28 00:20:18.075794 kubelet[2526]: I0428 00:20:18.075253 2526 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:20:18.162486 kubelet[2526]: E0428 00:20:18.159308 2526 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:20:18.218173 kubelet[2526]: I0428 00:20:18.159217 2526 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:20:18.547094 kubelet[2526]: I0428 00:20:18.545379 2526 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:20:18.760282 kubelet[2526]: I0428 00:20:18.749346 2526 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 00:20:18.858170 kubelet[2526]: I0428 00:20:18.808746 2526 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 28 00:20:18.930537 kubelet[2526]: I0428 00:20:18.927237 2526 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:20:18.954202 kubelet[2526]: I0428 00:20:18.953561 2526 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:20:19.247546 kubelet[2526]: E0428 00:20:19.241820 2526 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:20:19.356529 kubelet[2526]: E0428 00:20:19.347239 2526 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:20:19.564059 kubelet[2526]: E0428 00:20:19.559331 2526 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:20:19.961355 kubelet[2526]: E0428 00:20:19.960857 2526 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:20:20.839415 kubelet[2526]: E0428 00:20:20.834331 2526 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:20:22.460864 kubelet[2526]: E0428 00:20:22.460112 2526 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:20:24.237841 kubelet[2526]: I0428 00:20:24.234269 2526 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:20:24.237841 kubelet[2526]: I0428 00:20:24.234517 2526 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:20:24.237841 kubelet[2526]: I0428 00:20:24.235074 2526 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:20:24.264211 kubelet[2526]: I0428 00:20:24.262409 2526 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 00:20:24.264211 kubelet[2526]: I0428 00:20:24.263621 2526 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 00:20:24.315058 kubelet[2526]: I0428 00:20:24.304506 2526 policy_none.go:49] "None policy: Start" Apr 28 00:20:24.315058 kubelet[2526]: I0428 00:20:24.308264 2526 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:20:24.315058 kubelet[2526]: I0428 00:20:24.308790 2526 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:20:24.610740 kubelet[2526]: I0428 00:20:24.609328 2526 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 28 00:20:24.610740 kubelet[2526]: I0428 00:20:24.609600 2526 policy_none.go:47] "Start" Apr 28 00:20:25.101793 kubelet[2526]: E0428 00:20:25.100830 2526 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:20:25.193605 kubelet[2526]: I0428 00:20:25.170881 2526 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:20:25.193605 kubelet[2526]: I0428 00:20:25.171044 2526 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:20:25.193605 kubelet[2526]: I0428 00:20:25.176876 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:20:25.627770 kubelet[2526]: E0428 00:20:25.592762 2526 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 00:20:26.052012 kubelet[2526]: I0428 00:20:26.043428 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a646d41a511ed3aa4e8f9816f82de57d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a646d41a511ed3aa4e8f9816f82de57d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:20:26.052012 kubelet[2526]: I0428 00:20:26.043855 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a646d41a511ed3aa4e8f9816f82de57d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a646d41a511ed3aa4e8f9816f82de57d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:20:26.052012 kubelet[2526]: I0428 00:20:26.044169 2526 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:20:26.052012 kubelet[2526]: I0428 00:20:26.044158 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a646d41a511ed3aa4e8f9816f82de57d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a646d41a511ed3aa4e8f9816f82de57d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:20:26.353354 kubelet[2526]: I0428 00:20:26.334784 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:20:26.621048 kubelet[2526]: I0428 00:20:26.511172 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:20:26.637801 kubelet[2526]: I0428 00:20:26.621113 2526 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:20:26.784052 kubelet[2526]: I0428 00:20:26.776347 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:20:26.784052 kubelet[2526]: I0428 00:20:26.782218 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:20:26.784052 kubelet[2526]: I0428 00:20:26.782435 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 28 00:20:27.161862 kubelet[2526]: I0428 00:20:27.152861 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:20:27.343430 kubelet[2526]: E0428 00:20:27.342816 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:27.857844 kubelet[2526]: I0428 00:20:27.857278 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:20:27.970882 kubelet[2526]: E0428 00:20:27.855629 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:28.567272 kubelet[2526]: E0428 00:20:28.558145 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:29.786792 kubelet[2526]: E0428 00:20:29.781771 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.578s" Apr 28 00:20:30.357305 kubelet[2526]: E0428 00:20:30.356828 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:30.515202 kubelet[2526]: E0428 00:20:30.514246 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:30.718992 kubelet[2526]: I0428 00:20:30.718281 2526 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 28 00:20:30.722134 kubelet[2526]: I0428 00:20:30.719276 2526 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 00:20:30.911596 kubelet[2526]: E0428 00:20:30.911105 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:30.933779 kubelet[2526]: I0428 00:20:30.914247 2526 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:20:30.999656 kubelet[2526]: E0428 00:20:30.997659 2526 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 00:20:31.021331 kubelet[2526]: E0428 00:20:31.018965 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:31.961750 kubelet[2526]: E0428 00:20:31.958150 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:32.006682 kubelet[2526]: E0428 00:20:31.958833 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 
28 00:20:32.066694 kubelet[2526]: I0428 00:20:32.063685 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.063608418 podStartE2EDuration="5.063608418s" podCreationTimestamp="2026-04-28 00:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:20:31.346051135 +0000 UTC m=+16.784028430" watchObservedRunningTime="2026-04-28 00:20:32.063608418 +0000 UTC m=+17.501585707" Apr 28 00:20:36.048818 kubelet[2526]: I0428 00:20:36.048079 2526 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 00:20:36.291204 containerd[1473]: time="2026-04-28T00:20:36.283591900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 00:20:36.338033 kubelet[2526]: I0428 00:20:36.335357 2526 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 00:20:37.911735 kubelet[2526]: E0428 00:20:37.911155 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:38.280426 kubelet[2526]: E0428 00:20:38.223179 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:38.777058 kubelet[2526]: I0428 00:20:38.774536 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ddeda12-a631-49b7-8f55-ed9be4d8f5d7-kube-proxy\") pod \"kube-proxy-d6m7c\" (UID: \"8ddeda12-a631-49b7-8f55-ed9be4d8f5d7\") " pod="kube-system/kube-proxy-d6m7c" Apr 28 00:20:38.936604 kubelet[2526]: I0428 00:20:38.927721 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ddeda12-a631-49b7-8f55-ed9be4d8f5d7-lib-modules\") pod \"kube-proxy-d6m7c\" (UID: \"8ddeda12-a631-49b7-8f55-ed9be4d8f5d7\") " pod="kube-system/kube-proxy-d6m7c" Apr 28 00:20:39.006331 systemd[1]: Created slice kubepods-besteffort-pod8ddeda12_a631_49b7_8f55_ed9be4d8f5d7.slice - libcontainer container kubepods-besteffort-pod8ddeda12_a631_49b7_8f55_ed9be4d8f5d7.slice. 
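Note: the kuberuntime_manager.go and kubelet_network.go entries above record the kubelet pushing the node's PodCIDR (192.168.0.0/24) down to containerd over CRI, after which containerd reports that it is still waiting for a CNI config file to appear. A minimal standalone sketch of that CRI call, assuming containerd's default socket at /run/containerd/containerd.sock (the kubelet does this internally; this is only an illustration):

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed endpoint: containerd's default CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Push the PodCIDR seen in the log to the runtime, as the kubelet does after the
	// "Updating runtime config through cri with podcidr" message.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		panic(err)
	}
}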
Apr 28 00:20:39.109329 kubelet[2526]: I0428 00:20:39.000775 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hf6h\" (UniqueName: \"kubernetes.io/projected/8ddeda12-a631-49b7-8f55-ed9be4d8f5d7-kube-api-access-8hf6h\") pod \"kube-proxy-d6m7c\" (UID: \"8ddeda12-a631-49b7-8f55-ed9be4d8f5d7\") " pod="kube-system/kube-proxy-d6m7c" Apr 28 00:20:39.116041 kubelet[2526]: I0428 00:20:39.113258 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ddeda12-a631-49b7-8f55-ed9be4d8f5d7-xtables-lock\") pod \"kube-proxy-d6m7c\" (UID: \"8ddeda12-a631-49b7-8f55-ed9be4d8f5d7\") " pod="kube-system/kube-proxy-d6m7c" Apr 28 00:20:39.140260 kubelet[2526]: E0428 00:20:39.139104 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:39.762507 kubelet[2526]: I0428 00:20:39.762166 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8c2716b-9c6e-4426-8150-9b8c1351c4b9-xtables-lock\") pod \"kube-flannel-ds-wjv2j\" (UID: \"b8c2716b-9c6e-4426-8150-9b8c1351c4b9\") " pod="kube-flannel/kube-flannel-ds-wjv2j" Apr 28 00:20:39.775537 kubelet[2526]: I0428 00:20:39.774104 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b8c2716b-9c6e-4426-8150-9b8c1351c4b9-cni-plugin\") pod \"kube-flannel-ds-wjv2j\" (UID: \"b8c2716b-9c6e-4426-8150-9b8c1351c4b9\") " pod="kube-flannel/kube-flannel-ds-wjv2j" Apr 28 00:20:39.789336 kubelet[2526]: I0428 00:20:39.789055 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b8c2716b-9c6e-4426-8150-9b8c1351c4b9-cni\") pod \"kube-flannel-ds-wjv2j\" (UID: \"b8c2716b-9c6e-4426-8150-9b8c1351c4b9\") " pod="kube-flannel/kube-flannel-ds-wjv2j" Apr 28 00:20:39.842806 kubelet[2526]: I0428 00:20:39.829292 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b8c2716b-9c6e-4426-8150-9b8c1351c4b9-flannel-cfg\") pod \"kube-flannel-ds-wjv2j\" (UID: \"b8c2716b-9c6e-4426-8150-9b8c1351c4b9\") " pod="kube-flannel/kube-flannel-ds-wjv2j" Apr 28 00:20:39.842806 kubelet[2526]: I0428 00:20:39.841184 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gvkk\" (UniqueName: \"kubernetes.io/projected/b8c2716b-9c6e-4426-8150-9b8c1351c4b9-kube-api-access-6gvkk\") pod \"kube-flannel-ds-wjv2j\" (UID: \"b8c2716b-9c6e-4426-8150-9b8c1351c4b9\") " pod="kube-flannel/kube-flannel-ds-wjv2j" Apr 28 00:20:39.848789 kubelet[2526]: I0428 00:20:39.846168 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b8c2716b-9c6e-4426-8150-9b8c1351c4b9-run\") pod \"kube-flannel-ds-wjv2j\" (UID: \"b8c2716b-9c6e-4426-8150-9b8c1351c4b9\") " pod="kube-flannel/kube-flannel-ds-wjv2j" Apr 28 00:20:40.054605 systemd[1]: Created slice kubepods-burstable-podb8c2716b_9c6e_4426_8150_9b8c1351c4b9.slice - libcontainer container kubepods-burstable-podb8c2716b_9c6e_4426_8150_9b8c1351c4b9.slice. 
Apr 28 00:20:40.299705 kubelet[2526]: E0428 00:20:40.299108 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:40.424651 sudo[1599]: pam_unix(sudo:session): session closed for user root Apr 28 00:20:40.472193 sshd[1596]: pam_unix(sshd:session): session closed for user core Apr 28 00:20:40.700211 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:54320.service: Deactivated successfully. Apr 28 00:20:40.742655 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 00:20:40.743146 systemd[1]: session-5.scope: Consumed 25.091s CPU time, 164.4M memory peak, 0B memory swap peak. Apr 28 00:20:40.783831 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Apr 28 00:20:40.798325 systemd-logind[1457]: Removed session 5. Apr 28 00:20:40.933764 kubelet[2526]: E0428 00:20:40.930867 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:40.948610 containerd[1473]: time="2026-04-28T00:20:40.939999720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d6m7c,Uid:8ddeda12-a631-49b7-8f55-ed9be4d8f5d7,Namespace:kube-system,Attempt:0,}" Apr 28 00:20:41.364169 containerd[1473]: time="2026-04-28T00:20:41.363874901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wjv2j,Uid:b8c2716b-9c6e-4426-8150-9b8c1351c4b9,Namespace:kube-flannel,Attempt:0,}" Apr 28 00:20:43.011490 containerd[1473]: time="2026-04-28T00:20:43.002598459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:20:43.085619 containerd[1473]: time="2026-04-28T00:20:43.065345768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:20:43.085619 containerd[1473]: time="2026-04-28T00:20:43.070752441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:20:43.131797 containerd[1473]: time="2026-04-28T00:20:43.119156717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:20:43.264104 containerd[1473]: time="2026-04-28T00:20:43.247551179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:20:43.264104 containerd[1473]: time="2026-04-28T00:20:43.247722540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:20:43.264104 containerd[1473]: time="2026-04-28T00:20:43.247734971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:20:43.264104 containerd[1473]: time="2026-04-28T00:20:43.248046141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:20:43.481455 systemd[1]: Started cri-containerd-dd23411d4810381369211aea798247d9ae3f74ada77bd0bdda4ac5d942a4a48a.scope - libcontainer container dd23411d4810381369211aea798247d9ae3f74ada77bd0bdda4ac5d942a4a48a. 
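Note: the recurring dns.go:154 "Nameserver limits exceeded" entries mean the host resolver configuration lists more than the three nameservers the kubelet will propagate, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A simplified sketch of that check, assuming the resolver file is /etc/resolv.conf (the real logic lives in the kubelet's dns.go):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	// The kubelet keeps at most three entries and logs the warning seen above.
	if len(servers) > 3 {
		fmt.Printf("Nameserver limits exceeded; applied line is: %s\n",
			strings.Join(servers[:3], " "))
	}
}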
Apr 28 00:20:43.737699 systemd[1]: Started cri-containerd-c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3.scope - libcontainer container c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3. Apr 28 00:20:45.533736 containerd[1473]: time="2026-04-28T00:20:45.532794244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wjv2j,Uid:b8c2716b-9c6e-4426-8150-9b8c1351c4b9,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\"" Apr 28 00:20:45.588689 containerd[1473]: time="2026-04-28T00:20:45.588201049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d6m7c,Uid:8ddeda12-a631-49b7-8f55-ed9be4d8f5d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd23411d4810381369211aea798247d9ae3f74ada77bd0bdda4ac5d942a4a48a\"" Apr 28 00:20:45.665690 kubelet[2526]: E0428 00:20:45.665344 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:45.677259 kubelet[2526]: E0428 00:20:45.666252 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:45.732111 containerd[1473]: time="2026-04-28T00:20:45.731293996Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 28 00:20:45.916980 containerd[1473]: time="2026-04-28T00:20:45.916555786Z" level=info msg="CreateContainer within sandbox \"dd23411d4810381369211aea798247d9ae3f74ada77bd0bdda4ac5d942a4a48a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 00:20:46.278838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520989409.mount: Deactivated successfully. Apr 28 00:20:46.352626 containerd[1473]: time="2026-04-28T00:20:46.341878914Z" level=info msg="CreateContainer within sandbox \"dd23411d4810381369211aea798247d9ae3f74ada77bd0bdda4ac5d942a4a48a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fbc9179fc35722401c59fe1e618d9e5b37627723ada72854f8c15bfeef56c6c6\"" Apr 28 00:20:46.388216 containerd[1473]: time="2026-04-28T00:20:46.386741736Z" level=info msg="StartContainer for \"fbc9179fc35722401c59fe1e618d9e5b37627723ada72854f8c15bfeef56c6c6\"" Apr 28 00:20:47.262958 systemd[1]: Started cri-containerd-fbc9179fc35722401c59fe1e618d9e5b37627723ada72854f8c15bfeef56c6c6.scope - libcontainer container fbc9179fc35722401c59fe1e618d9e5b37627723ada72854f8c15bfeef56c6c6. Apr 28 00:20:48.131695 containerd[1473]: time="2026-04-28T00:20:48.129390438Z" level=info msg="StartContainer for \"fbc9179fc35722401c59fe1e618d9e5b37627723ada72854f8c15bfeef56c6c6\" returns successfully" Apr 28 00:20:49.393066 kubelet[2526]: E0428 00:20:49.391084 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:50.445477 kubelet[2526]: E0428 00:20:50.444050 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:52.095074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540269249.mount: Deactivated successfully. 
Apr 28 00:20:52.706750 containerd[1473]: time="2026-04-28T00:20:52.705529026Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:20:52.771467 containerd[1473]: time="2026-04-28T00:20:52.753982561Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Apr 28 00:20:52.778086 containerd[1473]: time="2026-04-28T00:20:52.777543434Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:20:53.302345 containerd[1473]: time="2026-04-28T00:20:53.301790039Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:20:53.355970 containerd[1473]: time="2026-04-28T00:20:53.355320941Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 7.623629348s" Apr 28 00:20:53.355970 containerd[1473]: time="2026-04-28T00:20:53.355634055Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 28 00:20:54.257268 containerd[1473]: time="2026-04-28T00:20:54.251232434Z" level=info msg="CreateContainer within sandbox \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 28 00:20:54.955214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580077510.mount: Deactivated successfully. Apr 28 00:20:55.150173 containerd[1473]: time="2026-04-28T00:20:55.149804690Z" level=info msg="CreateContainer within sandbox \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369\"" Apr 28 00:20:55.264256 containerd[1473]: time="2026-04-28T00:20:55.207841620Z" level=info msg="StartContainer for \"565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369\"" Apr 28 00:20:55.970878 systemd[1]: Started cri-containerd-565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369.scope - libcontainer container 565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369. Apr 28 00:20:56.724634 systemd[1]: cri-containerd-565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369.scope: Deactivated successfully. 
Apr 28 00:20:56.888270 containerd[1473]: time="2026-04-28T00:20:56.887981453Z" level=info msg="StartContainer for \"565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369\" returns successfully" Apr 28 00:20:57.118008 kubelet[2526]: E0428 00:20:57.116603 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:57.539270 kubelet[2526]: I0428 00:20:57.534823 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d6m7c" podStartSLOduration=20.53277863 podStartE2EDuration="20.53277863s" podCreationTimestamp="2026-04-28 00:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:20:50.099814612 +0000 UTC m=+35.537791909" watchObservedRunningTime="2026-04-28 00:20:57.53277863 +0000 UTC m=+42.970755938" Apr 28 00:20:57.573629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369-rootfs.mount: Deactivated successfully. Apr 28 00:20:57.604336 containerd[1473]: time="2026-04-28T00:20:57.603834997Z" level=info msg="shim disconnected" id=565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369 namespace=k8s.io Apr 28 00:20:57.604336 containerd[1473]: time="2026-04-28T00:20:57.604175645Z" level=warning msg="cleaning up after shim disconnected" id=565d69f0dce89f63d6968ecff0a10b55d5f85271edc6c9ac21354b07559f9369 namespace=k8s.io Apr 28 00:20:57.604336 containerd[1473]: time="2026-04-28T00:20:57.604238831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:20:58.253595 kubelet[2526]: E0428 00:20:58.251340 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:20:58.257277 containerd[1473]: time="2026-04-28T00:20:58.253997876Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 28 00:21:16.334559 kubelet[2526]: E0428 00:21:16.331228 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s" Apr 28 00:21:16.558542 containerd[1473]: time="2026-04-28T00:21:16.554621424Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:21:16.675318 containerd[1473]: time="2026-04-28T00:21:16.605776441Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Apr 28 00:21:16.743797 containerd[1473]: time="2026-04-28T00:21:16.743643062Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:21:16.861180 containerd[1473]: time="2026-04-28T00:21:16.860565993Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:21:16.906350 containerd[1473]: time="2026-04-28T00:21:16.906185425Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest 
\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 18.652143122s" Apr 28 00:21:16.910960 containerd[1473]: time="2026-04-28T00:21:16.910766333Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 28 00:21:17.526626 containerd[1473]: time="2026-04-28T00:21:17.522618460Z" level=info msg="CreateContainer within sandbox \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 28 00:21:18.669792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591715664.mount: Deactivated successfully. Apr 28 00:21:18.694327 containerd[1473]: time="2026-04-28T00:21:18.683544641Z" level=info msg="CreateContainer within sandbox \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c\"" Apr 28 00:21:18.773454 containerd[1473]: time="2026-04-28T00:21:18.772821652Z" level=info msg="StartContainer for \"fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c\"" Apr 28 00:21:19.897493 systemd[1]: Started cri-containerd-fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c.scope - libcontainer container fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c. Apr 28 00:21:20.608110 systemd[1]: cri-containerd-fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c.scope: Deactivated successfully. Apr 28 00:21:20.649859 containerd[1473]: time="2026-04-28T00:21:20.648276778Z" level=info msg="StartContainer for \"fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c\" returns successfully" Apr 28 00:21:21.072172 kubelet[2526]: I0428 00:21:21.064330 2526 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 28 00:21:22.100693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c-rootfs.mount: Deactivated successfully. 
Apr 28 00:21:22.692819 containerd[1473]: time="2026-04-28T00:21:22.691124962Z" level=info msg="shim disconnected" id=fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c namespace=k8s.io Apr 28 00:21:22.793496 containerd[1473]: time="2026-04-28T00:21:22.697194600Z" level=warning msg="cleaning up after shim disconnected" id=fb6218db7235af8e35d561f6ca8674dd741521fca3bed2d0efea94062c9c795c namespace=k8s.io Apr 28 00:21:22.793496 containerd[1473]: time="2026-04-28T00:21:22.697704564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:21:24.663626 kubelet[2526]: E0428 00:21:24.467453 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.197s" Apr 28 00:21:25.802617 kubelet[2526]: E0428 00:21:25.795573 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:26.375737 containerd[1473]: time="2026-04-28T00:21:26.371468911Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:21:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 00:21:27.248330 kubelet[2526]: E0428 00:21:27.247153 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.985s" Apr 28 00:21:27.650128 kubelet[2526]: I0428 00:21:27.647801 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhnj\" (UniqueName: \"kubernetes.io/projected/2f94c136-2158-4e5f-b19a-05695c38ab7a-kube-api-access-2fhnj\") pod \"coredns-66bc5c9577-976lc\" (UID: \"2f94c136-2158-4e5f-b19a-05695c38ab7a\") " pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:21:27.651784 kubelet[2526]: I0428 00:21:27.651757 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f94c136-2158-4e5f-b19a-05695c38ab7a-config-volume\") pod \"coredns-66bc5c9577-976lc\" (UID: \"2f94c136-2158-4e5f-b19a-05695c38ab7a\") " pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:21:27.760957 kubelet[2526]: I0428 00:21:27.759476 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36-config-volume\") pod \"coredns-66bc5c9577-sn6rz\" (UID: \"69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36\") " pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:21:27.772114 kubelet[2526]: I0428 00:21:27.771173 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9d7p\" (UniqueName: \"kubernetes.io/projected/69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36-kube-api-access-v9d7p\") pod \"coredns-66bc5c9577-sn6rz\" (UID: \"69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36\") " pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:21:27.882233 systemd[1]: Created slice kubepods-burstable-pod2f94c136_2158_4e5f_b19a_05695c38ab7a.slice - libcontainer container kubepods-burstable-pod2f94c136_2158_4e5f_b19a_05695c38ab7a.slice. Apr 28 00:21:28.080958 systemd[1]: Created slice kubepods-burstable-pod69b6c5c4_1b0f_43c4_a6e5_e4ff6b274b36.slice - libcontainer container kubepods-burstable-pod69b6c5c4_1b0f_43c4_a6e5_e4ff6b274b36.slice. 
Apr 28 00:21:28.684406 kubelet[2526]: E0428 00:21:28.679586 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:28.991692 kubelet[2526]: E0428 00:21:28.898459 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:28.993867 containerd[1473]: time="2026-04-28T00:21:28.988157646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sn6rz,Uid:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,Namespace:kube-system,Attempt:0,}" Apr 28 00:21:29.035549 containerd[1473]: time="2026-04-28T00:21:29.034602194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-976lc,Uid:2f94c136-2158-4e5f-b19a-05695c38ab7a,Namespace:kube-system,Attempt:0,}" Apr 28 00:21:29.077621 kubelet[2526]: E0428 00:21:29.069759 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:30.452314 kubelet[2526]: E0428 00:21:30.451826 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.24s" Apr 28 00:21:30.587707 containerd[1473]: time="2026-04-28T00:21:30.587334622Z" level=info msg="CreateContainer within sandbox \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 28 00:21:31.359796 systemd[1]: run-netns-cni\x2d1b2c77fa\x2df328\x2dfc9f\x2de811\x2df1f1ca5d12bc.mount: Deactivated successfully. Apr 28 00:21:31.447826 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bf96bd052497692728e26531fae6017b47bbfd02940114a46475331cfd9a384-shm.mount: Deactivated successfully. Apr 28 00:21:31.970447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196590967.mount: Deactivated successfully. 
Apr 28 00:21:32.087835 containerd[1473]: time="2026-04-28T00:21:32.084628971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sn6rz,Uid:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6bf96bd052497692728e26531fae6017b47bbfd02940114a46475331cfd9a384\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 00:21:32.108991 kubelet[2526]: E0428 00:21:32.108057 2526 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf96bd052497692728e26531fae6017b47bbfd02940114a46475331cfd9a384\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 00:21:32.118026 kubelet[2526]: E0428 00:21:32.115234 2526 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf96bd052497692728e26531fae6017b47bbfd02940114a46475331cfd9a384\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:21:32.118026 kubelet[2526]: E0428 00:21:32.117273 2526 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf96bd052497692728e26531fae6017b47bbfd02940114a46475331cfd9a384\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:21:32.132356 containerd[1473]: time="2026-04-28T00:21:32.125245314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-976lc,Uid:2f94c136-2158-4e5f-b19a-05695c38ab7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ace627d25336c1653f2bf2e7dbd2e86bfc0af6e0787785acb52da4e2a113a4f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 00:21:32.132356 containerd[1473]: time="2026-04-28T00:21:32.132172180Z" level=info msg="CreateContainer within sandbox \"c050d5b413d2ad901f503e7c19667942bd5fb0954d5dcefbdab8fee24468d9f3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6feef6faa75a7380f3f0077b49d858f4297a35d529da0be0f5f351d0082e590a\"" Apr 28 00:21:32.132826 kubelet[2526]: E0428 00:21:32.119986 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sn6rz_kube-system(69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sn6rz_kube-system(69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bf96bd052497692728e26531fae6017b47bbfd02940114a46475331cfd9a384\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-sn6rz" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" Apr 28 00:21:32.136467 kubelet[2526]: E0428 00:21:32.132623 2526 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7ace627d25336c1653f2bf2e7dbd2e86bfc0af6e0787785acb52da4e2a113a4f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 00:21:32.136467 kubelet[2526]: E0428 00:21:32.133002 2526 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ace627d25336c1653f2bf2e7dbd2e86bfc0af6e0787785acb52da4e2a113a4f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:21:32.172355 kubelet[2526]: E0428 00:21:32.133019 2526 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ace627d25336c1653f2bf2e7dbd2e86bfc0af6e0787785acb52da4e2a113a4f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:21:32.172851 kubelet[2526]: E0428 00:21:32.172536 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-976lc_kube-system(2f94c136-2158-4e5f-b19a-05695c38ab7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-976lc_kube-system(2f94c136-2158-4e5f-b19a-05695c38ab7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ace627d25336c1653f2bf2e7dbd2e86bfc0af6e0787785acb52da4e2a113a4f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-976lc" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" Apr 28 00:21:32.199098 containerd[1473]: time="2026-04-28T00:21:32.196684097Z" level=info msg="StartContainer for \"6feef6faa75a7380f3f0077b49d858f4297a35d529da0be0f5f351d0082e590a\"" Apr 28 00:21:32.336863 systemd[1]: run-netns-cni\x2ddb9d7f1b\x2d0f66\x2d8fc1\x2d70a9\x2d339f85281068.mount: Deactivated successfully. Apr 28 00:21:32.347840 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ace627d25336c1653f2bf2e7dbd2e86bfc0af6e0787785acb52da4e2a113a4f-shm.mount: Deactivated successfully. Apr 28 00:21:32.681322 systemd[1]: Started cri-containerd-6feef6faa75a7380f3f0077b49d858f4297a35d529da0be0f5f351d0082e590a.scope - libcontainer container 6feef6faa75a7380f3f0077b49d858f4297a35d529da0be0f5f351d0082e590a. 
Apr 28 00:21:33.486576 containerd[1473]: time="2026-04-28T00:21:33.481728654Z" level=info msg="StartContainer for \"6feef6faa75a7380f3f0077b49d858f4297a35d529da0be0f5f351d0082e590a\" returns successfully" Apr 28 00:21:35.075781 kubelet[2526]: E0428 00:21:35.072557 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:35.636596 kubelet[2526]: I0428 00:21:35.633273 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wjv2j" podStartSLOduration=26.419024663 podStartE2EDuration="57.632205449s" podCreationTimestamp="2026-04-28 00:20:38 +0000 UTC" firstStartedPulling="2026-04-28 00:20:45.712056411 +0000 UTC m=+31.150033705" lastFinishedPulling="2026-04-28 00:21:16.925237202 +0000 UTC m=+62.363214491" observedRunningTime="2026-04-28 00:21:35.620164223 +0000 UTC m=+81.058141518" watchObservedRunningTime="2026-04-28 00:21:35.632205449 +0000 UTC m=+81.070182738" Apr 28 00:21:36.279802 kubelet[2526]: E0428 00:21:36.277287 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:38.603234 kubelet[2526]: E0428 00:21:38.595360 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.364s" Apr 28 00:21:38.651859 systemd-networkd[1387]: flannel.1: Link UP Apr 28 00:21:38.651884 systemd-networkd[1387]: flannel.1: Gained carrier Apr 28 00:21:40.277349 systemd-networkd[1387]: flannel.1: Gained IPv6LL Apr 28 00:21:44.258398 kubelet[2526]: E0428 00:21:44.257751 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:44.264935 containerd[1473]: time="2026-04-28T00:21:44.264847338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-976lc,Uid:2f94c136-2158-4e5f-b19a-05695c38ab7a,Namespace:kube-system,Attempt:0,}" Apr 28 00:21:45.169670 systemd-networkd[1387]: cni0: Link UP Apr 28 00:21:45.298478 systemd-networkd[1387]: vethcc44d29c: Link UP Apr 28 00:21:45.352643 kernel: cni0: port 1(vethcc44d29c) entered blocking state Apr 28 00:21:45.353030 kernel: cni0: port 1(vethcc44d29c) entered disabled state Apr 28 00:21:45.369350 kernel: vethcc44d29c: entered allmulticast mode Apr 28 00:21:45.376746 kernel: vethcc44d29c: entered promiscuous mode Apr 28 00:21:45.387049 kernel: cni0: port 1(vethcc44d29c) entered blocking state Apr 28 00:21:45.387567 kernel: cni0: port 1(vethcc44d29c) entered forwarding state Apr 28 00:21:45.387587 kernel: cni0: port 1(vethcc44d29c) entered disabled state Apr 28 00:21:45.653945 kernel: cni0: port 1(vethcc44d29c) entered blocking state Apr 28 00:21:45.654570 kernel: cni0: port 1(vethcc44d29c) entered forwarding state Apr 28 00:21:45.673491 systemd-networkd[1387]: vethcc44d29c: Gained carrier Apr 28 00:21:45.741303 systemd-networkd[1387]: cni0: Gained carrier Apr 28 00:21:46.371334 containerd[1473]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, 
GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012850), "name":"cbr0", "type":"bridge"} Apr 28 00:21:46.371334 containerd[1473]: delegateAdd: netconf sent to delegate plugin: Apr 28 00:21:46.676578 systemd-networkd[1387]: cni0: Gained IPv6LL Apr 28 00:21:46.934204 systemd-networkd[1387]: vethcc44d29c: Gained IPv6LL Apr 28 00:21:47.066333 containerd[1473]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-28T00:21:47.062994246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:21:47.066333 containerd[1473]: time="2026-04-28T00:21:47.063427180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:21:47.066333 containerd[1473]: time="2026-04-28T00:21:47.063437002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:21:47.076776 containerd[1473]: time="2026-04-28T00:21:47.063809361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:21:47.272500 kubelet[2526]: E0428 00:21:47.272374 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:47.447175 containerd[1473]: time="2026-04-28T00:21:47.441790473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sn6rz,Uid:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,Namespace:kube-system,Attempt:0,}" Apr 28 00:21:47.504185 systemd[1]: Started cri-containerd-7eb414de5ee5f174e319d27e71485a5c2e05591fea5725f9210c7bf63b25f5f9.scope - libcontainer container 7eb414de5ee5f174e319d27e71485a5c2e05591fea5725f9210c7bf63b25f5f9. 
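Note: the pod_startup_latency_tracker entry above for kube-flannel-ds-wjv2j illustrates how the two durations relate: podStartE2EDuration is the gap between podCreationTimestamp and the observed running time, and podStartSLOduration is that gap with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A small check of the quoted numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-04-28 00:20:38 +0000 UTC")
	running := parse("2026-04-28 00:21:35.632205449 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2026-04-28 00:20:45.712056411 +0000 UTC")
	pullEnd := parse("2026-04-28 00:21:16.925237202 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	// Prints 57.632205449s and ~26.419s, matching podStartE2EDuration and (within
	// rounding) podStartSLOduration from the log.
	fmt.Println(e2e, slo)
}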
Apr 28 00:21:48.536829 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 00:21:49.170473 systemd-networkd[1387]: veth03833185: Link UP Apr 28 00:21:49.179219 kernel: cni0: port 2(veth03833185) entered blocking state Apr 28 00:21:49.180585 kernel: cni0: port 2(veth03833185) entered disabled state Apr 28 00:21:49.189693 kernel: veth03833185: entered allmulticast mode Apr 28 00:21:49.191244 kernel: veth03833185: entered promiscuous mode Apr 28 00:21:49.357137 containerd[1473]: time="2026-04-28T00:21:49.351655461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-976lc,Uid:2f94c136-2158-4e5f-b19a-05695c38ab7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eb414de5ee5f174e319d27e71485a5c2e05591fea5725f9210c7bf63b25f5f9\"" Apr 28 00:21:49.368976 kernel: cni0: port 2(veth03833185) entered blocking state Apr 28 00:21:49.369769 kernel: cni0: port 2(veth03833185) entered forwarding state Apr 28 00:21:49.365119 systemd-networkd[1387]: veth03833185: Gained carrier Apr 28 00:21:49.394961 kubelet[2526]: E0428 00:21:49.394804 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:49.588630 containerd[1473]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a2880), "name":"cbr0", "type":"bridge"} Apr 28 00:21:49.588630 containerd[1473]: delegateAdd: netconf sent to delegate plugin: Apr 28 00:21:49.642788 containerd[1473]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-28T00:21:49.642717379Z" level=info msg="CreateContainer within sandbox \"7eb414de5ee5f174e319d27e71485a5c2e05591fea5725f9210c7bf63b25f5f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:21:50.873493 containerd[1473]: time="2026-04-28T00:21:50.851950718Z" level=info msg="CreateContainer within sandbox \"7eb414de5ee5f174e319d27e71485a5c2e05591fea5725f9210c7bf63b25f5f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95ba070727649ae7d38a11324b026e5b7a32381ddd62917874212b5414d158d6\"" Apr 28 00:21:51.440275 systemd-networkd[1387]: veth03833185: Gained IPv6LL Apr 28 00:21:51.765932 containerd[1473]: time="2026-04-28T00:21:51.658569237Z" level=info msg="StartContainer for \"95ba070727649ae7d38a11324b026e5b7a32381ddd62917874212b5414d158d6\"" Apr 28 00:21:51.806953 containerd[1473]: time="2026-04-28T00:21:51.766114010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:21:51.991172 containerd[1473]: time="2026-04-28T00:21:51.926262981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:21:52.086601 containerd[1473]: time="2026-04-28T00:21:52.075169943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:21:52.103964 containerd[1473]: time="2026-04-28T00:21:52.094553073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:21:52.261617 kubelet[2526]: E0428 00:21:52.261357 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.009s" Apr 28 00:21:52.683703 systemd[1]: run-containerd-runc-k8s.io-f98b0001eb9c4e9cd25c60c96e1fd4ae0dfffa3591b2421dfdf6fe46de69df3b-runc.PNZKIr.mount: Deactivated successfully. Apr 28 00:21:52.851479 systemd[1]: Started cri-containerd-f98b0001eb9c4e9cd25c60c96e1fd4ae0dfffa3591b2421dfdf6fe46de69df3b.scope - libcontainer container f98b0001eb9c4e9cd25c60c96e1fd4ae0dfffa3591b2421dfdf6fe46de69df3b. Apr 28 00:21:54.043712 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 00:21:54.405831 kubelet[2526]: E0428 00:21:54.405670 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.186s" Apr 28 00:21:55.444961 containerd[1473]: time="2026-04-28T00:21:55.444827608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sn6rz,Uid:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,Namespace:kube-system,Attempt:0,} returns sandbox id \"f98b0001eb9c4e9cd25c60c96e1fd4ae0dfffa3591b2421dfdf6fe46de69df3b\"" Apr 28 00:21:55.625638 kubelet[2526]: E0428 00:21:55.624141 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:21:55.655325 systemd[1]: Started cri-containerd-95ba070727649ae7d38a11324b026e5b7a32381ddd62917874212b5414d158d6.scope - libcontainer container 95ba070727649ae7d38a11324b026e5b7a32381ddd62917874212b5414d158d6. Apr 28 00:21:58.835660 containerd[1473]: time="2026-04-28T00:21:58.835374035Z" level=info msg="CreateContainer within sandbox \"f98b0001eb9c4e9cd25c60c96e1fd4ae0dfffa3591b2421dfdf6fe46de69df3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:21:59.002438 containerd[1473]: time="2026-04-28T00:21:58.913776092Z" level=info msg="StartContainer for \"95ba070727649ae7d38a11324b026e5b7a32381ddd62917874212b5414d158d6\" returns successfully" Apr 28 00:21:59.049786 kubelet[2526]: E0428 00:21:59.048592 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.837s" Apr 28 00:21:59.202827 kubelet[2526]: E0428 00:21:59.202683 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:00.316440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420726909.mount: Deactivated successfully. 
Apr 28 00:22:00.743444 containerd[1473]: time="2026-04-28T00:22:00.740744652Z" level=info msg="CreateContainer within sandbox \"f98b0001eb9c4e9cd25c60c96e1fd4ae0dfffa3591b2421dfdf6fe46de69df3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba020a93e80adf73791d86fcc904a0daa6985d2408fad704f4ca14a7e5b59253\"" Apr 28 00:22:00.870706 containerd[1473]: time="2026-04-28T00:22:00.870503391Z" level=info msg="StartContainer for \"ba020a93e80adf73791d86fcc904a0daa6985d2408fad704f4ca14a7e5b59253\"" Apr 28 00:22:02.935319 kubelet[2526]: E0428 00:22:02.933424 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:03.605822 systemd[1]: Started cri-containerd-ba020a93e80adf73791d86fcc904a0daa6985d2408fad704f4ca14a7e5b59253.scope - libcontainer container ba020a93e80adf73791d86fcc904a0daa6985d2408fad704f4ca14a7e5b59253. Apr 28 00:22:03.678407 kubelet[2526]: E0428 00:22:03.666726 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.412s" Apr 28 00:22:04.376804 kubelet[2526]: E0428 00:22:04.295740 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:05.856429 kubelet[2526]: I0428 00:22:05.854083 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-976lc" podStartSLOduration=87.853940248 podStartE2EDuration="1m27.853940248s" podCreationTimestamp="2026-04-28 00:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:22:05.835007187 +0000 UTC m=+111.272984485" watchObservedRunningTime="2026-04-28 00:22:05.853940248 +0000 UTC m=+111.291917545" Apr 28 00:22:05.996802 kubelet[2526]: E0428 00:22:05.995820 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:06.536527 containerd[1473]: time="2026-04-28T00:22:06.532367240Z" level=info msg="StartContainer for \"ba020a93e80adf73791d86fcc904a0daa6985d2408fad704f4ca14a7e5b59253\" returns successfully" Apr 28 00:22:06.760169 kubelet[2526]: E0428 00:22:06.747477 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:07.897684 kubelet[2526]: E0428 00:22:07.892062 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:08.163710 kubelet[2526]: E0428 00:22:08.159215 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:08.324610 kubelet[2526]: E0428 00:22:08.318659 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:09.256879 kubelet[2526]: E0428 00:22:09.251526 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:09.563420 kubelet[2526]: E0428 00:22:09.559839 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:10.467345 kubelet[2526]: E0428 00:22:10.461350 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:11.086443 kubelet[2526]: E0428 00:22:11.065646 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.566s" Apr 28 00:22:17.540517 kubelet[2526]: E0428 00:22:17.537958 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.332s" Apr 28 00:22:20.276240 kubelet[2526]: E0428 00:22:20.275730 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.731s" Apr 28 00:22:25.324333 kubelet[2526]: E0428 00:22:25.317016 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.037s" Apr 28 00:22:25.894287 kubelet[2526]: E0428 00:22:25.883842 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:22:26.757777 kubelet[2526]: I0428 00:22:26.753828 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sn6rz" podStartSLOduration=108.753293956 podStartE2EDuration="1m48.753293956s" podCreationTimestamp="2026-04-28 00:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:22:11.131668603 +0000 UTC m=+116.569645905" watchObservedRunningTime="2026-04-28 00:22:26.753293956 +0000 UTC m=+132.191271261" Apr 28 00:22:28.667515 kubelet[2526]: E0428 00:22:28.667208 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.454s" Apr 28 00:22:48.649412 kubelet[2526]: E0428 00:22:48.647606 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.349s" Apr 28 00:22:52.260482 kubelet[2526]: E0428 00:22:52.259838 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:23:07.219654 kubelet[2526]: E0428 00:23:07.216825 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:23:23.294130 kubelet[2526]: E0428 00:23:23.290144 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:23:25.244200 kubelet[2526]: E0428 00:23:25.242165 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:23:27.362313 kubelet[2526]: E0428 00:23:27.361318 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:23:30.262196 kubelet[2526]: E0428 00:23:30.261739 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:23:34.352665 kubelet[2526]: E0428 00:23:34.348574 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.117s" Apr 28 00:23:37.008524 kubelet[2526]: E0428 00:23:37.004361 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.631s" Apr 28 00:23:43.579239 kubelet[2526]: E0428 00:23:43.575711 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.063s" Apr 28 00:23:50.520329 systemd[1]: cri-containerd-08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42.scope: Deactivated successfully. Apr 28 00:23:50.521638 systemd[1]: cri-containerd-08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42.scope: Consumed 57.476s CPU time, 22.1M memory peak, 0B memory swap peak. Apr 28 00:23:55.783040 systemd[1]: cri-containerd-926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d.scope: Deactivated successfully. Apr 28 00:23:55.785058 systemd[1]: cri-containerd-926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d.scope: Consumed 40.062s CPU time, 18.8M memory peak, 0B memory swap peak. Apr 28 00:23:56.043782 kubelet[2526]: E0428 00:23:56.010862 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:23:59.343391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42-rootfs.mount: Deactivated successfully. 
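Note: the controller.go "Failed to update lease" entry above shows the kubelet's renewal of its Lease in kube-node-lease timing out (the 10s timeout is visible in the request URL) while the heavy control-plane containers above are being torn down. A hedged client-go sketch for inspecting that Lease afterwards; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for this host.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and name are taken from the URL in the failed request above.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("last renew:", lease.Spec.RenewTime) // stale while renewals were failing
}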
Apr 28 00:24:00.031254 containerd[1473]: time="2026-04-28T00:23:59.892010031Z" level=info msg="shim disconnected" id=08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42 namespace=k8s.io Apr 28 00:24:00.189169 containerd[1473]: time="2026-04-28T00:24:00.072555543Z" level=warning msg="cleaning up after shim disconnected" id=08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42 namespace=k8s.io Apr 28 00:24:00.246762 containerd[1473]: time="2026-04-28T00:24:00.223604565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:24:02.083280 containerd[1473]: time="2026-04-28T00:24:02.057171708Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42 Apr 28 00:24:02.466385 containerd[1473]: time="2026-04-28T00:24:02.454689460Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:24:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 1: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42/log.json: no such file or directory\\n\\nNAME:\\n runc - Open Container Initiative runtime\\n\\nrunc is a command line client for running applications packaged according to\\nthe Open Container Initiative (OCI) format and is a compliant implementation of the\\nOpen Container Initiative specification.\\n\\nrunc integrates well with existing process supervisors to provide a production\\ncontainer runtime environment for applications. It can be used with your\\nexisting process monitoring tools and the container will be spawned as a\\ndirect child of the process supervisor.\\n\\nContainers are configured using bundles. A bundle for a container is a directory\\nthat includes a specification file named \\\"config.json\\\" and a root filesystem.\\nThe root filesystem contains the contents of the container.\\n\\nTo start a new instance of a container:\\n\\n # runc run [ -b bundle ] \\n\\nWhere \\\"\\\" is your name for the instance of the container that you\\nare starting. The name you provide for the container instance must be unique on\\nyour host. Providing the bundle directory using \\\"-b\\\" is optional. 
The default\\nvalue for \\\"bundle\\\" is the current directory.\\n\\nUSAGE:\\n runc [global options] command [command options] [arguments...]\\n\\nVERSION:\\n 1.1.13\\ncommit: 58aa9203c123022138b22cf96540c284876a7910\\nspec: 1.0.2-dev\\ngo: go1.21.13\\nlibseccomp: 2.5.5\\n\\nCOMMANDS:\\n checkpoint checkpoint a running container\\n create create a container\\n delete delete any resources held by the container often used with detached container\\n events display container events such as OOM notifications, cpu, memory, and IO usage statistics\\n exec execute new process inside the container\\n kill kill sends the specified signal (default: SIGTERM) to the container's init process\\n list lists containers started by runc with the given root\\n pause pause suspends all processes inside the container\\n ps ps displays the processes running inside a container\\n restore restore a container from a previous checkpoint\\n resume resumes all processes that have been previously paused\\n run create and run a container\\n spec create a new specification file\\n start executes the user defined process in a created container\\n state output the state of a container\\n update update container resource constraints\\n features show the enabled features\\n help, h Shows a list of commands or help for one command\\n\\nGLOBAL OPTIONS:\\n --debug enable debug logging\\n --log value set the log file to write runc logs to (default is '/dev/stderr')\\n --log-format value set the log format ('text' (default), or 'json') (default: \\\"text\\\")\\n --root value root directory for storage of container state (this should be located in tmpfs) (default: \\\"/run/runc\\\")\\n --criu value path to the criu binary used for checkpoint and restore (default: \\\"criu\\\")\\n --systemd-cgroup enable systemd cgroup support, expects cgroupsPath to be of form \\\"slice:prefix:name\\\" for e.g. \\\"system.slice:runc:434234\\\"\\n --rootless value ignore cgroup permission errors ('true', 'false', or 'auto') (default: \\\"auto\\\")\\n --help, -h show help\\n --version, -v print the version\\n{\\\"level\\\":\\\"error\\\",\\\"msg\\\":\\\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42/log.json: no such file or directory\\\",\\\"time\\\":\\\"2026-04-28T00:24:02Z\\\"}\\n\" runtime=io.containerd.runc.v2\ntime=\"2026-04-28T00:24:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 00:24:03.672634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d-rootfs.mount: Deactivated successfully. 
Apr 28 00:24:03.947424 containerd[1473]: time="2026-04-28T00:24:03.936345465Z" level=info msg="shim disconnected" id=926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d namespace=k8s.io Apr 28 00:24:04.079117 containerd[1473]: time="2026-04-28T00:24:03.950053966Z" level=warning msg="cleaning up after shim disconnected" id=926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d namespace=k8s.io Apr 28 00:24:04.079117 containerd[1473]: time="2026-04-28T00:24:03.950784118Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:24:06.469727 containerd[1473]: time="2026-04-28T00:24:06.448973519Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d Apr 28 00:24:09.019733 containerd[1473]: time="2026-04-28T00:24:08.993402983Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d delete" error="signal: killed" namespace=k8s.io Apr 28 00:24:09.095804 containerd[1473]: time="2026-04-28T00:24:09.012537077Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d namespace=k8s.io Apr 28 00:24:16.474642 kubelet[2526]: E0428 00:24:16.471125 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="32.713s" Apr 28 00:24:24.485185 kubelet[2526]: E0428 00:24:24.461821 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.904s" Apr 28 00:24:25.573241 kubelet[2526]: E0428 00:24:25.570450 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:25.685356 kubelet[2526]: I0428 00:24:25.684653 2526 scope.go:117] "RemoveContainer" containerID="08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42" Apr 28 00:24:25.699592 kubelet[2526]: E0428 00:24:25.694394 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:25.797707 kubelet[2526]: E0428 00:24:25.792733 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:25.956864 kubelet[2526]: E0428 00:24:25.944617 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:26.366857 kubelet[2526]: E0428 00:24:26.347476 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:26.613171 kubelet[2526]: E0428 00:24:26.574716 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
00:24:26.699662 containerd[1473]: time="2026-04-28T00:24:26.695350122Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 28 00:24:26.916745 kubelet[2526]: E0428 00:24:26.882792 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:26.941326 kubelet[2526]: E0428 00:24:26.921616 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:26.941326 kubelet[2526]: I0428 00:24:26.921741 2526 scope.go:117] "RemoveContainer" containerID="926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d" Apr 28 00:24:26.941326 kubelet[2526]: E0428 00:24:26.921866 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:24:27.149128 kubelet[2526]: E0428 00:24:27.140706 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.882s" Apr 28 00:24:27.872435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888241869.mount: Deactivated successfully. Apr 28 00:24:28.494099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316708655.mount: Deactivated successfully. Apr 28 00:24:29.934813 containerd[1473]: time="2026-04-28T00:24:29.932620686Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c\"" Apr 28 00:24:33.747234 containerd[1473]: time="2026-04-28T00:24:33.746848554Z" level=info msg="StartContainer for \"e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c\"" Apr 28 00:24:36.646787 containerd[1473]: time="2026-04-28T00:24:36.633512872Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 28 00:24:41.202868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679886614.mount: Deactivated successfully. Apr 28 00:24:42.302180 containerd[1473]: time="2026-04-28T00:24:42.262504998Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7\"" Apr 28 00:24:43.109169 systemd[1]: run-containerd-runc-k8s.io-e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c-runc.s0zHV6.mount: Deactivated successfully. Apr 28 00:24:43.438828 systemd[1]: Started cri-containerd-e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c.scope - libcontainer container e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c. 
Apr 28 00:24:45.052486 containerd[1473]: time="2026-04-28T00:24:45.032470647Z" level=info msg="StartContainer for \"2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7\"" Apr 28 00:24:46.059457 containerd[1473]: time="2026-04-28T00:24:46.011880120Z" level=info msg="StartContainer for \"e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c\" returns successfully" Apr 28 00:24:54.744656 kubelet[2526]: E0428 00:24:54.739009 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.518s" Apr 28 00:24:56.800775 systemd[1]: Started cri-containerd-2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7.scope - libcontainer container 2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7. Apr 28 00:25:00.268295 kubelet[2526]: E0428 00:25:00.255797 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:01.680410 containerd[1473]: time="2026-04-28T00:25:01.668930937Z" level=info msg="StartContainer for \"2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7\" returns successfully" Apr 28 00:25:03.271587 kubelet[2526]: E0428 00:25:03.271237 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:05.181346 kubelet[2526]: E0428 00:25:05.170943 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:25:15.430661 kubelet[2526]: E0428 00:25:15.428982 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.664s" Apr 28 00:25:16.442730 kubelet[2526]: E0428 00:25:16.390854 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:25:16.560751 kubelet[2526]: E0428 00:25:16.445735 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:18.662689 kubelet[2526]: E0428 00:25:18.588844 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.606s" Apr 28 00:25:18.844937 kubelet[2526]: E0428 00:25:18.842056 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:21.107014 kubelet[2526]: E0428 00:25:21.105999 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:21.523227 kubelet[2526]: E0428 00:25:21.463742 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.603s" Apr 28 00:25:22.564461 kubelet[2526]: E0428 00:25:22.562481 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
00:25:22.697307 kubelet[2526]: E0428 00:25:22.693605 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.139s" Apr 28 00:25:22.754161 kubelet[2526]: E0428 00:25:22.749521 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:22.915102 kubelet[2526]: E0428 00:25:22.910702 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:22.926598 kubelet[2526]: E0428 00:25:22.920850 2526 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 28 00:25:24.642625 kubelet[2526]: E0428 00:25:24.641791 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:24.755283 kubelet[2526]: E0428 00:25:24.743528 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.465s" Apr 28 00:25:28.152765 kubelet[2526]: E0428 00:25:28.152632 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:28.209308 kubelet[2526]: E0428 00:25:28.181868 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:29.705838 kubelet[2526]: E0428 00:25:29.700296 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:30.345301 kubelet[2526]: E0428 00:25:30.344860 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:30.361124 kubelet[2526]: E0428 00:25:30.359014 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:31.602496 kubelet[2526]: E0428 00:25:31.598582 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:37.481164 kubelet[2526]: E0428 00:25:37.474153 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.252s" Apr 28 00:25:43.270316 kubelet[2526]: E0428 00:25:43.238542 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.744s" Apr 28 00:25:45.482471 kubelet[2526]: E0428 00:25:45.472538 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:25:49.328009 kubelet[2526]: E0428 00:25:49.277433 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="5.89s" Apr 28 00:25:53.357705 kubelet[2526]: E0428 00:25:53.355680 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.053s" Apr 28 00:25:54.884881 kubelet[2526]: E0428 00:25:54.884478 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.528s" Apr 28 00:25:56.835342 kubelet[2526]: E0428 00:25:56.833282 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.586s" Apr 28 00:26:02.089396 kubelet[2526]: E0428 00:26:02.042198 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.709s" Apr 28 00:26:04.048518 kubelet[2526]: E0428 00:26:04.045142 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:26:07.088871 kubelet[2526]: E0428 00:26:07.087744 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.739s" Apr 28 00:26:10.388859 kubelet[2526]: E0428 00:26:10.388234 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.16s" Apr 28 00:26:14.376275 kubelet[2526]: E0428 00:26:14.374779 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.083s" Apr 28 00:26:17.515468 kubelet[2526]: E0428 00:26:17.515286 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.207s" Apr 28 00:26:18.545352 kubelet[2526]: E0428 00:26:18.542284 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.007s" Apr 28 00:26:21.306244 kubelet[2526]: E0428 00:26:21.301560 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.001s" Apr 28 00:26:26.297475 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376). Apr 28 00:26:27.464744 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:26:27.662310 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:26:28.069878 systemd-logind[1457]: New session 6 of user core. Apr 28 00:26:28.144278 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 28 00:26:30.358289 kubelet[2526]: E0428 00:26:30.298700 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.013s" Apr 28 00:26:30.655204 kubelet[2526]: E0428 00:26:30.654908 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:26:32.840608 kubelet[2526]: E0428 00:26:32.838135 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.437s" Apr 28 00:26:40.799748 kubelet[2526]: E0428 00:26:40.798684 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.541s" Apr 28 00:26:41.687497 kubelet[2526]: E0428 00:26:41.683736 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:26:41.687497 kubelet[2526]: E0428 00:26:41.684070 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:26:41.706524 kubelet[2526]: E0428 00:26:41.704386 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:26:42.175413 kubelet[2526]: E0428 00:26:42.174222 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.115s" Apr 28 00:26:42.473364 sshd[4219]: pam_unix(sshd:session): session closed for user core Apr 28 00:26:42.666136 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:34376.service: Deactivated successfully. Apr 28 00:26:42.762222 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 00:26:42.770671 systemd[1]: session-6.scope: Consumed 3.082s CPU time. Apr 28 00:26:42.905091 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Apr 28 00:26:42.975790 systemd-logind[1457]: Removed session 6. Apr 28 00:26:48.216877 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:41914.service - OpenSSH per-connection server daemon (10.0.0.1:41914). Apr 28 00:26:50.571526 kubelet[2526]: E0428 00:26:50.547849 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.225s" Apr 28 00:26:51.046252 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 41914 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:26:51.084853 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:26:51.443451 systemd-logind[1457]: New session 7 of user core. Apr 28 00:26:51.629519 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 00:26:54.087038 sshd[4277]: pam_unix(sshd:session): session closed for user core Apr 28 00:26:54.141796 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:41914.service: Deactivated successfully. Apr 28 00:26:54.143606 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:41914.service: Consumed 1.232s CPU time. Apr 28 00:26:54.196332 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 00:26:54.198149 systemd[1]: session-7.scope: Consumed 1.592s CPU time. Apr 28 00:26:54.266138 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. 
Apr 28 00:26:54.296749 systemd-logind[1457]: Removed session 7. Apr 28 00:26:59.383750 systemd[1]: Started sshd@7-10.0.0.11:22-10.0.0.1:49686.service - OpenSSH per-connection server daemon (10.0.0.1:49686). Apr 28 00:27:00.783359 kubelet[2526]: E0428 00:27:00.769709 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.247s" Apr 28 00:27:00.874232 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 49686 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:27:00.952300 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:27:01.252363 kubelet[2526]: E0428 00:27:01.246883 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:27:01.438541 systemd-logind[1457]: New session 8 of user core. Apr 28 00:27:01.466743 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 00:27:06.796771 sshd[4329]: pam_unix(sshd:session): session closed for user core Apr 28 00:27:06.953033 systemd[1]: sshd@7-10.0.0.11:22-10.0.0.1:49686.service: Deactivated successfully. Apr 28 00:27:07.005097 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 00:27:07.041439 systemd[1]: session-8.scope: Consumed 3.645s CPU time. Apr 28 00:27:07.067834 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Apr 28 00:27:07.098759 systemd-logind[1457]: Removed session 8. Apr 28 00:27:07.560353 kubelet[2526]: E0428 00:27:07.505882 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:27:12.098328 systemd[1]: Started sshd@8-10.0.0.11:22-10.0.0.1:40024.service - OpenSSH per-connection server daemon (10.0.0.1:40024). Apr 28 00:27:13.840820 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 40024 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:27:14.063103 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:27:14.568374 systemd-logind[1457]: New session 9 of user core. Apr 28 00:27:14.701598 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 00:27:18.288381 kubelet[2526]: E0428 00:27:18.288072 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.056s" Apr 28 00:27:19.177542 sshd[4378]: pam_unix(sshd:session): session closed for user core Apr 28 00:27:19.516602 systemd[1]: sshd@8-10.0.0.11:22-10.0.0.1:40024.service: Deactivated successfully. Apr 28 00:27:19.568358 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 00:27:19.584016 systemd[1]: session-9.scope: Consumed 3.038s CPU time. Apr 28 00:27:19.601369 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Apr 28 00:27:19.652114 systemd-logind[1457]: Removed session 9. Apr 28 00:27:24.417713 systemd[1]: Started sshd@9-10.0.0.11:22-10.0.0.1:45634.service - OpenSSH per-connection server daemon (10.0.0.1:45634). Apr 28 00:27:25.227445 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 45634 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:27:25.253443 sshd[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:27:25.436351 systemd-logind[1457]: New session 10 of user core. 
Apr 28 00:27:25.510380 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 00:27:28.373066 kubelet[2526]: E0428 00:27:28.372154 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.16s" Apr 28 00:27:28.419355 kubelet[2526]: E0428 00:27:28.373125 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:27:29.870769 sshd[4438]: pam_unix(sshd:session): session closed for user core Apr 28 00:27:30.089595 systemd[1]: sshd@9-10.0.0.11:22-10.0.0.1:45634.service: Deactivated successfully. Apr 28 00:27:30.203151 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 00:27:30.203605 systemd[1]: session-10.scope: Consumed 2.771s CPU time. Apr 28 00:27:30.267339 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Apr 28 00:27:30.279614 systemd-logind[1457]: Removed session 10. Apr 28 00:27:35.262373 systemd[1]: Started sshd@10-10.0.0.11:22-10.0.0.1:40088.service - OpenSSH per-connection server daemon (10.0.0.1:40088). Apr 28 00:27:36.851376 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 40088 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:27:36.860055 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:27:37.030425 systemd-logind[1457]: New session 11 of user core. Apr 28 00:27:37.061696 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 00:27:37.863348 kubelet[2526]: E0428 00:27:37.861548 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:27:40.594353 sshd[4490]: pam_unix(sshd:session): session closed for user core Apr 28 00:27:40.763334 systemd[1]: sshd@10-10.0.0.11:22-10.0.0.1:40088.service: Deactivated successfully. Apr 28 00:27:40.902998 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 00:27:40.903598 systemd[1]: session-11.scope: Consumed 2.540s CPU time. Apr 28 00:27:40.934633 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Apr 28 00:27:41.034791 systemd-logind[1457]: Removed session 11. Apr 28 00:27:46.099778 systemd[1]: Started sshd@11-10.0.0.11:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566). Apr 28 00:27:47.382659 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:27:47.637813 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:27:48.240506 systemd-logind[1457]: New session 12 of user core. Apr 28 00:27:48.392201 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 28 00:27:48.446416 kubelet[2526]: E0428 00:27:48.394605 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.187s" Apr 28 00:27:51.343147 kubelet[2526]: E0428 00:27:51.304581 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.089s" Apr 28 00:27:54.853551 kubelet[2526]: E0428 00:27:54.852009 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.519s" Apr 28 00:27:55.804026 kubelet[2526]: E0428 00:27:55.787758 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:27:58.697780 systemd[1]: cri-containerd-e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c.scope: Deactivated successfully. Apr 28 00:27:58.767708 systemd[1]: cri-containerd-e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c.scope: Consumed 39.941s CPU time. Apr 28 00:28:06.177834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c-rootfs.mount: Deactivated successfully. Apr 28 00:28:06.758794 containerd[1473]: time="2026-04-28T00:28:06.591002896Z" level=info msg="shim disconnected" id=e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c namespace=k8s.io Apr 28 00:28:06.877922 containerd[1473]: time="2026-04-28T00:28:06.771371629Z" level=warning msg="cleaning up after shim disconnected" id=e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c namespace=k8s.io Apr 28 00:28:06.877922 containerd[1473]: time="2026-04-28T00:28:06.782790444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:28:08.312267 sshd[4535]: pam_unix(sshd:session): session closed for user core Apr 28 00:28:08.555200 systemd[1]: sshd@11-10.0.0.11:22-10.0.0.1:48566.service: Deactivated successfully. Apr 28 00:28:08.690561 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 00:28:08.691482 systemd[1]: session-12.scope: Consumed 12.236s CPU time. Apr 28 00:28:08.739960 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Apr 28 00:28:08.799637 systemd-logind[1457]: Removed session 12. Apr 28 00:28:09.373758 containerd[1473]: time="2026-04-28T00:28:09.357034214Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c Apr 28 00:28:10.204934 systemd[1]: cri-containerd-2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7.scope: Deactivated successfully. Apr 28 00:28:10.213809 systemd[1]: cri-containerd-2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7.scope: Consumed 40.615s CPU time. 
Apr 28 00:28:10.944426 containerd[1473]: time="2026-04-28T00:28:10.941413670Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:28:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c/log.json: no such file or directory\\n\\nNAME:\\n runc - Open Container Initiative runtime\\n\\nrunc is a command line client for running applications packaged according to\\nthe Open Container Initiative (OCI) format and is a compliant implementation of the\\nOpen Container Initiative specification.\\n\\nrunc integrates well with existing process supervisors to provide a production\\ncontainer runtime environment for applications. It can be used with your\\nexisting process monitoring tools and the container will be spawned as a\\ndirect child of the process supervisor.\\n\\nContainers are configured using bundles. A bundle for a container is a directory\\nthat includes a specification file named \\\"config.json\\\" and a root filesystem.\\nThe root filesystem contains the contents of the container.\\n\\nTo start a new instance of a container:\\n\\n # runc run [ -b bundle ] \\n\\nWhere \\\"\\\" is your name for the instance of the container that you\\nare starting. The name you provide for the container instance must be unique on\\nyour host. Providing the bundle directory using \\\"-b\\\" is optional. The default\\nvalue for \\\"bundle\\\" is the current directory.\\n\\nUSAGE:\\n runc [global options] command [command options] [arguments...]\\n\\nVERSION:\\n 1.1.13\\ncommit: 58aa9203c123022138b22cf96540c284876a7910\\nspec: 1.0.2-dev\\ngo: go1.21.13\\nlibseccomp: 2.5.5\\n\\nCOMMANDS:\\n checkpoint checkpoint a running container\\n create create a container\\n delete delete any resources held by the container often used with detached container\\n events display container events such as OOM notifications, cpu, memory, and IO usage statistics\\n exec execute new process inside the container\\n kill kill sends the specified signal (default: SIGTERM) to the container's init process\\n list lists containers started by runc with the given root\\n pause pause suspends all processes inside the container\\n ps ps displays the processes running inside a container\\n restore restore a container from a previous checkpoint\\n resume resumes all processes that have been previously paused\\n run create and run a container\\n spec create a new specification file\\n start executes the user defined process in a created container\\n state output the state of a container\\n update update container resource constraints\\n features show the enabled features\\n help, h Shows a list of commands or help for one command\\n\\nGLOBAL OPTIONS:\\n --debug enable debug logging\\n --log value set the log file to write runc logs to (default is '/dev/stderr')\\n --log-format value set the log format ('text' (default), or 'json') (default: \\\"text\\\")\\n --root value root directory for storage of container state (this should be located in tmpfs) (default: \\\"/run/runc\\\")\\n --criu value path to the criu binary used for checkpoint and restore (default: \\\"criu\\\")\\n --systemd-cgroup enable systemd cgroup support, expects cgroupsPath to be of form \\\"slice:prefix:name\\\" for e.g. 
\\\"system.slice:runc:434234\\\"\\n --rootless value ignore cgroup permission errors ('true', 'false', or 'auto') (default: \\\"auto\\\")\\n --help, -h show help\\n --version, -v print the version\\n{\\\"level\\\":\\\"error\\\",\\\"msg\\\":\\\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c/log.json: no such file or directory\\\",\\\"time\\\":\\\"2026-04-28T00:28:09Z\\\"}\\n\" runtime=io.containerd.runc.v2\ntime=\"2026-04-28T00:28:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 00:28:11.750920 kubelet[2526]: E0428 00:28:11.706463 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.484s" Apr 28 00:28:12.842478 kubelet[2526]: E0428 00:28:12.810588 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s" Apr 28 00:28:12.852321 kubelet[2526]: E0428 00:28:12.852085 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:12.853376 kubelet[2526]: E0428 00:28:12.852610 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:12.853538 kubelet[2526]: E0428 00:28:12.853516 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:12.988701 kubelet[2526]: E0428 00:28:12.980786 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:13.054014 kubelet[2526]: I0428 00:28:13.053942 2526 scope.go:117] "RemoveContainer" containerID="08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42" Apr 28 00:28:13.439295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7-rootfs.mount: Deactivated successfully. Apr 28 00:28:13.603646 containerd[1473]: time="2026-04-28T00:28:13.573628940Z" level=info msg="shim disconnected" id=2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7 namespace=k8s.io Apr 28 00:28:13.603646 containerd[1473]: time="2026-04-28T00:28:13.576317931Z" level=warning msg="cleaning up after shim disconnected" id=2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7 namespace=k8s.io Apr 28 00:28:13.603646 containerd[1473]: time="2026-04-28T00:28:13.576509968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:28:14.308458 containerd[1473]: time="2026-04-28T00:28:14.304518626Z" level=info msg="RemoveContainer for \"08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42\"" Apr 28 00:28:14.355203 systemd[1]: Started sshd@12-10.0.0.11:22-10.0.0.1:39526.service - OpenSSH per-connection server daemon (10.0.0.1:39526). 
Apr 28 00:28:14.594975 kubelet[2526]: E0428 00:28:14.584497 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.34s" Apr 28 00:28:14.621972 containerd[1473]: time="2026-04-28T00:28:14.621829414Z" level=info msg="RemoveContainer for \"08d94bf594f238ce10dd0aec926de9038c8845723ba5a23aeab465349ec74b42\" returns successfully" Apr 28 00:28:15.067526 kubelet[2526]: I0428 00:28:15.065506 2526 scope.go:117] "RemoveContainer" containerID="e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c" Apr 28 00:28:15.081154 kubelet[2526]: E0428 00:28:15.071708 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:15.867615 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 39526 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:28:16.148203 sshd[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:28:16.358820 kubelet[2526]: E0428 00:28:16.357089 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.136s" Apr 28 00:28:16.504948 containerd[1473]: time="2026-04-28T00:28:16.503698850Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 28 00:28:16.520447 systemd-logind[1457]: New session 13 of user core. Apr 28 00:28:16.538013 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 00:28:18.021662 containerd[1473]: time="2026-04-28T00:28:18.011807806Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\"" Apr 28 00:28:18.121815 containerd[1473]: time="2026-04-28T00:28:18.121273954Z" level=info msg="StartContainer for \"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\"" Apr 28 00:28:19.023545 kubelet[2526]: E0428 00:28:19.021326 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.702s" Apr 28 00:28:19.305431 kubelet[2526]: I0428 00:28:19.299620 2526 scope.go:117] "RemoveContainer" containerID="926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d" Apr 28 00:28:20.104465 kubelet[2526]: I0428 00:28:20.091255 2526 scope.go:117] "RemoveContainer" containerID="2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7" Apr 28 00:28:20.164581 kubelet[2526]: E0428 00:28:20.163695 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:20.203274 kubelet[2526]: E0428 00:28:20.203039 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 00:28:20.465592 kubelet[2526]: E0428 00:28:20.465178 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took 
too long" expected="1s" actual="1.191s" Apr 28 00:28:20.890776 containerd[1473]: time="2026-04-28T00:28:20.854726053Z" level=info msg="RemoveContainer for \"926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d\"" Apr 28 00:28:23.258562 containerd[1473]: time="2026-04-28T00:28:23.258188685Z" level=info msg="RemoveContainer for \"926a2ae2d2bbe7d37d5546329503f4fc06ded504114e66507d4e88685c8b284d\" returns successfully" Apr 28 00:28:27.959624 sshd[4638]: pam_unix(sshd:session): session closed for user core Apr 28 00:28:28.348606 systemd[1]: sshd@12-10.0.0.11:22-10.0.0.1:39526.service: Deactivated successfully. Apr 28 00:28:28.603278 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 00:28:28.610693 systemd[1]: session-13.scope: Consumed 5.120s CPU time. Apr 28 00:28:28.888108 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Apr 28 00:28:29.069828 kubelet[2526]: E0428 00:28:29.068574 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.802s" Apr 28 00:28:29.079385 systemd-logind[1457]: Removed session 13. Apr 28 00:28:31.572645 kubelet[2526]: I0428 00:28:31.397815 2526 scope.go:117] "RemoveContainer" containerID="e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c" Apr 28 00:28:33.531194 systemd[1]: Started sshd@13-10.0.0.11:22-10.0.0.1:35478.service - OpenSSH per-connection server daemon (10.0.0.1:35478). Apr 28 00:28:34.664015 systemd[1]: Started cri-containerd-827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868.scope - libcontainer container 827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868. Apr 28 00:28:36.872249 sshd[4725]: Accepted publickey for core from 10.0.0.1 port 35478 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:28:37.254673 sshd[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:28:37.828261 systemd-logind[1457]: New session 14 of user core. Apr 28 00:28:37.839246 containerd[1473]: time="2026-04-28T00:28:37.836756731Z" level=info msg="RemoveContainer for \"e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c\"" Apr 28 00:28:37.848193 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 28 00:28:38.866394 containerd[1473]: time="2026-04-28T00:28:38.866297734Z" level=info msg="RemoveContainer for \"e578517ca0c60a838bc513e10c078eea4937b964974df5e9ff74878c6d37749c\" returns successfully" Apr 28 00:28:40.059632 kubelet[2526]: E0428 00:28:40.055222 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.845s" Apr 28 00:28:40.841354 containerd[1473]: time="2026-04-28T00:28:40.836804013Z" level=info msg="StartContainer for \"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" returns successfully" Apr 28 00:28:41.245719 kubelet[2526]: I0428 00:28:41.204668 2526 scope.go:117] "RemoveContainer" containerID="2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7" Apr 28 00:28:41.529449 kubelet[2526]: E0428 00:28:41.499434 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:41.529449 kubelet[2526]: E0428 00:28:41.524680 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:41.871752 containerd[1473]: time="2026-04-28T00:28:41.866675255Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 28 00:28:41.933338 kubelet[2526]: E0428 00:28:41.930529 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:42.545406 containerd[1473]: time="2026-04-28T00:28:42.541404405Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\"" Apr 28 00:28:42.687821 containerd[1473]: time="2026-04-28T00:28:42.683811758Z" level=info msg="StartContainer for \"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\"" Apr 28 00:28:43.201925 sshd[4725]: pam_unix(sshd:session): session closed for user core Apr 28 00:28:43.473003 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:35478.service: Deactivated successfully. Apr 28 00:28:43.475839 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:35478.service: Consumed 1.302s CPU time. Apr 28 00:28:43.579165 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 00:28:43.579806 systemd[1]: session-14.scope: Consumed 3.483s CPU time. Apr 28 00:28:43.684558 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Apr 28 00:28:43.757811 systemd-logind[1457]: Removed session 14. Apr 28 00:28:44.386466 kubelet[2526]: E0428 00:28:44.380078 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.063s" Apr 28 00:28:44.944614 kubelet[2526]: E0428 00:28:44.940081 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:45.275233 systemd[1]: Started cri-containerd-547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0.scope - libcontainer container 547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0. 
Apr 28 00:28:47.067400 containerd[1473]: time="2026-04-28T00:28:47.065010952Z" level=info msg="StartContainer for \"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\" returns successfully" Apr 28 00:28:47.443033 kubelet[2526]: E0428 00:28:47.442326 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:47.596296 kubelet[2526]: E0428 00:28:47.594189 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:48.750323 systemd[1]: Started sshd@14-10.0.0.11:22-10.0.0.1:40962.service - OpenSSH per-connection server daemon (10.0.0.1:40962). Apr 28 00:28:49.176464 kubelet[2526]: E0428 00:28:49.175552 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:49.420944 kubelet[2526]: E0428 00:28:49.420559 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:50.503107 kubelet[2526]: E0428 00:28:50.461196 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.214s" Apr 28 00:28:51.760603 sshd[4829]: Accepted publickey for core from 10.0.0.1 port 40962 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:28:51.773348 sshd[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:28:52.936790 systemd-logind[1457]: New session 15 of user core. Apr 28 00:28:53.115602 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 28 00:28:54.760722 kubelet[2526]: E0428 00:28:54.754883 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:28:55.856773 kubelet[2526]: E0428 00:28:55.852655 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.123s" Apr 28 00:28:57.927570 kubelet[2526]: E0428 00:28:57.919774 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.955s" Apr 28 00:28:58.264209 kubelet[2526]: E0428 00:28:58.259139 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:00.374839 kubelet[2526]: E0428 00:29:00.374527 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.09s" Apr 28 00:29:02.886279 kubelet[2526]: E0428 00:29:02.866327 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.627s" Apr 28 00:29:06.394117 kubelet[2526]: E0428 00:29:06.393951 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.138s" Apr 28 00:29:10.472863 kubelet[2526]: E0428 00:29:10.441804 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.237s" Apr 28 00:29:13.109651 sshd[4829]: pam_unix(sshd:session): session closed for user core Apr 28 00:29:13.744586 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:40962.service: Deactivated successfully. Apr 28 00:29:13.757368 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:40962.service: Consumed 1.083s CPU time. Apr 28 00:29:13.861374 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 00:29:13.861652 systemd[1]: session-15.scope: Consumed 6.988s CPU time. Apr 28 00:29:14.010351 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Apr 28 00:29:14.056513 systemd-logind[1457]: Removed session 15. Apr 28 00:29:16.726573 kubelet[2526]: E0428 00:29:16.691734 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.985s" Apr 28 00:29:17.003682 kubelet[2526]: E0428 00:29:16.954750 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:18.702693 systemd[1]: Started sshd@15-10.0.0.11:22-10.0.0.1:47800.service - OpenSSH per-connection server daemon (10.0.0.1:47800). Apr 28 00:29:19.298022 sshd[4916]: Accepted publickey for core from 10.0.0.1 port 47800 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:29:19.350695 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:29:19.659140 kubelet[2526]: E0428 00:29:19.652375 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:19.752594 systemd-logind[1457]: New session 16 of user core. Apr 28 00:29:19.797307 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 28 00:29:21.260554 kubelet[2526]: E0428 00:29:21.259721 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:24.267505 kubelet[2526]: E0428 00:29:24.260456 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:25.215794 sshd[4916]: pam_unix(sshd:session): session closed for user core Apr 28 00:29:25.562247 systemd[1]: sshd@15-10.0.0.11:22-10.0.0.1:47800.service: Deactivated successfully. Apr 28 00:29:25.796828 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 00:29:25.797983 systemd[1]: session-16.scope: Consumed 3.810s CPU time. Apr 28 00:29:25.983022 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Apr 28 00:29:25.998595 systemd-logind[1457]: Removed session 16. Apr 28 00:29:30.313940 systemd[1]: Started sshd@16-10.0.0.11:22-10.0.0.1:33602.service - OpenSSH per-connection server daemon (10.0.0.1:33602). Apr 28 00:29:30.892936 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 33602 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:29:30.973029 sshd[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:29:31.344446 systemd-logind[1457]: New session 17 of user core. Apr 28 00:29:31.401823 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 00:29:36.246481 sshd[4962]: pam_unix(sshd:session): session closed for user core Apr 28 00:29:36.396728 systemd[1]: sshd@16-10.0.0.11:22-10.0.0.1:33602.service: Deactivated successfully. Apr 28 00:29:36.508266 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 00:29:36.509102 systemd[1]: session-17.scope: Consumed 3.231s CPU time. Apr 28 00:29:36.743859 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Apr 28 00:29:36.792865 systemd-logind[1457]: Removed session 17. Apr 28 00:29:41.809994 systemd[1]: Started sshd@17-10.0.0.11:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050). Apr 28 00:29:42.515689 kubelet[2526]: E0428 00:29:42.252328 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.047s" Apr 28 00:29:43.265876 kubelet[2526]: E0428 00:29:43.265210 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:43.962884 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:29:44.165358 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:29:44.478737 systemd-logind[1457]: New session 18 of user core. Apr 28 00:29:44.550259 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 00:29:48.681498 sshd[5018]: pam_unix(sshd:session): session closed for user core Apr 28 00:29:48.757124 systemd[1]: sshd@17-10.0.0.11:22-10.0.0.1:54050.service: Deactivated successfully. Apr 28 00:29:48.798173 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 00:29:48.800031 systemd[1]: session-18.scope: Consumed 2.471s CPU time. Apr 28 00:29:48.928672 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. 
Apr 28 00:29:48.949427 systemd-logind[1457]: Removed session 18. Apr 28 00:29:51.326649 kubelet[2526]: E0428 00:29:51.322254 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:53.922352 systemd[1]: Started sshd@18-10.0.0.11:22-10.0.0.1:57354.service - OpenSSH per-connection server daemon (10.0.0.1:57354). Apr 28 00:29:55.003111 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 57354 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:29:55.043802 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:29:55.362264 systemd-logind[1457]: New session 19 of user core. Apr 28 00:29:55.467546 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 00:29:59.310201 kubelet[2526]: E0428 00:29:59.308433 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:29:59.565089 sshd[5062]: pam_unix(sshd:session): session closed for user core Apr 28 00:29:59.722186 systemd[1]: sshd@18-10.0.0.11:22-10.0.0.1:57354.service: Deactivated successfully. Apr 28 00:29:59.759519 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 00:29:59.763816 systemd[1]: session-19.scope: Consumed 2.833s CPU time. Apr 28 00:29:59.796166 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Apr 28 00:29:59.905433 systemd-logind[1457]: Removed session 19. Apr 28 00:30:01.162501 update_engine[1464]: I20260428 00:30:01.153555 1464 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 00:30:01.245930 update_engine[1464]: I20260428 00:30:01.171202 1464 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 00:30:01.245930 update_engine[1464]: I20260428 00:30:01.200740 1464 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 00:30:01.261016 update_engine[1464]: I20260428 00:30:01.260820 1464 omaha_request_params.cc:62] Current group set to lts Apr 28 00:30:01.278882 update_engine[1464]: I20260428 00:30:01.274626 1464 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 00:30:01.295702 update_engine[1464]: I20260428 00:30:01.285000 1464 update_attempter.cc:643] Scheduling an action processor start. 
Apr 28 00:30:01.302936 update_engine[1464]: I20260428 00:30:01.298408 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:30:01.309926 update_engine[1464]: I20260428 00:30:01.303148 1464 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 00:30:01.309926 update_engine[1464]: I20260428 00:30:01.308739 1464 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:30:01.309926 update_engine[1464]: I20260428 00:30:01.308979 1464 omaha_request_action.cc:272] Request: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: Apr 28 00:30:01.309926 update_engine[1464]: I20260428 00:30:01.308987 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:01.345166 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 00:30:01.377122 update_engine[1464]: I20260428 00:30:01.375564 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:01.385312 update_engine[1464]: I20260428 00:30:01.383268 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:30:01.394309 update_engine[1464]: E20260428 00:30:01.393651 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:30:01.394309 update_engine[1464]: I20260428 00:30:01.394070 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 00:30:04.935992 systemd[1]: Started sshd@19-10.0.0.11:22-10.0.0.1:53294.service - OpenSSH per-connection server daemon (10.0.0.1:53294). Apr 28 00:30:05.788300 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 53294 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:30:05.912733 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:30:06.280235 systemd-logind[1457]: New session 20 of user core. Apr 28 00:30:06.330269 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 00:30:07.842923 sshd[5116]: pam_unix(sshd:session): session closed for user core Apr 28 00:30:07.871255 systemd[1]: sshd@19-10.0.0.11:22-10.0.0.1:53294.service: Deactivated successfully. Apr 28 00:30:07.979058 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 00:30:07.983230 systemd[1]: session-20.scope: Consumed 1.054s CPU time. Apr 28 00:30:08.011115 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Apr 28 00:30:08.057377 systemd-logind[1457]: Removed session 20. Apr 28 00:30:08.244500 kubelet[2526]: E0428 00:30:08.224330 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:30:12.104553 update_engine[1464]: I20260428 00:30:12.096264 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:12.149602 update_engine[1464]: I20260428 00:30:12.149539 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:12.152994 update_engine[1464]: I20260428 00:30:12.152813 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:30:12.204210 update_engine[1464]: E20260428 00:30:12.193827 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:30:12.256842 update_engine[1464]: I20260428 00:30:12.208397 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 00:30:13.329146 systemd[1]: Started sshd@20-10.0.0.11:22-10.0.0.1:36320.service - OpenSSH per-connection server daemon (10.0.0.1:36320). Apr 28 00:30:14.515747 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 36320 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:30:14.558558 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:30:14.698045 systemd-logind[1457]: New session 21 of user core. Apr 28 00:30:14.745390 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 00:30:16.564572 sshd[5164]: pam_unix(sshd:session): session closed for user core Apr 28 00:30:16.595658 systemd[1]: sshd@20-10.0.0.11:22-10.0.0.1:36320.service: Deactivated successfully. Apr 28 00:30:16.612333 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 00:30:16.612821 systemd[1]: session-21.scope: Consumed 1.511s CPU time. Apr 28 00:30:16.631950 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Apr 28 00:30:16.675203 systemd[1]: Started sshd@21-10.0.0.11:22-10.0.0.1:36322.service - OpenSSH per-connection server daemon (10.0.0.1:36322). Apr 28 00:30:16.683705 systemd-logind[1457]: Removed session 21. Apr 28 00:30:16.853571 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 36322 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:30:16.863043 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:30:16.990424 systemd-logind[1457]: New session 22 of user core. Apr 28 00:30:17.010523 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 28 00:30:17.615464 sshd[5192]: pam_unix(sshd:session): session closed for user core Apr 28 00:30:17.645256 systemd[1]: sshd@21-10.0.0.11:22-10.0.0.1:36322.service: Deactivated successfully. Apr 28 00:30:17.651030 systemd[1]: session-22.scope: Deactivated successfully. Apr 28 00:30:17.652012 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Apr 28 00:30:17.671746 systemd[1]: Started sshd@22-10.0.0.11:22-10.0.0.1:36326.service - OpenSSH per-connection server daemon (10.0.0.1:36326). Apr 28 00:30:17.681747 systemd-logind[1457]: Removed session 22. Apr 28 00:30:17.812544 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 36326 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:30:17.847995 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:30:17.858420 systemd-logind[1457]: New session 23 of user core. Apr 28 00:30:17.959420 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 28 00:30:19.803768 sshd[5204]: pam_unix(sshd:session): session closed for user core Apr 28 00:30:19.866320 systemd[1]: sshd@22-10.0.0.11:22-10.0.0.1:36326.service: Deactivated successfully. Apr 28 00:30:20.020549 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 00:30:20.026694 systemd[1]: session-23.scope: Consumed 1.309s CPU time. Apr 28 00:30:20.054536 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Apr 28 00:30:20.056327 systemd-logind[1457]: Removed session 23. 
Apr 28 00:30:22.093716 update_engine[1464]: I20260428 00:30:22.093122 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:22.097797 update_engine[1464]: I20260428 00:30:22.096400 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:22.097797 update_engine[1464]: I20260428 00:30:22.097751 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:30:22.109539 update_engine[1464]: E20260428 00:30:22.108055 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:30:22.112721 update_engine[1464]: I20260428 00:30:22.112349 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 00:30:24.448840 kubelet[2526]: E0428 00:30:24.446574 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.241s" Apr 28 00:30:26.070841 systemd[1]: Started sshd@23-10.0.0.11:22-10.0.0.1:55116.service - OpenSSH per-connection server daemon (10.0.0.1:55116). Apr 28 00:30:26.295279 kubelet[2526]: E0428 00:30:26.294676 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.047s" Apr 28 00:30:32.149293 update_engine[1464]: I20260428 00:30:32.109776 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:32.301401 update_engine[1464]: I20260428 00:30:32.205128 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:32.360547 update_engine[1464]: I20260428 00:30:32.320876 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:30:32.485090 update_engine[1464]: E20260428 00:30:32.459339 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:30:32.485090 update_engine[1464]: I20260428 00:30:32.470539 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:30:32.485090 update_engine[1464]: I20260428 00:30:32.481571 1464 omaha_request_action.cc:617] Omaha request response: Apr 28 00:30:32.852338 update_engine[1464]: E20260428 00:30:32.507385 1464 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.532135 1464 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.543247 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.557763 1464 update_attempter.cc:306] Processing Done. Apr 28 00:30:32.852338 update_engine[1464]: E20260428 00:30:32.601500 1464 update_attempter.cc:619] Update failed. Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.663353 1464 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.667380 1464 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.682849 1464 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.761765 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.795646 1464 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:30:32.852338 update_engine[1464]: I20260428 00:30:32.837690 1464 omaha_request_action.cc:272] Request: Apr 28 00:30:32.852338 update_engine[1464]: Apr 28 00:30:32.852338 update_engine[1464]: Apr 28 00:30:32.852338 update_engine[1464]: Apr 28 00:30:32.852338 update_engine[1464]: Apr 28 00:30:32.852338 update_engine[1464]: Apr 28 00:30:32.852338 update_engine[1464]: Apr 28 00:30:32.962060 update_engine[1464]: I20260428 00:30:32.846820 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:32.968445 update_engine[1464]: I20260428 00:30:32.967288 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:32.984814 update_engine[1464]: I20260428 00:30:32.982765 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:30:33.011443 update_engine[1464]: E20260428 00:30:33.009181 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.069401 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.069871 1464 omaha_request_action.cc:617] Omaha request response: Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.071208 1464 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.071673 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.071684 1464 update_attempter.cc:306] Processing Done. Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.071759 1464 update_attempter.cc:310] Error event sent. Apr 28 00:30:33.011443 update_engine[1464]: I20260428 00:30:33.072016 1464 update_check_scheduler.cc:74] Next update check in 44m36s Apr 28 00:30:34.083651 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 55116 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:30:34.470324 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 00:30:34.470324 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 00:30:34.567494 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:30:36.169129 systemd-logind[1457]: New session 24 of user core. Apr 28 00:30:36.696279 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 28 00:30:40.355613 systemd[1]: cri-containerd-827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868.scope: Deactivated successfully. Apr 28 00:30:40.393256 systemd[1]: cri-containerd-827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868.scope: Consumed 36.216s CPU time. 
Apr 28 00:30:44.294374 kubelet[2526]: E0428 00:30:44.291687 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:30:53.489336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868-rootfs.mount: Deactivated successfully. Apr 28 00:30:54.086822 containerd[1473]: time="2026-04-28T00:30:54.066108182Z" level=error msg="failed to handle container TaskExit event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:30:54.507672 containerd[1473]: time="2026-04-28T00:30:54.471490439Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 28 00:30:55.386464 systemd[1]: cri-containerd-547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0.scope: Deactivated successfully. Apr 28 00:30:55.419834 systemd[1]: cri-containerd-547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0.scope: Consumed 27.726s CPU time. Apr 28 00:30:55.782552 containerd[1473]: time="2026-04-28T00:30:55.750675554Z" level=info msg="TaskExit event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178}" Apr 28 00:31:05.505483 containerd[1473]: time="2026-04-28T00:31:05.504979629Z" level=error msg="Failed to handle backOff event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178} for 827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:31:07.165502 containerd[1473]: time="2026-04-28T00:31:07.102529383Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 28 00:31:07.683235 containerd[1473]: time="2026-04-28T00:31:07.679444698Z" level=error msg="failed to handle container TaskExit event container_id:\"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\" id:\"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\" pid:4802 exit_status:1 exited_at:{seconds:1777336257 nanos:3227843}" error="failed to stop container: context deadline exceeded: unknown" Apr 28 00:31:08.029255 containerd[1473]: time="2026-04-28T00:31:07.972792364Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 28 00:31:08.406607 containerd[1473]: time="2026-04-28T00:31:08.403284139Z" level=info msg="TaskExit event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178}" Apr 28 00:31:10.471599 kubelet[2526]: E0428 00:31:09.344706 2526 controller.go:195] "Failed to update lease" err="Put 
\"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:31:12.149470 kubelet[2526]: E0428 00:31:12.149391 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="44.666s" Apr 28 00:31:12.797653 kubelet[2526]: E0428 00:31:12.796509 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:18.833190 containerd[1473]: time="2026-04-28T00:31:18.818860608Z" level=error msg="Failed to handle backOff event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178} for 827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:31:18.834273 containerd[1473]: time="2026-04-28T00:31:18.833466203Z" level=info msg="TaskExit event container_id:\"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\" id:\"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\" pid:4802 exit_status:1 exited_at:{seconds:1777336257 nanos:3227843}" Apr 28 00:31:18.834606 containerd[1473]: time="2026-04-28T00:31:18.832830114Z" level=error msg="ttrpc: received message on inactive stream" stream=67 Apr 28 00:31:18.863292 sshd[5245]: pam_unix(sshd:session): session closed for user core Apr 28 00:31:19.655795 systemd[1]: sshd@23-10.0.0.11:22-10.0.0.1:55116.service: Deactivated successfully. Apr 28 00:31:19.675391 systemd[1]: sshd@23-10.0.0.11:22-10.0.0.1:55116.service: Consumed 2.261s CPU time. Apr 28 00:31:20.007327 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 00:31:20.034543 systemd[1]: session-24.scope: Consumed 21.606s CPU time. Apr 28 00:31:20.159843 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Apr 28 00:31:20.318292 systemd-logind[1457]: Removed session 24. Apr 28 00:31:22.850848 kubelet[2526]: E0428 00:31:22.826455 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.397s" Apr 28 00:31:24.928026 systemd[1]: Started sshd@24-10.0.0.11:22-10.0.0.1:43910.service - OpenSSH per-connection server daemon (10.0.0.1:43910). Apr 28 00:31:26.899705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0-rootfs.mount: Deactivated successfully. 
Apr 28 00:31:27.499584 containerd[1473]: time="2026-04-28T00:31:27.495253535Z" level=info msg="shim disconnected" id=547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0 namespace=k8s.io Apr 28 00:31:27.862644 containerd[1473]: time="2026-04-28T00:31:27.754861893Z" level=warning msg="cleaning up after shim disconnected" id=547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0 namespace=k8s.io Apr 28 00:31:27.888060 containerd[1473]: time="2026-04-28T00:31:27.867292921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:31:29.267390 containerd[1473]: time="2026-04-28T00:31:29.261821503Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0 Apr 28 00:31:30.086563 containerd[1473]: time="2026-04-28T00:31:30.077079081Z" level=info msg="TaskExit event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178}" Apr 28 00:31:33.342343 containerd[1473]: time="2026-04-28T00:31:33.336249777Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0 delete" error="signal: killed" namespace=k8s.io Apr 28 00:31:33.342343 containerd[1473]: time="2026-04-28T00:31:33.336971872Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0 namespace=k8s.io Apr 28 00:31:34.230930 kubelet[2526]: E0428 00:31:34.227354 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:31:36.573439 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 43910 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:31:37.243930 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:31:38.585637 systemd-logind[1457]: New session 25 of user core. Apr 28 00:31:38.944178 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 28 00:31:40.609281 containerd[1473]: time="2026-04-28T00:31:40.605586526Z" level=error msg="Failed to handle backOff event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178} for 827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:31:40.904152 containerd[1473]: time="2026-04-28T00:31:40.885877386Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 28 00:31:46.505817 kubelet[2526]: E0428 00:31:46.498978 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:31:47.956019 kubelet[2526]: E0428 00:31:47.955768 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.025s" Apr 28 00:31:49.594326 containerd[1473]: time="2026-04-28T00:31:49.474722428Z" level=info msg="TaskExit event container_id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" id:\"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" pid:4733 exit_status:1 exited_at:{seconds:1777336240 nanos:664921178}" Apr 28 00:31:51.788835 kubelet[2526]: E0428 00:31:51.788687 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:52.607315 kubelet[2526]: E0428 00:31:52.599545 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:52.638687 kubelet[2526]: E0428 00:31:52.596712 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:54.573715 kubelet[2526]: E0428 00:31:54.563204 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:55.025539 kubelet[2526]: E0428 00:31:55.014392 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:56.320467 kubelet[2526]: E0428 00:31:56.319370 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:56.659511 kubelet[2526]: E0428 00:31:56.651820 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.836s" Apr 28 00:31:57.277642 kubelet[2526]: E0428 00:31:57.256163 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:31:58.835264 containerd[1473]: time="2026-04-28T00:31:58.818534775Z" level=info msg="shim 
disconnected" id=827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868 namespace=k8s.io Apr 28 00:31:58.901509 containerd[1473]: time="2026-04-28T00:31:58.855410488Z" level=warning msg="cleaning up after shim disconnected" id=827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868 namespace=k8s.io Apr 28 00:31:58.995636 containerd[1473]: time="2026-04-28T00:31:58.906912702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:31:59.702740 containerd[1473]: time="2026-04-28T00:31:59.700883290Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868 Apr 28 00:32:00.646739 kubelet[2526]: E0428 00:32:00.646421 2526 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-28T00:31:12Z\\\",\\\"message\\\":\\\"containers with unready status: [coredns]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-28T00:31:12Z\\\",\\\"message\\\":\\\"containers with unready status: [coredns]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"},\\\"containerID\\\":\\\"containerd://ba020a93e80adf73791d86fcc904a0daa6985d2408fad704f4ca14a7e5b59253\\\",\\\"image\\\":\\\"registry.k8s.io/coredns/coredns:v1.12.1\\\",\\\"imageID\\\":\\\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"coredns\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-28T00:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9d7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"kube-system\"/\"coredns-66bc5c9577-sn6rz\": Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:32:01.146676 sshd[5380]: pam_unix(sshd:session): session closed for user core Apr 28 00:32:01.195064 kubelet[2526]: E0428 00:32:01.195024 2526 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 28 00:32:01.444195 systemd[1]: sshd@24-10.0.0.11:22-10.0.0.1:43910.service: Deactivated successfully. Apr 28 00:32:01.446150 systemd[1]: sshd@24-10.0.0.11:22-10.0.0.1:43910.service: Consumed 3.253s CPU time. 
Apr 28 00:32:01.706711 systemd[1]: session-25.scope: Deactivated successfully. Apr 28 00:32:01.763374 systemd[1]: session-25.scope: Consumed 12.505s CPU time. Apr 28 00:32:01.866686 containerd[1473]: time="2026-04-28T00:32:01.811146760Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868 delete" error="exit status 1" namespace=k8s.io Apr 28 00:32:01.988471 containerd[1473]: time="2026-04-28T00:32:01.865866239Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868 namespace=k8s.io Apr 28 00:32:02.003987 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit. Apr 28 00:32:02.181681 systemd-logind[1457]: Removed session 25. Apr 28 00:32:02.636757 kubelet[2526]: E0428 00:32:02.636612 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.873s" Apr 28 00:32:03.136851 kubelet[2526]: E0428 00:32:03.136318 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:03.146865 kubelet[2526]: E0428 00:32:03.144809 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:03.146865 kubelet[2526]: I0428 00:32:03.136468 2526 scope.go:117] "RemoveContainer" containerID="2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7" Apr 28 00:32:03.155016 kubelet[2526]: I0428 00:32:03.149327 2526 scope.go:117] "RemoveContainer" containerID="827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868" Apr 28 00:32:03.155016 kubelet[2526]: E0428 00:32:03.149821 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:03.326318 kubelet[2526]: E0428 00:32:03.309292 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:03.326318 kubelet[2526]: I0428 00:32:03.309451 2526 scope.go:117] "RemoveContainer" containerID="547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0" Apr 28 00:32:03.326318 kubelet[2526]: E0428 00:32:03.309801 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:03.376091 containerd[1473]: time="2026-04-28T00:32:03.364880900Z" level=info msg="RemoveContainer for \"2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7\"" Apr 28 00:32:03.657824 containerd[1473]: time="2026-04-28T00:32:03.611767298Z" level=info msg="RemoveContainer for \"2e8500a91f82d4d0bef9aa6ebd493fb03fb3933a1726f03ed3d23808e2d491b7\" returns successfully" Apr 28 00:32:03.719581 containerd[1473]: time="2026-04-28T00:32:03.703400446Z" level=info msg="CreateContainer within sandbox 
\"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}" Apr 28 00:32:03.775343 containerd[1473]: time="2026-04-28T00:32:03.771366106Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Apr 28 00:32:05.231478 containerd[1473]: time="2026-04-28T00:32:05.231197272Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d\"" Apr 28 00:32:05.255260 containerd[1473]: time="2026-04-28T00:32:05.244601784Z" level=info msg="StartContainer for \"b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d\"" Apr 28 00:32:05.312632 containerd[1473]: time="2026-04-28T00:32:05.312306085Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\"" Apr 28 00:32:05.363457 containerd[1473]: time="2026-04-28T00:32:05.361144817Z" level=info msg="StartContainer for \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\"" Apr 28 00:32:05.843724 kubelet[2526]: E0428 00:32:05.843501 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:07.453656 systemd[1]: Started sshd@25-10.0.0.11:22-10.0.0.1:56988.service - OpenSSH per-connection server daemon (10.0.0.1:56988). Apr 28 00:32:08.100135 systemd[1]: Started cri-containerd-b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d.scope - libcontainer container b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d. Apr 28 00:32:08.411538 systemd[1]: run-containerd-runc-k8s.io-30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0-runc.OB2agz.mount: Deactivated successfully. Apr 28 00:32:08.484274 systemd[1]: Started cri-containerd-30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0.scope - libcontainer container 30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0. Apr 28 00:32:09.066371 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 56988 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:32:09.247824 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:32:09.426769 systemd-logind[1457]: New session 26 of user core. Apr 28 00:32:09.467373 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 28 00:32:10.642368 containerd[1473]: time="2026-04-28T00:32:10.637698352Z" level=info msg="StartContainer for \"b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d\" returns successfully" Apr 28 00:32:10.849005 kubelet[2526]: I0428 00:32:10.848739 2526 scope.go:117] "RemoveContainer" containerID="547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0" Apr 28 00:32:11.248252 containerd[1473]: time="2026-04-28T00:32:11.247455480Z" level=info msg="RemoveContainer for \"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\"" Apr 28 00:32:11.420102 containerd[1473]: time="2026-04-28T00:32:11.414731595Z" level=info msg="RemoveContainer for \"547088bae0d68030bc53a090237b436587d1c69e8058210f1dd2d1a84e2701b0\" returns successfully" Apr 28 00:32:11.526103 containerd[1473]: time="2026-04-28T00:32:11.525884499Z" level=info msg="StartContainer for \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\" returns successfully" Apr 28 00:32:11.529779 kubelet[2526]: E0428 00:32:11.529723 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:13.146352 kubelet[2526]: E0428 00:32:13.146206 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:13.155541 sshd[5522]: pam_unix(sshd:session): session closed for user core Apr 28 00:32:13.406836 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit. Apr 28 00:32:13.517107 systemd[1]: sshd@25-10.0.0.11:22-10.0.0.1:56988.service: Deactivated successfully. Apr 28 00:32:13.563627 systemd[1]: session-26.scope: Deactivated successfully. Apr 28 00:32:13.567592 systemd[1]: session-26.scope: Consumed 2.226s CPU time. Apr 28 00:32:13.606182 systemd-logind[1457]: Removed session 26. Apr 28 00:32:14.424425 kubelet[2526]: E0428 00:32:14.424216 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:16.807801 kubelet[2526]: E0428 00:32:16.805557 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.079s" Apr 28 00:32:17.491847 kubelet[2526]: E0428 00:32:17.491555 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:17.885283 kubelet[2526]: E0428 00:32:17.881157 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:18.255599 systemd[1]: Started sshd@26-10.0.0.11:22-10.0.0.1:33830.service - OpenSSH per-connection server daemon (10.0.0.1:33830). Apr 28 00:32:18.664287 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 33830 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:32:18.685307 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:32:18.788295 systemd-logind[1457]: New session 27 of user core. Apr 28 00:32:18.805540 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 28 00:32:20.561155 sshd[5638]: pam_unix(sshd:session): session closed for user core Apr 28 00:32:20.566309 systemd[1]: sshd@26-10.0.0.11:22-10.0.0.1:33830.service: Deactivated successfully. Apr 28 00:32:20.589290 systemd[1]: session-27.scope: Deactivated successfully. Apr 28 00:32:20.589951 systemd[1]: session-27.scope: Consumed 1.404s CPU time. Apr 28 00:32:20.595270 systemd-logind[1457]: Session 27 logged out. Waiting for processes to exit. Apr 28 00:32:20.601536 systemd-logind[1457]: Removed session 27. Apr 28 00:32:26.193810 systemd[1]: Started sshd@27-10.0.0.11:22-10.0.0.1:43104.service - OpenSSH per-connection server daemon (10.0.0.1:43104). Apr 28 00:32:27.249142 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 43104 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:32:27.289522 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:32:27.671304 systemd-logind[1457]: New session 28 of user core. Apr 28 00:32:27.748582 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 28 00:32:28.144625 kubelet[2526]: E0428 00:32:28.143405 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:29.068544 kubelet[2526]: E0428 00:32:29.063507 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:29.852421 kubelet[2526]: E0428 00:32:29.846181 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:32.164406 sshd[5683]: pam_unix(sshd:session): session closed for user core Apr 28 00:32:32.346227 systemd[1]: sshd@27-10.0.0.11:22-10.0.0.1:43104.service: Deactivated successfully. Apr 28 00:32:32.410798 systemd[1]: session-28.scope: Deactivated successfully. Apr 28 00:32:32.447374 systemd[1]: session-28.scope: Consumed 3.063s CPU time. Apr 28 00:32:32.475506 systemd-logind[1457]: Session 28 logged out. Waiting for processes to exit. Apr 28 00:32:32.505729 systemd-logind[1457]: Removed session 28. Apr 28 00:32:37.745711 systemd[1]: Started sshd@28-10.0.0.11:22-10.0.0.1:36462.service - OpenSSH per-connection server daemon (10.0.0.1:36462). Apr 28 00:32:39.398941 kubelet[2526]: E0428 00:32:39.394100 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.188s" Apr 28 00:32:39.998625 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 36462 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:32:40.154144 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:32:40.463507 systemd-logind[1457]: New session 29 of user core. Apr 28 00:32:40.474292 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 28 00:32:42.379791 kubelet[2526]: E0428 00:32:42.376389 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.126s" Apr 28 00:32:46.717702 sshd[5730]: pam_unix(sshd:session): session closed for user core Apr 28 00:32:46.928348 systemd[1]: sshd@28-10.0.0.11:22-10.0.0.1:36462.service: Deactivated successfully. Apr 28 00:32:47.065442 systemd[1]: session-29.scope: Deactivated successfully. 
Apr 28 00:32:47.091216 systemd[1]: session-29.scope: Consumed 3.893s CPU time. Apr 28 00:32:47.180884 systemd-logind[1457]: Session 29 logged out. Waiting for processes to exit. Apr 28 00:32:47.207487 systemd-logind[1457]: Removed session 29. Apr 28 00:32:53.057305 systemd[1]: Started sshd@29-10.0.0.11:22-10.0.0.1:41556.service - OpenSSH per-connection server daemon (10.0.0.1:41556). Apr 28 00:32:55.072594 kubelet[2526]: E0428 00:32:55.051375 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.777s" Apr 28 00:32:57.567004 kubelet[2526]: E0428 00:32:57.474182 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.422s" Apr 28 00:32:58.537679 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 41556 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:32:58.658354 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:32:59.351245 systemd-logind[1457]: New session 30 of user core. Apr 28 00:32:59.479778 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 28 00:33:01.982869 systemd[1]: cri-containerd-b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d.scope: Deactivated successfully. Apr 28 00:33:02.032158 systemd[1]: cri-containerd-b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d.scope: Consumed 16.244s CPU time. Apr 28 00:33:02.726818 kubelet[2526]: E0428 00:33:02.725796 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.859s" Apr 28 00:33:04.533182 kubelet[2526]: E0428 00:33:04.529645 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.793s" Apr 28 00:33:04.848343 systemd[1]: cri-containerd-30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0.scope: Deactivated successfully. Apr 28 00:33:04.848840 systemd[1]: cri-containerd-30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0.scope: Consumed 11.569s CPU time. Apr 28 00:33:04.978233 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 28 00:33:07.377734 kubelet[2526]: E0428 00:33:07.376718 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.845s" Apr 28 00:33:07.705936 sshd[5784]: pam_unix(sshd:session): session closed for user core Apr 28 00:33:07.960137 systemd[1]: sshd@29-10.0.0.11:22-10.0.0.1:41556.service: Deactivated successfully. Apr 28 00:33:07.985351 systemd[1]: sshd@29-10.0.0.11:22-10.0.0.1:41556.service: Consumed 1.806s CPU time. Apr 28 00:33:08.123075 systemd[1]: session-30.scope: Deactivated successfully. Apr 28 00:33:08.133543 systemd[1]: session-30.scope: Consumed 4.989s CPU time. Apr 28 00:33:08.185425 systemd-logind[1457]: Session 30 logged out. Waiting for processes to exit. Apr 28 00:33:08.340795 systemd-logind[1457]: Removed session 30. Apr 28 00:33:08.602863 systemd-tmpfiles[5824]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 00:33:08.706808 systemd-tmpfiles[5824]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Apr 28 00:33:08.733415 kubelet[2526]: E0428 00:33:08.728716 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:08.763147 systemd-tmpfiles[5824]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 00:33:08.791151 systemd-tmpfiles[5824]: ACLs are not supported, ignoring. Apr 28 00:33:08.839858 systemd-tmpfiles[5824]: ACLs are not supported, ignoring. Apr 28 00:33:09.031727 systemd-tmpfiles[5824]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:33:09.031779 systemd-tmpfiles[5824]: Skipping /boot Apr 28 00:33:09.132936 kubelet[2526]: E0428 00:33:09.106545 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:09.347656 kubelet[2526]: E0428 00:33:09.345551 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.822s" Apr 28 00:33:09.360786 kubelet[2526]: E0428 00:33:09.359119 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:09.505090 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 28 00:33:09.533639 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 28 00:33:09.557468 systemd[1]: systemd-tmpfiles-clean.service: Consumed 1.451s CPU time. Apr 28 00:33:11.394374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0-rootfs.mount: Deactivated successfully. Apr 28 00:33:11.433100 kubelet[2526]: E0428 00:33:11.431265 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.082s" Apr 28 00:33:11.433401 containerd[1473]: time="2026-04-28T00:33:11.431406684Z" level=info msg="shim disconnected" id=30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0 namespace=k8s.io Apr 28 00:33:11.433401 containerd[1473]: time="2026-04-28T00:33:11.431621672Z" level=warning msg="cleaning up after shim disconnected" id=30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0 namespace=k8s.io Apr 28 00:33:11.433401 containerd[1473]: time="2026-04-28T00:33:11.431632817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:33:11.521660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d-rootfs.mount: Deactivated successfully. 
Apr 28 00:33:11.701472 containerd[1473]: time="2026-04-28T00:33:11.696405126Z" level=info msg="shim disconnected" id=b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d namespace=k8s.io Apr 28 00:33:11.707572 containerd[1473]: time="2026-04-28T00:33:11.707375706Z" level=warning msg="cleaning up after shim disconnected" id=b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d namespace=k8s.io Apr 28 00:33:11.715405 containerd[1473]: time="2026-04-28T00:33:11.710839318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:33:12.550603 containerd[1473]: time="2026-04-28T00:33:12.542568469Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d Apr 28 00:33:12.770836 systemd[1]: Started sshd@30-10.0.0.11:22-10.0.0.1:36682.service - OpenSSH per-connection server daemon (10.0.0.1:36682). Apr 28 00:33:13.076538 containerd[1473]: time="2026-04-28T00:33:12.948509180Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d delete" error="exit status 1" namespace=k8s.io Apr 28 00:33:13.076538 containerd[1473]: time="2026-04-28T00:33:13.053516272Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d namespace=k8s.io Apr 28 00:33:13.276400 containerd[1473]: time="2026-04-28T00:33:13.178442436Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:33:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 00:33:13.379320 kubelet[2526]: E0428 00:33:13.375804 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:14.252057 kubelet[2526]: I0428 00:33:14.251792 2526 scope.go:117] "RemoveContainer" containerID="827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868" Apr 28 00:33:14.264830 kubelet[2526]: I0428 00:33:14.264136 2526 scope.go:117] "RemoveContainer" containerID="b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d" Apr 28 00:33:14.264830 kubelet[2526]: E0428 00:33:14.264317 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:14.264830 kubelet[2526]: E0428 00:33:14.264640 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 28 00:33:14.405681 containerd[1473]: time="2026-04-28T00:33:14.405323338Z" level=info msg="RemoveContainer for \"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\"" Apr 28 
00:33:14.568289 containerd[1473]: time="2026-04-28T00:33:14.560816567Z" level=info msg="RemoveContainer for \"827d879477da84dac32005469cdf034d3a7183ae2787008c4ad54f98ca50a868\" returns successfully" Apr 28 00:33:14.771186 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 36682 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:33:15.069349 sshd[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:33:15.734984 systemd-logind[1457]: New session 31 of user core. Apr 28 00:33:15.965331 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 28 00:33:19.364782 kubelet[2526]: E0428 00:33:19.364217 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.974s" Apr 28 00:33:19.475738 kubelet[2526]: I0428 00:33:19.453775 2526 scope.go:117] "RemoveContainer" containerID="30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0" Apr 28 00:33:19.500693 kubelet[2526]: E0428 00:33:19.481476 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:19.500693 kubelet[2526]: I0428 00:33:19.500542 2526 scope.go:117] "RemoveContainer" containerID="b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d" Apr 28 00:33:19.563539 kubelet[2526]: E0428 00:33:19.501443 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 00:33:19.563539 kubelet[2526]: E0428 00:33:19.501544 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:19.563539 kubelet[2526]: E0428 00:33:19.548108 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 28 00:33:22.324228 kubelet[2526]: E0428 00:33:22.321640 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.013s" Apr 28 00:33:25.059824 kubelet[2526]: E0428 00:33:24.992818 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.671s" Apr 28 00:33:25.108180 kubelet[2526]: E0428 00:33:25.107131 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:26.470111 kubelet[2526]: E0428 00:33:26.466188 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:28.281365 sshd[5888]: pam_unix(sshd:session): session closed for user core Apr 28 00:33:28.776007 systemd[1]: sshd@30-10.0.0.11:22-10.0.0.1:36682.service: Deactivated successfully. 
Apr 28 00:33:28.800815 systemd[1]: sshd@30-10.0.0.11:22-10.0.0.1:36682.service: Consumed 1.034s CPU time. Apr 28 00:33:28.973216 systemd[1]: session-31.scope: Deactivated successfully. Apr 28 00:33:28.984304 systemd[1]: session-31.scope: Consumed 8.315s CPU time. Apr 28 00:33:29.106972 systemd-logind[1457]: Session 31 logged out. Waiting for processes to exit. Apr 28 00:33:29.335805 systemd-logind[1457]: Removed session 31. Apr 28 00:33:30.355640 kubelet[2526]: E0428 00:33:30.343593 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.111s" Apr 28 00:33:31.356625 kubelet[2526]: I0428 00:33:31.354525 2526 scope.go:117] "RemoveContainer" containerID="30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0" Apr 28 00:33:31.380500 kubelet[2526]: E0428 00:33:31.357407 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:31.508109 kubelet[2526]: E0428 00:33:31.496539 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 00:33:34.313635 systemd[1]: Started sshd@31-10.0.0.11:22-10.0.0.1:53430.service - OpenSSH per-connection server daemon (10.0.0.1:53430). Apr 28 00:33:35.005444 kubelet[2526]: E0428 00:33:35.002296 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.764s" Apr 28 00:33:35.005444 kubelet[2526]: I0428 00:33:35.003587 2526 scope.go:117] "RemoveContainer" containerID="b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d" Apr 28 00:33:35.005444 kubelet[2526]: E0428 00:33:35.003788 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:35.160056 kubelet[2526]: E0428 00:33:35.007779 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 28 00:33:38.099752 kubelet[2526]: E0428 00:33:38.098763 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.87s" Apr 28 00:33:39.812832 kubelet[2526]: E0428 00:33:39.811943 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.614s" Apr 28 00:33:40.092752 sshd[5972]: Accepted publickey for core from 10.0.0.1 port 53430 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:33:40.439942 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:33:41.191353 kubelet[2526]: E0428 00:33:41.187826 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.356s" Apr 28 00:33:41.511587 systemd-logind[1457]: New session 32 of user core. 
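The pod_workers.go errors above report CrashLoopBackOff with "back-off 40s restarting failed container". The kubelet's restart back-off is commonly described as starting around 10s and doubling per failed restart up to a cap of about five minutes, resetting after the container stays up for a while; 40s is consistent with the third failed start under that rule. A sketch of that schedule follows, with the 10s base and 5m cap stated as assumed defaults rather than values taken from this log:

```go
// backoff.go: illustrative sketch of a CrashLoopBackOff-style doubling delay.
// The 10s base and 5m cap are assumed defaults, not values read from this log.
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay doubles the wait per failed restart, up to maxDelay.
func crashLoopDelay(restarts int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	base, maxDelay := 10*time.Second, 5*time.Minute
	for r := 0; r <= 6; r++ {
		fmt.Printf("after %d failed restarts: wait %s\n", r, crashLoopDelay(r, base, maxDelay))
	}
	// r=2 prints 40s, matching the "back-off 40s restarting failed container" messages.
}
```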
Apr 28 00:33:41.799957 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 28 00:33:42.597333 kubelet[2526]: E0428 00:33:42.583836 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.287s" Apr 28 00:33:44.555099 kubelet[2526]: I0428 00:33:44.549439 2526 scope.go:117] "RemoveContainer" containerID="30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0" Apr 28 00:33:44.900502 kubelet[2526]: E0428 00:33:44.834679 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:45.280440 kubelet[2526]: E0428 00:33:45.066821 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 00:33:59.897794 kubelet[2526]: E0428 00:33:58.979612 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:34:03.811448 kubelet[2526]: E0428 00:34:03.747880 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.478s" Apr 28 00:34:09.343661 kubelet[2526]: E0428 00:34:09.338816 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.514s" Apr 28 00:34:12.156477 kubelet[2526]: E0428 00:34:12.156292 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:34:14.087845 kubelet[2526]: I0428 00:34:14.001849 2526 scope.go:117] "RemoveContainer" containerID="30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0" Apr 28 00:34:14.356299 kubelet[2526]: E0428 00:34:14.352611 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:15.477651 kubelet[2526]: I0428 00:34:15.315572 2526 scope.go:117] "RemoveContainer" containerID="b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d" Apr 28 00:34:16.773548 kubelet[2526]: E0428 00:34:16.711772 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:22.555785 containerd[1473]: time="2026-04-28T00:34:22.552271507Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Apr 28 00:34:23.340760 kubelet[2526]: E0428 00:34:23.079880 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:34:23.941240 kubelet[2526]: E0428 00:34:23.941060 2526 kubelet.go:2618] "Housekeeping took longer than 
expected" err="housekeeping took too long" expected="1s" actual="14.305s" Apr 28 00:34:24.088753 containerd[1473]: time="2026-04-28T00:34:24.088427739Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a\"" Apr 28 00:34:24.906435 containerd[1473]: time="2026-04-28T00:34:24.855797145Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}" Apr 28 00:34:24.906435 containerd[1473]: time="2026-04-28T00:34:24.904542604Z" level=info msg="StartContainer for \"6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a\"" Apr 28 00:34:27.043305 sshd[5972]: pam_unix(sshd:session): session closed for user core Apr 28 00:34:28.101820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242711831.mount: Deactivated successfully. Apr 28 00:34:28.164657 systemd[1]: sshd@31-10.0.0.11:22-10.0.0.1:53430.service: Deactivated successfully. Apr 28 00:34:28.165000 systemd[1]: sshd@31-10.0.0.11:22-10.0.0.1:53430.service: Consumed 2.098s CPU time. Apr 28 00:34:28.434086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265088931.mount: Deactivated successfully. Apr 28 00:34:28.461485 systemd[1]: session-32.scope: Deactivated successfully. Apr 28 00:34:28.464538 systemd[1]: session-32.scope: Consumed 21.725s CPU time. Apr 28 00:34:28.545940 systemd-logind[1457]: Session 32 logged out. Waiting for processes to exit. Apr 28 00:34:28.625610 containerd[1473]: time="2026-04-28T00:34:28.608763535Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7\"" Apr 28 00:34:28.812687 systemd-logind[1457]: Removed session 32. Apr 28 00:34:28.967098 containerd[1473]: time="2026-04-28T00:34:28.964181151Z" level=info msg="StartContainer for \"a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7\"" Apr 28 00:34:29.393573 kubelet[2526]: E0428 00:34:29.390135 2526 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 28 00:34:33.010245 systemd[1]: Started sshd@32-10.0.0.11:22-10.0.0.1:55278.service - OpenSSH per-connection server daemon (10.0.0.1:55278). 
Apr 28 00:34:33.144476 kubelet[2526]: E0428 00:34:33.144393 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.203s" Apr 28 00:34:40.447490 kubelet[2526]: E0428 00:34:40.443281 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Apr 28 00:34:41.920214 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 55278 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:34:42.653829 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:34:43.724767 systemd-logind[1457]: New session 33 of user core. Apr 28 00:34:44.046784 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 28 00:34:48.369932 systemd[1]: Started cri-containerd-6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a.scope - libcontainer container 6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a. Apr 28 00:34:49.438579 kubelet[2526]: E0428 00:34:49.438222 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.277s" Apr 28 00:34:49.849858 kubelet[2526]: E0428 00:34:49.839425 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:49.849858 kubelet[2526]: E0428 00:34:49.842736 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:50.175651 kubelet[2526]: E0428 00:34:50.170387 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:51.034742 kubelet[2526]: E0428 00:34:51.032396 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:51.034742 kubelet[2526]: E0428 00:34:51.033057 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:51.346669 containerd[1473]: time="2026-04-28T00:34:51.249339798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:34:51.346669 containerd[1473]: time="2026-04-28T00:34:51.249545338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:34:51.346669 containerd[1473]: time="2026-04-28T00:34:51.249563340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:34:51.346669 containerd[1473]: time="2026-04-28T00:34:51.250079526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:34:51.406289 kubelet[2526]: E0428 00:34:51.400761 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.798s" Apr 28 00:34:52.207935 sshd[6097]: pam_unix(sshd:session): session closed for user core Apr 28 00:34:52.387714 systemd[1]: sshd@32-10.0.0.11:22-10.0.0.1:55278.service: Deactivated successfully. Apr 28 00:34:52.389375 systemd[1]: sshd@32-10.0.0.11:22-10.0.0.1:55278.service: Consumed 2.743s CPU time. Apr 28 00:34:52.409671 systemd[1]: session-33.scope: Deactivated successfully. Apr 28 00:34:52.413010 systemd[1]: session-33.scope: Consumed 4.609s CPU time. Apr 28 00:34:52.422497 systemd-logind[1457]: Session 33 logged out. Waiting for processes to exit. Apr 28 00:34:52.450707 systemd-logind[1457]: Removed session 33. Apr 28 00:34:52.769273 kubelet[2526]: E0428 00:34:52.766565 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.365s" Apr 28 00:34:53.992506 containerd[1473]: time="2026-04-28T00:34:53.991774275Z" level=info msg="StartContainer for \"6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a\" returns successfully" Apr 28 00:34:54.018361 systemd[1]: Started cri-containerd-a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7.scope - libcontainer container a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7. Apr 28 00:34:55.164033 kubelet[2526]: E0428 00:34:55.163281 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:56.452627 kubelet[2526]: E0428 00:34:56.418764 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.185s" Apr 28 00:34:56.746989 kubelet[2526]: E0428 00:34:56.746444 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:57.233342 containerd[1473]: time="2026-04-28T00:34:57.229737175Z" level=info msg="StartContainer for \"a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7\" returns successfully" Apr 28 00:34:58.222686 systemd[1]: Started sshd@33-10.0.0.11:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968). Apr 28 00:35:03.477309 sshd[6238]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:35:03.993319 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:35:05.623733 systemd-logind[1457]: New session 34 of user core. Apr 28 00:35:06.165180 systemd[1]: Started session-34.scope - Session 34 of User core. 
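The lease failures above ("Failed to update lease ... leases/localhost?timeout=10s: context deadline exceeded") are PUTs bounded to roughly ten seconds; when the apiserver does not answer inside that window the request's context expires and this exact error string is surfaced. A sketch of the pattern using plain net/http against the URL taken from the log (not the kubelet's client machinery):

```go
// leaseput.go: sketch of a PUT with a 10s context deadline, the pattern behind
// the "context deadline exceeded" lease errors above. Body is a placeholder.
package main

import (
	"context"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	body := strings.NewReader(`{"spec":{"renewTime":"2026-04-28T00:35:00Z"}}`)
	req, err := http.NewRequestWithContext(ctx, http.MethodPut,
		"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s",
		body)
	if err != nil {
		fmt.Println(err)
		return
	}
	req.Header.Set("Content-Type", "application/json")

	if _, err := http.DefaultClient.Do(req); err != nil {
		// Against an unreachable or overloaded apiserver this prints the familiar
		// "context deadline exceeded" once the 10s budget is spent.
		fmt.Println("Failed to update lease:", err)
	}
}
```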
Apr 28 00:35:14.212546 kubelet[2526]: E0428 00:35:13.749695 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:35:15.663153 kubelet[2526]: E0428 00:35:15.659038 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.455s" Apr 28 00:35:25.671643 kubelet[2526]: E0428 00:35:25.662650 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:35:37.557451 kubelet[2526]: E0428 00:35:37.557144 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:35:46.860293 kubelet[2526]: I0428 00:35:46.859270 2526 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:47.268661 kubelet[2526]: I0428 00:35:47.263647 2526 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:47.779288 kubelet[2526]: I0428 00:35:47.693686 2526 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:47.891111 kubelet[2526]: I0428 00:35:47.847599 2526 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:48.648534 kubelet[2526]: I0428 00:35:48.647597 2526 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:48.648534 kubelet[2526]: I0428 00:35:48.040369 2526 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:48.648534 kubelet[2526]: I0428 00:35:48.647721 2526 reflector.go:571] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:48.648534 kubelet[2526]: E0428 00:35:48.647035 2526 status_manager.go:1041] "Failed to update status for pod" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3050adca-1502-494a-af1d-f384b2fe157b\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"100m\\\"},\\\"containerID\\\":\\\"containerd://6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.34.7\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-28T00:33:06Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-28T00:32:11Z\\\"}},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\"}},\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-28T00:34:53Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-scheduler-localhost\": Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost/status\": http2: client connection lost" pod="kube-system/kube-scheduler-localhost" Apr 28 00:35:48.655995 kubelet[2526]: E0428 00:35:48.646986 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": http2: client connection lost" Apr 28 00:35:48.743433 kubelet[2526]: I0428 00:35:48.040399 2526 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:49.989601 kubelet[2526]: I0428 00:35:49.104756 2526 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:52.739456 kubelet[2526]: I0428 00:35:52.738250 2526 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 00:35:52.935750 kubelet[2526]: E0428 00:35:47.502572 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": http2: client connection lost" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:35:55.748506 kubelet[2526]: E0428 00:35:55.745598 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="39.851s" Apr 28 00:35:56.003078 kubelet[2526]: E0428 00:35:56.001190 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:35:56.080399 kubelet[2526]: E0428 00:35:56.053193 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:35:56.080399 kubelet[2526]: E0428 00:35:56.053874 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:35:58.935651 kubelet[2526]: E0428 00:35:58.929363 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:36:00.386310 kubelet[2526]: I0428 00:36:00.382081 2526 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 28 00:36:01.013367 kubelet[2526]: E0428 00:36:01.000375 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:36:01.544978 kubelet[2526]: E0428 00:36:01.542226 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:36:03.840616 kubelet[2526]: E0428 00:36:03.829033 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.073s" Apr 28 00:36:04.211343 kubelet[2526]: E0428 00:36:03.866352 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:36:04.583534 kubelet[2526]: E0428 00:36:04.574778 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:36:04.605576 kubelet[2526]: E0428 00:36:04.589401 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:36:04.802531 kubelet[2526]: E0428 00:36:04.589361 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get 
\"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:36:05.059326 kubelet[2526]: E0428 00:36:04.766558 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:36:05.353609 kubelet[2526]: E0428 00:36:05.350641 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:36:05.374550 kubelet[2526]: E0428 00:36:05.350665 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:36:06.402639 kubelet[2526]: E0428 00:36:06.195644 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:36:06.593460 kubelet[2526]: E0428 00:36:06.397840 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:36:06.954656 kubelet[2526]: E0428 00:36:06.952349 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:36:07.486481 kubelet[2526]: E0428 00:36:07.484420 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:36:09.508274 sshd[6238]: pam_unix(sshd:session): session closed for user core Apr 28 00:36:10.058336 systemd[1]: sshd@33-10.0.0.11:22-10.0.0.1:51968.service: Deactivated successfully. Apr 28 00:36:10.093237 systemd[1]: sshd@33-10.0.0.11:22-10.0.0.1:51968.service: Consumed 1.852s CPU time. Apr 28 00:36:10.488289 systemd[1]: session-34.scope: Deactivated successfully. Apr 28 00:36:10.533241 systemd[1]: session-34.scope: Consumed 34.803s CPU time. Apr 28 00:36:10.781573 systemd-logind[1457]: Session 34 logged out. Waiting for processes to exit. Apr 28 00:36:11.023642 systemd-logind[1457]: Removed session 34. 
Apr 28 00:36:13.000357 kubelet[2526]: E0428 00:36:12.993130 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:36:13.680884 kubelet[2526]: E0428 00:36:12.185256 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 28 00:36:15.702047 kubelet[2526]: E0428 00:36:13.796819 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:36:15.826559 systemd[1]: Started sshd@34-10.0.0.11:22-10.0.0.1:33926.service - OpenSSH per-connection server daemon (10.0.0.1:33926). Apr 28 00:36:22.116475 sshd[6314]: Accepted publickey for core from 10.0.0.1 port 33926 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:36:22.906132 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:36:23.752674 kubelet[2526]: E0428 00:36:23.752404 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:36:23.993615 systemd-logind[1457]: New session 35 of user core. Apr 28 00:36:24.050558 kubelet[2526]: E0428 00:36:24.044394 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:36:24.232281 kubelet[2526]: E0428 00:36:24.232177 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:36:24.255405 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 28 00:36:24.367246 kubelet[2526]: E0428 00:36:24.367021 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:36:24.367597 kubelet[2526]: E0428 00:36:24.367274 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:36:24.550643 kubelet[2526]: E0428 00:36:24.252564 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:36:24.661483 kubelet[2526]: E0428 00:36:24.606570 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:36:25.151242 kubelet[2526]: E0428 00:36:25.139671 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 28 00:36:25.369847 kubelet[2526]: E0428 00:36:25.159373 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:36:25.369847 kubelet[2526]: E0428 00:36:25.160830 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:36:25.369847 kubelet[2526]: E0428 00:36:25.161215 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:36:31.191553 kubelet[2526]: E0428 00:36:27.152465 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:36:36.080052 kubelet[2526]: E0428 00:36:36.074205 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake 
timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:36:37.098765 kubelet[2526]: E0428 00:36:35.522528 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.279s" Apr 28 00:36:41.375376 kubelet[2526]: E0428 00:36:41.371456 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 28 00:36:47.000035 kubelet[2526]: E0428 00:36:45.499828 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:36:50.174631 kubelet[2526]: E0428 00:36:47.856599 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:36:50.713441 kubelet[2526]: E0428 00:36:50.664427 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:36:52.763402 kubelet[2526]: E0428 00:36:52.760074 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:36:54.267577 kubelet[2526]: E0428 00:36:51.326370 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:36:55.051750 kubelet[2526]: E0428 00:36:52.501144 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:36:56.205613 kubelet[2526]: E0428 00:36:55.512325 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:36:57.358569 kubelet[2526]: E0428 00:36:56.933595 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="1.6s" Apr 28 00:36:57.455556 kubelet[2526]: E0428 00:36:57.358338 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:36:57.685259 kubelet[2526]: E0428 00:36:57.605884 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:36:58.378477 kubelet[2526]: E0428 00:36:56.412538 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:37:02.231225 kubelet[2526]: E0428 00:37:02.210642 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:02.764443 kubelet[2526]: E0428 00:37:02.351647 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:37:05.485383 kubelet[2526]: E0428 00:37:05.484273 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:37:10.144983 kubelet[2526]: E0428 00:37:10.144429 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Apr 28 00:37:11.252683 kubelet[2526]: E0428 00:37:10.130612 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:11.380122 sshd[6314]: 
pam_unix(sshd:session): session closed for user core Apr 28 00:37:12.507259 systemd[1]: sshd@34-10.0.0.11:22-10.0.0.1:33926.service: Deactivated successfully. Apr 28 00:37:12.538413 systemd[1]: sshd@34-10.0.0.11:22-10.0.0.1:33926.service: Consumed 2.406s CPU time. Apr 28 00:37:12.971527 systemd[1]: session-35.scope: Deactivated successfully. Apr 28 00:37:12.990464 systemd[1]: session-35.scope: Consumed 25.166s CPU time. Apr 28 00:37:13.072596 kubelet[2526]: E0428 00:37:13.065217 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:13.159851 systemd-logind[1457]: Session 35 logged out. Waiting for processes to exit. Apr 28 00:37:13.502640 systemd-logind[1457]: Removed session 35. Apr 28 00:37:13.922126 containerd[1473]: time="2026-04-28T00:37:13.909256584Z" level=info msg="StopContainer for \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\" with timeout 30 (s)" Apr 28 00:37:14.102747 containerd[1473]: time="2026-04-28T00:37:14.069449449Z" level=info msg="Stop container \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\" with signal terminated" Apr 28 00:37:14.865563 kubelet[2526]: E0428 00:37:14.860478 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:37:16.839383 kubelet[2526]: E0428 00:37:16.375642 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:18.064486 systemd[1]: Started sshd@35-10.0.0.11:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804). 
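The containerd entries above ("StopContainer ... with timeout 30 (s)", "Stop container ... with signal terminated") begin the usual graceful stop: SIGTERM first, then SIGKILL if the container has not exited within the grace period (the matching "Kill container" entry appears further down); the 30 seconds is typically the pod's termination grace period. A sketch of that term-then-kill pattern against an ordinary process, not containerd's implementation:

```go
// termkill.go: sketch of the SIGTERM-then-SIGKILL stop pattern behind the
// StopContainer / Kill container messages above. Runs against a plain process.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	const gracePeriod = 30 * time.Second

	cmd := exec.Command("sleep", "300") // stand-in for the container's init process
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Ask nicely first.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	select {
	case err := <-done:
		fmt.Println("exited after SIGTERM:", err)
	case <-time.After(gracePeriod):
		fmt.Println("grace period expired, sending SIGKILL")
		_ = cmd.Process.Kill()
		<-done
	}
}
```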
Apr 28 00:37:18.963437 kubelet[2526]: E0428 00:37:18.950351 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:20.865594 kubelet[2526]: E0428 00:37:20.856554 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:37:22.182872 kubelet[2526]: E0428 00:37:22.182014 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:37:23.301433 kubelet[2526]: E0428 00:37:22.007505 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:37:23.441520 kubelet[2526]: E0428 00:37:22.061665 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:23.465826 kubelet[2526]: E0428 00:37:23.464331 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:37:25.637314 kubelet[2526]: E0428 00:37:25.627586 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:37:26.040811 sshd[6386]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:37:26.090279 kubelet[2526]: E0428 00:37:25.627577 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:37:26.195618 kubelet[2526]: E0428 00:37:24.895290 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:37:26.391429 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:37:26.673458 kubelet[2526]: E0428 00:37:26.668796 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 28 00:37:26.757933 kubelet[2526]: E0428 00:37:26.735805 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:37:26.757933 kubelet[2526]: E0428 00:37:26.749163 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:37:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:37:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:37:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:37:05Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 28 00:37:26.878483 systemd-logind[1457]: New session 36 of user core. Apr 28 00:37:27.052845 systemd[1]: Started session-36.scope - Session 36 of User core. 
Apr 28 00:37:28.984347 kubelet[2526]: E0428 00:37:28.155322 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:37:29.408717 kubelet[2526]: E0428 00:37:28.967573 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:37:32.580373 kubelet[2526]: E0428 00:37:32.578355 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="28.069s" Apr 28 00:37:34.476755 kubelet[2526]: E0428 00:37:34.447844 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:37:35.086050 kubelet[2526]: E0428 00:37:35.085446 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:37.159502 kubelet[2526]: E0428 00:37:37.152499 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:37:38.059722 kubelet[2526]: E0428 00:37:38.058438 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.376s" Apr 28 00:37:39.419525 kubelet[2526]: E0428 00:37:39.417185 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:39.806772 kubelet[2526]: E0428 00:37:39.793945 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:41.494539 kubelet[2526]: E0428 00:37:41.220199 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.091s" Apr 28 00:37:42.141577 kubelet[2526]: E0428 00:37:42.090320 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:37:42.232718 kubelet[2526]: E0428 00:37:41.878626 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:43.000377 kubelet[2526]: E0428 00:37:42.996545 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:43.588619 kubelet[2526]: E0428 00:37:43.558528 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:43.896579 kubelet[2526]: E0428 00:37:43.892021 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:37:45.262982 kubelet[2526]: E0428 00:37:45.262010 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.852s" Apr 28 00:37:45.812217 containerd[1473]: time="2026-04-28T00:37:45.805781082Z" level=info msg="Kill container \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\"" Apr 28 00:37:47.199365 kubelet[2526]: E0428 00:37:47.186584 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:37:49.459231 systemd[1]: cri-containerd-c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17.scope: Deactivated successfully. Apr 28 00:37:49.473212 systemd[1]: cri-containerd-c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17.scope: Consumed 13min 13.702s CPU time, 188.3M memory peak, 0B memory swap peak. Apr 28 00:37:51.165728 kubelet[2526]: E0428 00:37:51.165553 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:37:51.267512 sshd[6386]: pam_unix(sshd:session): session closed for user core Apr 28 00:37:51.298113 kubelet[2526]: E0428 00:37:51.262523 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:37:51.412484 systemd[1]: sshd@35-10.0.0.11:22-10.0.0.1:43804.service: Deactivated successfully. Apr 28 00:37:51.464346 systemd[1]: sshd@35-10.0.0.11:22-10.0.0.1:43804.service: Consumed 2.812s CPU time. Apr 28 00:37:51.640300 systemd[1]: session-36.scope: Deactivated successfully. Apr 28 00:37:51.640670 systemd[1]: session-36.scope: Consumed 12.919s CPU time. 
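The kubelet_node_status.go:486 "Error updating node status, will retry" entries, followed shortly by "Unable to update node status ... exceeds retry count", reflect a bounded retry: the kubelet attempts the status update a fixed number of times per sync and then gives up until the next cycle. A sketch of that shape, with the count of five stated as an assumed default (it also matches the "failed 5 attempts to update lease" wording used for the lease path earlier in this log):

```go
// nodestatusretry.go: sketch of a bounded retry loop like the node status
// updates above; nodeStatusUpdateRetry=5 is an assumed default, not read here.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5

// tryPatchNodeStatus stands in for the PATCH to /api/v1/nodes/<node>/status;
// it always fails here to mirror an unreachable apiserver.
func tryPatchNodeStatus(attempt int) error {
	return errors.New("connect: connection refused")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(i); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```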
Apr 28 00:37:51.658096 systemd-logind[1457]: Session 36 logged out. Waiting for processes to exit. Apr 28 00:37:51.659343 systemd-logind[1457]: Removed session 36. Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660155 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59728->10.0.0.11:6443: read: connection reset by peer" Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660507 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59692->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660581 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59724->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660706 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59802->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660770 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59682->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660859 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59650->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:37:51.663174 kubelet[2526]: E0428 00:37:51.660833 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59678->10.0.0.11:6443: read: connection reset by peer" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:37:51.832297 kubelet[2526]: E0428 00:37:51.831814 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59716->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:37:51.839228 kubelet[2526]: E0428 00:37:51.809046 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:37:51.839228 kubelet[2526]: E0428 00:37:51.833027 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:37:51.861312 kubelet[2526]: E0428 00:37:51.840742 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59686->10.0.0.11:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:37:52.266362 kubelet[2526]: E0428 00:37:52.265347 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.036s" Apr 28 00:37:52.686509 kubelet[2526]: E0428 00:37:52.677585 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.11:59660->10.0.0.11:6443: read: connection reset by peer" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:37:52.861944 kubelet[2526]: E0428 00:37:52.861327 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:37:52.875506 kubelet[2526]: E0428 00:37:52.859609 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC 
m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:52.901997 kubelet[2526]: E0428 00:37:52.877323 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:37:53.033796 kubelet[2526]: E0428 00:37:53.002384 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:37:53.103386 kubelet[2526]: E0428 00:37:53.091924 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:37:53.515867 kubelet[2526]: E0428 00:37:53.506296 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:37:53.572458 kubelet[2526]: E0428 00:37:53.566284 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.274s" Apr 28 00:37:53.592566 kubelet[2526]: E0428 00:37:53.589977 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:37:53.748415 kubelet[2526]: E0428 00:37:53.746346 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:37:53.777725 kubelet[2526]: E0428 00:37:53.755490 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:37:53.902522 kubelet[2526]: E0428 00:37:53.899744 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:37:54.069062 kubelet[2526]: E0428 00:37:54.067198 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" 
podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:37:54.138579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17-rootfs.mount: Deactivated successfully. Apr 28 00:37:54.156581 kubelet[2526]: E0428 00:37:54.140666 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:37:54.175313 containerd[1473]: time="2026-04-28T00:37:54.102431723Z" level=info msg="shim disconnected" id=c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17 namespace=k8s.io Apr 28 00:37:54.175313 containerd[1473]: time="2026-04-28T00:37:54.140031801Z" level=warning msg="cleaning up after shim disconnected" id=c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17 namespace=k8s.io Apr 28 00:37:54.175313 containerd[1473]: time="2026-04-28T00:37:54.140460354Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:37:54.372623 kubelet[2526]: E0428 00:37:54.367854 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:37:54.435770 kubelet[2526]: E0428 00:37:54.435475 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:54.477341 kubelet[2526]: E0428 00:37:54.471164 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:37:54.610624 kubelet[2526]: E0428 00:37:54.601803 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:37:54.670742 kubelet[2526]: E0428 00:37:54.611673 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:37:54.704101 kubelet[2526]: E0428 00:37:54.700637 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:37:56.358585 containerd[1473]: time="2026-04-28T00:37:56.354431980Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:37:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io 
Apr 28 00:37:56.504215 kubelet[2526]: E0428 00:37:56.501605 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:37:56.680426 containerd[1473]: time="2026-04-28T00:37:56.543877940Z" level=info msg="StopContainer for \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\" returns successfully" Apr 28 00:37:56.767397 kubelet[2526]: E0428 00:37:56.766202 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:57.165696 systemd[1]: Started sshd@36-10.0.0.11:22-10.0.0.1:55838.service - OpenSSH per-connection server daemon (10.0.0.1:55838). Apr 28 00:37:57.338553 containerd[1473]: time="2026-04-28T00:37:57.338482483Z" level=info msg="CreateContainer within sandbox \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Apr 28 00:37:58.262148 containerd[1473]: time="2026-04-28T00:37:58.261921731Z" level=info msg="CreateContainer within sandbox \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e\"" Apr 28 00:37:58.505718 containerd[1473]: time="2026-04-28T00:37:58.504031077Z" level=info msg="StartContainer for \"00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e\"" Apr 28 00:37:58.648233 kubelet[2526]: E0428 00:37:58.646537 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:37:59.810138 sshd[6495]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:38:00.007801 kubelet[2526]: E0428 00:38:00.005804 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:38:00.080380 sshd[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:38:00.465242 kubelet[2526]: E0428 00:38:00.404202 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:38:00.720441 kubelet[2526]: E0428 00:38:00.708994 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.477s" Apr 28 00:38:00.793289 kubelet[2526]: E0428 00:38:00.753368 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" 
pod="kube-system/kube-apiserver-localhost" Apr 28 00:38:01.103913 kubelet[2526]: E0428 00:38:01.096472 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:38:01.150579 systemd-logind[1457]: New session 37 of user core. Apr 28 00:38:01.560853 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 28 00:38:01.705405 kubelet[2526]: E0428 00:38:01.705158 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:38:02.265284 kubelet[2526]: E0428 00:38:02.265228 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:02Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:02.746905 kubelet[2526]: E0428 00:38:02.746818 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.439s" Apr 28 00:38:02.767104 kubelet[2526]: E0428 00:38:02.766843 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:02.774847 kubelet[2526]: E0428 00:38:02.771074 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:02.774847 kubelet[2526]: E0428 00:38:02.771810 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:02.774847 kubelet[2526]: E0428 00:38:02.772030 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:02.774847 kubelet[2526]: E0428 00:38:02.772039 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:38:03.001475 kubelet[2526]: E0428 00:38:02.996556 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:03.157400 systemd[1]: run-containerd-runc-k8s.io-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e-runc.232sph.mount: Deactivated successfully. Apr 28 00:38:03.199432 systemd[1]: Started cri-containerd-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e.scope - libcontainer container 00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e. Apr 28 00:38:04.259445 kubelet[2526]: E0428 00:38:04.259241 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:05.961431 containerd[1473]: time="2026-04-28T00:38:05.946364936Z" level=error msg="get state for 00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e" error="context deadline exceeded: unknown" Apr 28 00:38:06.077144 containerd[1473]: time="2026-04-28T00:38:06.040495675Z" level=warning msg="unknown status" status=0 Apr 28 00:38:06.242167 kubelet[2526]: E0428 00:38:06.240941 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:38:06.245224 kubelet[2526]: E0428 00:38:06.244437 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.031s" Apr 28 00:38:06.254401 containerd[1473]: time="2026-04-28T00:38:06.253437584Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:38:07.450085 sshd[6495]: pam_unix(sshd:session): session closed for user core Apr 28 00:38:08.002638 systemd[1]: sshd@36-10.0.0.11:22-10.0.0.1:55838.service: Deactivated successfully. Apr 28 00:38:08.004095 systemd[1]: sshd@36-10.0.0.11:22-10.0.0.1:55838.service: Consumed 1.402s CPU time. Apr 28 00:38:08.325929 systemd[1]: session-37.scope: Deactivated successfully. Apr 28 00:38:08.326490 systemd[1]: session-37.scope: Consumed 4.302s CPU time. Apr 28 00:38:08.497469 systemd-logind[1457]: Session 37 logged out. Waiting for processes to exit. Apr 28 00:38:08.559975 systemd-logind[1457]: Removed session 37. 
Apr 28 00:38:08.691492 kubelet[2526]: E0428 00:38:08.686370 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.408s" Apr 28 00:38:09.093199 containerd[1473]: time="2026-04-28T00:38:09.086602363Z" level=error msg="get state for 00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e" error="context deadline exceeded: unknown" Apr 28 00:38:09.093199 containerd[1473]: time="2026-04-28T00:38:09.088477201Z" level=warning msg="unknown status" status=0 Apr 28 00:38:09.297733 containerd[1473]: time="2026-04-28T00:38:09.296721609Z" level=error msg="ttrpc: received message on inactive stream" stream=15 Apr 28 00:38:09.335581 containerd[1473]: time="2026-04-28T00:38:09.334202701Z" level=info msg="StartContainer for \"00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e\" returns successfully" Apr 28 00:38:10.094542 kubelet[2526]: E0428 00:38:10.091405 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:38:10.175403 kubelet[2526]: E0428 00:38:10.118363 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:38:10.661877 kubelet[2526]: E0428 00:38:10.661519 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:38:11.126119 kubelet[2526]: E0428 00:38:11.125753 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:38:11.491637 kubelet[2526]: E0428 00:38:11.480762 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.199s" Apr 28 00:38:11.589753 kubelet[2526]: E0428 00:38:11.253862 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:38:12.736300 kubelet[2526]: E0428 00:38:12.734869 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:12.736300 kubelet[2526]: E0428 00:38:12.735483 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:38:12.736300 kubelet[2526]: E0428 00:38:12.735627 2526 
status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:38:12.736300 kubelet[2526]: E0428 00:38:12.735800 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:38:12.736300 kubelet[2526]: E0428 00:38:12.736007 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:38:12.736300 kubelet[2526]: E0428 00:38:12.736173 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:38:12.806264 systemd[1]: Started sshd@37-10.0.0.11:22-10.0.0.1:46846.service - OpenSSH per-connection server daemon (10.0.0.1:46846). Apr 28 00:38:13.166462 kubelet[2526]: E0428 00:38:13.085396 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:13.245146 kubelet[2526]: E0428 00:38:13.245021 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:38:14.130395 kubelet[2526]: E0428 00:38:14.117453 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:13Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:14.401568 kubelet[2526]: E0428 00:38:14.401190 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:15.055717 kubelet[2526]: E0428 00:38:15.051037 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:15.055717 kubelet[2526]: E0428 00:38:15.051412 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:15.055717 kubelet[2526]: E0428 00:38:15.051610 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:38:15.055717 kubelet[2526]: E0428 00:38:15.051631 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:38:16.457079 sshd[6592]: Accepted publickey for core from 10.0.0.1 port 46846 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:38:16.587379 kubelet[2526]: E0428 00:38:16.497528 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:16.636616 sshd[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:38:17.786450 systemd-logind[1457]: New session 38 of user core. Apr 28 00:38:18.015348 systemd[1]: Started session-38.scope - Session 38 of User core. 
Apr 28 00:38:21.738572 kubelet[2526]: E0428 00:38:21.728434 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:22.704304 kubelet[2526]: E0428 00:38:22.655329 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.339s" Apr 28 00:38:26.056098 kubelet[2526]: E0428 00:38:26.050473 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.282s" Apr 28 00:38:27.293598 kubelet[2526]: E0428 00:38:27.292391 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:29.662244 kubelet[2526]: E0428 00:38:29.661397 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:31.203644 kubelet[2526]: E0428 00:38:31.083204 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:38:31.264222 kubelet[2526]: E0428 00:38:31.250875 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:38:32.314709 kubelet[2526]: E0428 00:38:32.300483 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:38:33.804737 kubelet[2526]: E0428 00:38:33.782707 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.392s" Apr 28 00:38:34.157367 kubelet[2526]: E0428 00:38:33.782721 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:35.495461 kubelet[2526]: E0428 00:38:35.487914 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="1.527s" Apr 28 00:38:36.926300 kubelet[2526]: E0428 00:38:36.926106 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.438s" Apr 28 00:38:39.110674 kubelet[2526]: E0428 00:38:39.109438 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:38:39.426147 kubelet[2526]: E0428 00:38:39.425967 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:38:40.676544 kubelet[2526]: E0428 00:38:40.675385 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:38:26Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 28 00:38:40.767854 kubelet[2526]: E0428 00:38:40.766342 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:38:40.964173 kubelet[2526]: E0428 00:38:40.954828 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:38:41.568250 kubelet[2526]: E0428 00:38:41.507856 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:38:41.740546 kubelet[2526]: E0428 00:38:41.695143 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:38:42.889666 kubelet[2526]: E0428 00:38:42.888290 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.529s" Apr 28 00:38:43.121068 
kubelet[2526]: E0428 00:38:43.059287 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:38:44.479307 sshd[6592]: pam_unix(sshd:session): session closed for user core Apr 28 00:38:45.058533 systemd[1]: sshd@37-10.0.0.11:22-10.0.0.1:46846.service: Deactivated successfully. Apr 28 00:38:45.098812 systemd[1]: sshd@37-10.0.0.11:22-10.0.0.1:46846.service: Consumed 1.504s CPU time. Apr 28 00:38:45.503236 systemd[1]: session-38.scope: Deactivated successfully. Apr 28 00:38:45.537492 systemd[1]: session-38.scope: Consumed 11.420s CPU time. Apr 28 00:38:45.745866 systemd-logind[1457]: Session 38 logged out. Waiting for processes to exit. Apr 28 00:38:46.037865 systemd-logind[1457]: Removed session 38. Apr 28 00:38:47.089188 kubelet[2526]: E0428 00:38:47.089152 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:38:48.987627 kubelet[2526]: E0428 00:38:48.981834 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:38:49.104109 kubelet[2526]: E0428 00:38:49.099271 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:38:50.202675 kubelet[2526]: E0428 00:38:50.201128 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:38:50.801204 systemd[1]: Started sshd@38-10.0.0.11:22-10.0.0.1:56864.service - OpenSSH per-connection server daemon (10.0.0.1:56864). 
Apr 28 00:38:51.993722 kubelet[2526]: E0428 00:38:51.746050 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: TLS handshake timeout" Apr 28 00:38:52.461613 kubelet[2526]: E0428 00:38:52.404852 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:38:54.965459 kubelet[2526]: E0428 00:38:54.964526 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.751s" Apr 28 00:38:56.076173 kubelet[2526]: E0428 00:38:55.908055 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:56.694271 kubelet[2526]: E0428 00:38:56.685370 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.141s" Apr 28 00:38:56.901329 sshd[6681]: Accepted publickey for core from 10.0.0.1 port 56864 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:38:57.283842 sshd[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:38:57.964703 systemd-logind[1457]: New session 39 of user core. Apr 28 00:38:58.344776 systemd[1]: Started session-39.scope - Session 39 of User core. 
Apr 28 00:39:03.401790 kubelet[2526]: E0428 00:39:03.400857 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:39:04.169648 kubelet[2526]: E0428 00:39:04.099296 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:39:06.395251 kubelet[2526]: E0428 00:39:06.175597 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.383s" Apr 28 00:39:06.686454 kubelet[2526]: E0428 00:39:06.673620 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:39:08.029175 kubelet[2526]: E0428 00:39:08.026425 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:39:09.093203 kubelet[2526]: E0428 00:39:09.091597 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:39:10.558476 kubelet[2526]: E0428 00:39:10.556669 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:39:14.144412 kubelet[2526]: E0428 00:39:14.141056 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:39:14.455622 kubelet[2526]: E0428 00:39:14.449242 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.073s" Apr 28 00:39:14.498614 kubelet[2526]: E0428 00:39:14.447677 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:39:14.879626 kubelet[2526]: E0428 00:39:14.878097 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:39:16.872349 kubelet[2526]: E0428 00:39:16.572481 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.001s" Apr 28 00:39:18.864250 sshd[6681]: pam_unix(sshd:session): session closed for user core Apr 28 00:39:19.603824 systemd[1]: sshd@38-10.0.0.11:22-10.0.0.1:56864.service: Deactivated successfully. Apr 28 00:39:19.608614 systemd[1]: sshd@38-10.0.0.11:22-10.0.0.1:56864.service: Consumed 2.366s CPU time. Apr 28 00:39:19.872558 systemd[1]: session-39.scope: Deactivated successfully. 
Apr 28 00:39:19.885366 systemd[1]: session-39.scope: Consumed 7.167s CPU time. Apr 28 00:39:20.101862 systemd-logind[1457]: Session 39 logged out. Waiting for processes to exit. Apr 28 00:39:20.177173 systemd-logind[1457]: Removed session 39. Apr 28 00:39:21.359663 kubelet[2526]: E0428 00:39:20.316510 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:22.583334 kubelet[2526]: E0428 00:39:22.582846 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:39:22.614118 kubelet[2526]: E0428 00:39:22.593956 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.391s" Apr 28 00:39:24.459464 kubelet[2526]: E0428 00:39:24.458246 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:39:24.545433 systemd[1]: Started sshd@39-10.0.0.11:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130). 
Apr 28 00:39:25.093296 kubelet[2526]: E0428 00:39:25.078721 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:39:25.847524 kubelet[2526]: E0428 00:39:25.432605 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:39:26.214064 kubelet[2526]: E0428 00:39:26.112623 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:39:26.713185 kubelet[2526]: E0428 00:39:26.514563 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:39:26.851286 sshd[6754]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:39:26.958652 kubelet[2526]: E0428 00:39:26.889607 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.22s" Apr 28 00:39:27.138723 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:39:27.985357 systemd-logind[1457]: New session 40 of user core. Apr 28 00:39:28.341560 systemd[1]: Started session-40.scope - Session 40 of User core. 
Apr 28 00:39:29.072246 kubelet[2526]: E0428 00:39:29.064451 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:39:32.100486 kubelet[2526]: E0428 00:39:32.097366 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.206s" Apr 28 00:39:34.272590 kubelet[2526]: E0428 00:39:34.271847 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:39:36.668466 kubelet[2526]: E0428 00:39:36.660968 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:39:36.668466 kubelet[2526]: E0428 00:39:36.667428 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.413s" Apr 28 00:39:36.668466 kubelet[2526]: E0428 00:39:36.667566 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:39:36.668466 kubelet[2526]: E0428 00:39:36.668232 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:39:37.784089 kubelet[2526]: E0428 00:39:37.744723 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.055s" Apr 28 00:39:40.243305 kubelet[2526]: E0428 00:39:40.232646 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:39:42.294506 kubelet[2526]: E0428 00:39:42.081705 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5d9287efedf7\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5d9287efedf7 kube-system 1294 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:24:46 +0000 UTC,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC 
m=+883.130756040,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:42.423372 kubelet[2526]: E0428 00:39:42.417940 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.57s" Apr 28 00:39:42.476265 kubelet[2526]: E0428 00:39:42.408797 2526 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5e20bf50a0f6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,LastTimestamp:2026-04-28 00:34:57.692778742 +0000 UTC m=+883.130756040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:42.845723 kubelet[2526]: E0428 00:39:42.805693 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:39:43.446614 kubelet[2526]: E0428 00:39:43.405502 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:39:44.168633 kubelet[2526]: E0428 00:39:44.167547 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:39:44.799589 kubelet[2526]: E0428 00:39:44.786710 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:39:46.761386 kubelet[2526]: E0428 00:39:46.752746 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:39:47.287010 kubelet[2526]: E0428 00:39:47.286574 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:39:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:39:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:39:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:39:37Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:39:47.859409 sshd[6754]: pam_unix(sshd:session): session closed for user core Apr 28 00:39:48.155707 systemd[1]: sshd@39-10.0.0.11:22-10.0.0.1:33130.service: Deactivated successfully. Apr 28 00:39:48.552310 systemd[1]: session-40.scope: Deactivated successfully. Apr 28 00:39:48.573154 systemd[1]: session-40.scope: Consumed 6.569s CPU time. Apr 28 00:39:48.714628 systemd-logind[1457]: Session 40 logged out. Waiting for processes to exit. Apr 28 00:39:48.962688 systemd-logind[1457]: Removed session 40. Apr 28 00:39:49.306435 kubelet[2526]: E0428 00:39:49.302458 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.951s" Apr 28 00:39:49.766543 kubelet[2526]: E0428 00:39:49.657433 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:39:51.761364 kubelet[2526]: E0428 00:39:51.757808 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.288s" Apr 28 00:39:52.807209 kubelet[2526]: E0428 00:39:52.801453 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:53.053025 kubelet[2526]: E0428 00:39:53.042365 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s" Apr 28 00:39:53.054382 systemd[1]: Started sshd@40-10.0.0.11:22-10.0.0.1:41524.service - OpenSSH per-connection server daemon (10.0.0.1:41524). 
Apr 28 00:39:54.960208 sshd[6822]: Accepted publickey for core from 10.0.0.1 port 41524 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:39:55.211231 sshd[6822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:39:56.190087 systemd-logind[1457]: New session 41 of user core.
Apr 28 00:39:56.332434 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 28 00:39:57.012469 kubelet[2526]: E0428 00:39:57.009617 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc"
Apr 28 00:39:57.516497 kubelet[2526]: E0428 00:39:57.510107 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 28 00:40:01.099636 kubelet[2526]: E0428 00:40:01.062622 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:40:06.269230 kubelet[2526]: E0428 00:40:06.242615 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.742s"
Apr 28 00:40:08.243558 kubelet[2526]: E0428 00:40:08.241776 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 28 00:40:09.458444 kubelet[2526]: E0428 00:40:09.407615 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:40:10.624822 kubelet[2526]: E0428 00:40:10.595439 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz"
Apr 28 00:40:11.334591 kubelet[2526]: E0428 00:40:11.325514 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.599s"
Apr 28 00:40:12.468285 kubelet[2526]: E0428 00:40:12.467147 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:40:13.063457 kubelet[2526]: E0428 00:40:13.062697 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.58s"
Apr 28 00:40:13.593474 kubelet[2526]: E0428 00:40:13.590470 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:40:14.098200 kubelet[2526]: E0428 00:40:14.097048 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:40:14.875437 sshd[6822]: pam_unix(sshd:session): session closed for user core
Apr 28 00:40:15.158979 systemd[1]: sshd@40-10.0.0.11:22-10.0.0.1:41524.service: Deactivated successfully.
Apr 28 00:40:15.337591 systemd[1]: session-41.scope: Deactivated successfully.
Apr 28 00:40:15.342585 systemd[1]: session-41.scope: Consumed 5.734s CPU time.
Apr 28 00:40:15.513536 systemd-logind[1457]: Session 41 logged out. Waiting for processes to exit.
Apr 28 00:40:15.764725 systemd-logind[1457]: Removed session 41.
Apr 28 00:40:18.359576 kubelet[2526]: E0428 00:40:18.356564 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:40:18.780460 kubelet[2526]: E0428 00:40:18.697573 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.412s"
Apr 28 00:40:19.105622 kubelet[2526]: E0428 00:40:19.093855 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 28 00:40:19.936499 kubelet[2526]: E0428 00:40:19.934349 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod"
Apr 28 00:40:20.361736 systemd[1]: Started sshd@41-10.0.0.11:22-10.0.0.1:58676.service - OpenSSH per-connection server daemon (10.0.0.1:58676).
Apr 28 00:40:20.985702 kubelet[2526]: E0428 00:40:20.983433 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:40:22.688246 kubelet[2526]: E0428 00:40:22.667685 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.398s" Apr 28 00:40:23.293854 kubelet[2526]: E0428 00:40:23.251252 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:40:24.684642 sshd[6881]: Accepted publickey for core from 10.0.0.1 port 58676 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:40:24.954287 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:40:25.203761 kubelet[2526]: E0428 00:40:25.096366 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.379s" Apr 28 00:40:25.804998 systemd-logind[1457]: New session 42 of user core. Apr 28 00:40:26.129119 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 28 00:40:30.577250 kubelet[2526]: E0428 00:40:30.297585 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:40:30.625580 kubelet[2526]: E0428 00:40:30.514750 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:40:30.625580 kubelet[2526]: E0428 00:40:30.624097 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:40:31.104866 kubelet[2526]: E0428 00:40:31.098321 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:40:32.640675 kubelet[2526]: E0428 00:40:32.182335 2526 status_manager.go:1018] "Failed to get status for pod" err="Get 
\"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:40:34.089022 kubelet[2526]: E0428 00:40:34.078381 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:40:34.787013 kubelet[2526]: E0428 00:40:34.755712 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:40:35.274324 kubelet[2526]: E0428 00:40:35.256799 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.003s" Apr 28 00:40:35.667745 kubelet[2526]: E0428 00:40:35.641653 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:40:36.315446 kubelet[2526]: E0428 00:40:36.197450 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:36.423610 kubelet[2526]: E0428 00:40:36.413076 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:36.632636 kubelet[2526]: E0428 00:40:36.626472 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:40:36.663528 kubelet[2526]: E0428 00:40:36.656810 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.148s" Apr 28 00:40:36.702506 kubelet[2526]: E0428 00:40:36.696969 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:39.054355 kubelet[2526]: E0428 00:40:39.053457 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.841s" Apr 28 00:40:40.167111 kubelet[2526]: E0428 00:40:40.164765 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:41.106798 kubelet[2526]: E0428 00:40:41.100662 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.895s" Apr 28 00:40:42.718525 kubelet[2526]: E0428 00:40:42.706823 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.489s" Apr 
28 00:40:43.030131 kubelet[2526]: E0428 00:40:42.983541 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:43.463800 kubelet[2526]: E0428 00:40:43.455539 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:40:43.463800 kubelet[2526]: E0428 00:40:43.455565 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:40:44.029439 sshd[6881]: pam_unix(sshd:session): session closed for user core Apr 28 00:40:44.341599 systemd[1]: sshd@41-10.0.0.11:22-10.0.0.1:58676.service: Deactivated successfully. Apr 28 00:40:44.377524 systemd[1]: sshd@41-10.0.0.11:22-10.0.0.1:58676.service: Consumed 1.667s CPU time. Apr 28 00:40:44.618213 systemd[1]: session-42.scope: Deactivated successfully. Apr 28 00:40:44.635665 systemd[1]: session-42.scope: Consumed 5.084s CPU time. Apr 28 00:40:44.791593 systemd-logind[1457]: Session 42 logged out. Waiting for processes to exit. Apr 28 00:40:44.981142 systemd-logind[1457]: Removed session 42. Apr 28 00:40:45.672634 kubelet[2526]: E0428 00:40:45.671245 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.343s" Apr 28 00:40:46.249517 kubelet[2526]: E0428 00:40:46.243485 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:40:47.966752 kubelet[2526]: E0428 00:40:47.965964 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.256s" Apr 28 00:40:49.741607 kubelet[2526]: E0428 00:40:49.739581 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:40:49.934527 systemd[1]: Started sshd@42-10.0.0.11:22-10.0.0.1:35582.service - OpenSSH per-connection server daemon (10.0.0.1:35582). 
Apr 28 00:40:52.960649 kubelet[2526]: E0428 00:40:52.959993 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:40:53.389906 kubelet[2526]: E0428 00:40:52.696909 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:40:53.861424 kubelet[2526]: E0428 00:40:53.835958 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:40:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:40:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:40:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:40:41Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:40:53.861424 kubelet[2526]: E0428 00:40:53.847687 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:40:54.044399 sshd[6956]: Accepted publickey for core from 10.0.0.1 port 35582 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:40:54.323700 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:40:55.370308 systemd-logind[1457]: New session 43 of user core. Apr 28 00:40:55.580976 systemd[1]: Started session-43.scope - Session 43 of User core. 
Apr 28 00:40:57.359638 kubelet[2526]: E0428 00:40:57.359346 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.381s" Apr 28 00:40:59.988528 kubelet[2526]: E0428 00:40:59.981964 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.518s" Apr 28 00:41:02.654178 kubelet[2526]: E0428 00:41:02.653106 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.42s" Apr 28 00:41:03.838115 kubelet[2526]: E0428 00:41:03.837974 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:41:05.555011 kubelet[2526]: E0428 00:41:05.553245 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:41:06.037835 kubelet[2526]: E0428 00:41:06.033229 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:41:06.037835 kubelet[2526]: E0428 00:41:06.033635 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:41:06.358690 kubelet[2526]: E0428 00:41:06.336814 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.13s" Apr 28 00:41:08.290386 kubelet[2526]: E0428 00:41:08.286271 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.021s" Apr 28 00:41:09.966180 sshd[6956]: pam_unix(sshd:session): session closed for user core Apr 28 00:41:10.301821 kubelet[2526]: E0428 00:41:10.292511 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:41:10.294721 systemd[1]: sshd@42-10.0.0.11:22-10.0.0.1:35582.service: Deactivated successfully. Apr 28 00:41:10.308478 systemd[1]: sshd@42-10.0.0.11:22-10.0.0.1:35582.service: Consumed 1.472s CPU time. Apr 28 00:41:10.502398 systemd[1]: session-43.scope: Deactivated successfully. Apr 28 00:41:10.510923 systemd[1]: session-43.scope: Consumed 2.846s CPU time. Apr 28 00:41:10.632386 systemd-logind[1457]: Session 43 logged out. Waiting for processes to exit. Apr 28 00:41:10.655666 systemd-logind[1457]: Removed session 43. 
Apr 28 00:41:10.852217 kubelet[2526]: E0428 00:41:10.852002 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.629s" Apr 28 00:41:12.486593 kubelet[2526]: E0428 00:41:12.485570 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.168s" Apr 28 00:41:13.657674 kubelet[2526]: E0428 00:41:13.577862 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:13.966552 kubelet[2526]: E0428 00:41:13.948685 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:41:14.020087 kubelet[2526]: E0428 00:41:13.956784 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:41:14.762794 kubelet[2526]: E0428 00:41:14.762339 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:41:14.880330 kubelet[2526]: E0428 00:41:14.762465 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.559s" Apr 28 00:41:15.603941 systemd[1]: Started sshd@43-10.0.0.11:22-10.0.0.1:54442.service - OpenSSH per-connection server daemon (10.0.0.1:54442). 
Apr 28 00:41:15.737868 kubelet[2526]: E0428 00:41:15.735614 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc"
Apr 28 00:41:16.974780 kubelet[2526]: E0428 00:41:16.963649 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.758s"
Apr 28 00:41:18.654337 sshd[7020]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 00:41:18.809429 sshd[7020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:41:19.672217 systemd-logind[1457]: New session 44 of user core.
Apr 28 00:41:19.987117 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 28 00:41:21.120775 kubelet[2526]: E0428 00:41:21.112714 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap"
Apr 28 00:41:21.279661 kubelet[2526]: E0428 00:41:21.275393 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Apr 28 00:41:24.215763 kubelet[2526]: E0428 00:41:24.214179 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 28 00:41:24.653665 kubelet[2526]: E0428 00:41:24.653385 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.277s"
Apr 28 00:41:26.251674 kubelet[2526]: E0428 00:41:26.244266 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.157s"
Apr 28 00:41:26.424809 kubelet[2526]: E0428 00:41:26.396559 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz"
Apr 28 00:41:26.551315 kubelet[2526]: E0428 00:41:26.463825 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:41:29.054815 kubelet[2526]: E0428 00:41:28.925753 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:41:30.859384 kubelet[2526]: E0428 00:41:30.855990 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.578s"
Apr 28 00:41:32.346866 kubelet[2526]: E0428 00:41:32.345629 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.431s"
Apr 28 00:41:34.605568 kubelet[2526]: E0428 00:41:34.599604 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 28 00:41:34.778126 kubelet[2526]: E0428 00:41:34.773859 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
Apr 28 00:41:34.965434 kubelet[2526]: E0428 00:41:34.368471 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:41:35.336757 kubelet[2526]: E0428 00:41:35.329603 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.965s"
Apr 28 00:41:36.038156 kubelet[2526]: E0428 00:41:36.033558 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:41:36.872524 kubelet[2526]: E0428 00:41:36.828695 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost"
Apr 28 00:41:38.647717 kubelet[2526]: E0428 00:41:38.644600 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.148s"
Apr 28 00:41:39.089718 sshd[7020]: pam_unix(sshd:session): session closed for user core
Apr 28 00:41:39.388446 systemd[1]: sshd@43-10.0.0.11:22-10.0.0.1:54442.service: Deactivated successfully.
Apr 28 00:41:39.395583 systemd[1]: sshd@43-10.0.0.11:22-10.0.0.1:54442.service: Consumed 1.091s CPU time.
Apr 28 00:41:39.582632 systemd[1]: session-44.scope: Deactivated successfully.
Apr 28 00:41:39.598509 systemd[1]: session-44.scope: Consumed 6.497s CPU time.
Apr 28 00:41:39.765481 systemd-logind[1457]: Session 44 logged out. Waiting for processes to exit.
Apr 28 00:41:39.906225 kubelet[2526]: E0428 00:41:39.904646 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:41:39.911714 systemd-logind[1457]: Removed session 44. Apr 28 00:41:40.667791 kubelet[2526]: E0428 00:41:40.662537 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.015s" Apr 28 00:41:40.957392 kubelet[2526]: E0428 00:41:40.952196 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:41:41.025995 kubelet[2526]: E0428 00:41:40.952490 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:41:42.715698 kubelet[2526]: E0428 00:41:42.715433 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.061s" Apr 28 00:41:44.168803 kubelet[2526]: E0428 00:41:43.881744 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:41:44.902658 systemd[1]: Started sshd@44-10.0.0.11:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378). Apr 28 00:41:46.411678 kubelet[2526]: E0428 00:41:46.408464 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:41:47.409597 sshd[7091]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:41:47.762395 sshd[7091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:41:47.884590 kubelet[2526]: E0428 00:41:47.787091 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:41:49.151120 systemd-logind[1457]: New session 45 of user core. Apr 28 00:41:49.513260 systemd[1]: Started session-45.scope - Session 45 of User core. 
Apr 28 00:41:50.402669 kubelet[2526]: E0428 00:41:50.386061 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:41:52.335717 kubelet[2526]: E0428 00:41:52.152101 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:41:52.573782 kubelet[2526]: E0428 00:41:52.573106 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.316s" Apr 28 00:41:52.686430 kubelet[2526]: E0428 00:41:52.634758 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:41:53.739068 kubelet[2526]: E0428 00:41:53.703331 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.087s" Apr 28 00:41:56.267337 kubelet[2526]: E0428 00:41:56.150773 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:56.267337 kubelet[2526]: E0428 00:41:56.265781 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.399s" Apr 28 00:41:58.888297 kubelet[2526]: E0428 00:41:58.881444 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:42:00.312451 kubelet[2526]: E0428 00:42:00.296696 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:41:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:41:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:41:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:41:48Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:42:01.090783 kubelet[2526]: E0428 00:42:01.089015 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.817s" Apr 28 00:42:02.014761 kubelet[2526]: E0428 00:42:02.008749 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:42:02.681354 kubelet[2526]: E0428 00:42:02.680071 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.515s" Apr 28 00:42:03.708056 kubelet[2526]: E0428 00:42:03.658331 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:42:04.163611 kubelet[2526]: E0428 00:42:04.162050 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.478s" Apr 28 00:42:05.047703 kubelet[2526]: E0428 00:42:05.046405 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:42:05.139506 kubelet[2526]: E0428 00:42:05.137761 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:42:05.789825 sshd[7091]: pam_unix(sshd:session): session closed for user core Apr 28 00:42:06.263545 systemd[1]: sshd@44-10.0.0.11:22-10.0.0.1:47378.service: Deactivated successfully. Apr 28 00:42:06.339257 systemd[1]: sshd@44-10.0.0.11:22-10.0.0.1:47378.service: Consumed 1.042s CPU time. Apr 28 00:42:06.650207 systemd[1]: session-45.scope: Deactivated successfully. Apr 28 00:42:06.669436 systemd[1]: session-45.scope: Consumed 3.994s CPU time. Apr 28 00:42:06.851702 systemd-logind[1457]: Session 45 logged out. Waiting for processes to exit. Apr 28 00:42:07.144567 systemd-logind[1457]: Removed session 45. 
Apr 28 00:42:07.449357 kubelet[2526]: E0428 00:42:07.447534 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.087s" Apr 28 00:42:09.778038 kubelet[2526]: E0428 00:42:09.613590 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:42:10.643400 kubelet[2526]: E0428 00:42:10.643071 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.195s" Apr 28 00:42:10.886439 kubelet[2526]: E0428 00:42:10.881720 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:42:12.096019 systemd[1]: Started sshd@45-10.0.0.11:22-10.0.0.1:32860.service - OpenSSH per-connection server daemon (10.0.0.1:32860). Apr 28 00:42:12.409883 kubelet[2526]: E0428 00:42:12.398819 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:42:12.986412 kubelet[2526]: E0428 00:42:12.980763 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:42:13.586628 sshd[7159]: Accepted publickey for core from 10.0.0.1 port 32860 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:42:13.814160 sshd[7159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:42:14.750869 systemd-logind[1457]: New session 46 of user core. Apr 28 00:42:15.016570 systemd[1]: Started session-46.scope - Session 46 of User core. 
Apr 28 00:42:18.600641 kubelet[2526]: E0428 00:42:17.814246 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:42:21.243751 kubelet[2526]: E0428 00:42:21.236627 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:42:21.802528 kubelet[2526]: E0428 00:42:21.797726 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 28 00:42:22.389541 kubelet[2526]: E0428 00:42:21.857627 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost"
Apr 28 00:42:22.712611 kubelet[2526]: E0428 00:42:22.288224 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Apr 28 00:42:22.931115 kubelet[2526]: E0428 00:42:22.914765 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:42:23.193187 kubelet[2526]: E0428 00:42:23.180930 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap"
Apr 28 00:42:24.662207 kubelet[2526]: E0428 00:42:24.659348 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.002s"
Apr 28 00:42:26.138705 kubelet[2526]: E0428 00:42:26.138633 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:42:26.156387 kubelet[2526]: E0428 00:42:26.138848 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.433s"
Apr 28 00:42:26.275662 kubelet[2526]: E0428 00:42:26.271383 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 28 00:42:30.413768 kubelet[2526]: E0428 00:42:30.411196 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.145s"
Apr 28 00:42:32.799509 kubelet[2526]: E0428 00:42:32.763700 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.194s"
Apr 28 00:42:32.884480 sshd[7159]: pam_unix(sshd:session): session closed for user core
Apr 28 00:42:33.216470 systemd[1]: sshd@45-10.0.0.11:22-10.0.0.1:32860.service: Deactivated successfully.
Apr 28 00:42:33.464401 systemd[1]: session-46.scope: Deactivated successfully.
Apr 28 00:42:33.507659 systemd[1]: session-46.scope: Consumed 5.129s CPU time.
Apr 28 00:42:33.695337 systemd-logind[1457]: Session 46 logged out. Waiting for processes to exit.
Apr 28 00:42:33.749794 kubelet[2526]: E0428 00:42:33.741528 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 28 00:42:33.803923 systemd-logind[1457]: Removed session 46.
Apr 28 00:42:34.216510 kubelet[2526]: E0428 00:42:33.440825 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc"
Apr 28 00:42:35.579745 kubelet[2526]: E0428 00:42:35.578230 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.209s"
Apr 28 00:42:36.954541 kubelet[2526]: E0428 00:42:36.950510 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 28 00:42:38.934131 systemd[1]: Started sshd@46-10.0.0.11:22-10.0.0.1:50118.service - OpenSSH per-connection server daemon (10.0.0.1:50118).
Apr 28 00:42:39.208350 kubelet[2526]: E0428 00:42:39.194565 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:42:39.744678 kubelet[2526]: E0428 00:42:39.732926 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:42:40.707306 sshd[7218]: Accepted publickey for core from 10.0.0.1 port 50118 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:42:40.970585 sshd[7218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:42:42.353608 systemd-logind[1457]: New session 47 of user core. Apr 28 00:42:42.591108 systemd[1]: Started session-47.scope - Session 47 of User core. 
Apr 28 00:42:44.637688 kubelet[2526]: E0428 00:42:44.636792 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:42:44.698796 kubelet[2526]: E0428 00:42:44.655526 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:42:44.913727 kubelet[2526]: E0428 00:42:44.884649 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:42:49.575794 kubelet[2526]: E0428 00:42:49.493848 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.302s" Apr 28 00:42:50.637868 kubelet[2526]: E0428 00:42:50.176781 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:42:54.140567 kubelet[2526]: E0428 00:42:54.139687 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:42:56.122921 kubelet[2526]: E0428 00:42:56.085663 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:42:58.760576 kubelet[2526]: E0428 00:42:58.711856 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.71s" Apr 28 00:42:59.668420 kubelet[2526]: E0428 00:42:58.958185 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:42:59.843263 kubelet[2526]: E0428 00:42:59.839795 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:43:00.993575 kubelet[2526]: E0428 00:43:00.955503 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness 
probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:43:02.684623 sshd[7218]: pam_unix(sshd:session): session closed for user core Apr 28 00:43:03.087264 systemd[1]: sshd@46-10.0.0.11:22-10.0.0.1:50118.service: Deactivated successfully. Apr 28 00:43:03.142965 kubelet[2526]: E0428 00:43:03.137430 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:43:03.460542 systemd[1]: session-47.scope: Deactivated successfully. Apr 28 00:43:03.480399 systemd[1]: session-47.scope: Consumed 6.105s CPU time. Apr 28 00:43:03.616102 systemd-logind[1457]: Session 47 logged out. Waiting for processes to exit. Apr 28 00:43:03.688539 systemd-logind[1457]: Removed session 47. Apr 28 00:43:03.998608 kubelet[2526]: E0428 00:43:03.997218 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.197s" Apr 28 00:43:05.842933 kubelet[2526]: E0428 00:43:05.842298 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.845s" Apr 28 00:43:05.867758 kubelet[2526]: E0428 00:43:05.867645 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:05.870141 kubelet[2526]: E0428 00:43:05.869313 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:05.870141 kubelet[2526]: E0428 00:43:05.867795 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:06.534343 kubelet[2526]: E0428 00:43:06.501714 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:43:07.396197 kubelet[2526]: E0428 00:43:07.386845 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:43:08.780914 systemd[1]: Started sshd@47-10.0.0.11:22-10.0.0.1:47826.service - OpenSSH per-connection server daemon (10.0.0.1:47826). 
Apr 28 00:43:09.514835 kubelet[2526]: E0428 00:43:09.501744 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:43:09.951819 kubelet[2526]: E0428 00:43:09.948967 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:43:10.232479 kubelet[2526]: E0428 00:43:10.227615 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.012s" Apr 28 00:43:10.250183 sshd[7282]: Accepted publickey for core from 10.0.0.1 port 47826 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:43:10.256062 sshd[7282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:43:10.406148 kubelet[2526]: E0428 00:43:10.278587 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:42:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:42:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:42:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:42:57Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:43:11.170934 systemd-logind[1457]: New session 48 of user core. Apr 28 00:43:11.184137 kubelet[2526]: E0428 00:43:11.180129 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:11.220739 systemd[1]: Started session-48.scope - Session 48 of User core. 
Apr 28 00:43:11.237282 kubelet[2526]: E0428 00:43:11.236743 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:16.155570 kubelet[2526]: E0428 00:43:16.108590 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.611s" Apr 28 00:43:16.666952 kubelet[2526]: E0428 00:43:16.666736 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:43:16.916470 kubelet[2526]: E0428 00:43:16.854869 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:43:22.162874 kubelet[2526]: E0428 00:43:20.616788 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:43:22.874468 kubelet[2526]: E0428 00:43:22.842800 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:43:24.011535 kubelet[2526]: E0428 00:43:24.005438 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:43:28.363112 kubelet[2526]: E0428 00:43:28.314036 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:43:32.478639 kubelet[2526]: E0428 00:43:32.475676 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.331s" Apr 28 00:43:33.982088 kubelet[2526]: E0428 00:43:33.799792 2526 kubelet_node_status.go:486] "Error updating 
node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:43:34.537199 kubelet[2526]: E0428 00:43:34.520765 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:43:34.771278 sshd[7282]: pam_unix(sshd:session): session closed for user core Apr 28 00:43:35.200763 systemd[1]: sshd@47-10.0.0.11:22-10.0.0.1:47826.service: Deactivated successfully. Apr 28 00:43:35.416148 systemd[1]: session-48.scope: Deactivated successfully. Apr 28 00:43:35.460235 systemd[1]: session-48.scope: Consumed 8.907s CPU time. Apr 28 00:43:35.593342 systemd-logind[1457]: Session 48 logged out. Waiting for processes to exit. Apr 28 00:43:35.763594 systemd-logind[1457]: Removed session 48. Apr 28 00:43:36.244612 kubelet[2526]: E0428 00:43:36.243167 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:43:37.150584 kubelet[2526]: E0428 00:43:37.141490 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.287s" Apr 28 00:43:40.666019 systemd[1]: Started sshd@48-10.0.0.11:22-10.0.0.1:53206.service - OpenSSH per-connection server daemon (10.0.0.1:53206). Apr 28 00:43:41.013755 kubelet[2526]: E0428 00:43:40.989134 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:43:45.095690 kubelet[2526]: E0428 00:43:44.351239 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:43:46.152863 kubelet[2526]: E0428 00:43:46.149534 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1261 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:21,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:43:46.481614 kubelet[2526]: E0428 00:43:46.476603 2526 reflector.go:205] 
"Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:43:46.785638 kubelet[2526]: E0428 00:43:46.354571 2526 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5e2281fab090 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,LastTimestamp:2026-04-28 00:35:05.253671056 +0000 UTC m=+890.691648352,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:43:47.686399 sshd[7350]: Accepted publickey for core from 10.0.0.1 port 53206 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:43:48.369815 sshd[7350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:43:49.414309 systemd-logind[1457]: New session 49 of user core. Apr 28 00:43:49.846746 systemd[1]: Started session-49.scope - Session 49 of User core. Apr 28 00:43:50.179623 kubelet[2526]: E0428 00:43:48.487796 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:43:52.560549 kubelet[2526]: E0428 00:43:52.560014 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:43:53.519632 kubelet[2526]: E0428 00:43:53.516047 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:43:53.614668 kubelet[2526]: E0428 00:43:53.599786 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:43:54.396538 kubelet[2526]: E0428 00:43:54.386755 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.692s" Apr 28 00:43:55.631116 kubelet[2526]: E0428 00:43:55.625474 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:43:55.909645 kubelet[2526]: E0428 00:43:55.772625 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:43:56.002381 kubelet[2526]: E0428 00:43:55.998612 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:56.860026 kubelet[2526]: E0428 00:43:56.856297 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:03.775742 kubelet[2526]: E0428 00:44:01.695865 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:44:08.150784 kubelet[2526]: E0428 00:44:07.504689 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:44:08.264649 kubelet[2526]: E0428 00:44:07.290766 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:44:09.868510 kubelet[2526]: E0428 00:44:08.806447 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:44:11.778180 kubelet[2526]: E0428 00:44:11.764098 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:44:12.987353 kubelet[2526]: E0428 00:44:12.987177 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:44:14.419602 kubelet[2526]: E0428 00:44:14.419011 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:44:14.594358 kubelet[2526]: E0428 00:44:14.589678 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.078s" Apr 28 00:44:16.627837 kubelet[2526]: E0428 00:44:16.621845 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:44:18.450312 kubelet[2526]: E0428 00:44:18.445844 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:44:22.637330 kubelet[2526]: E0428 00:44:22.454346 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:44:24.975862 kubelet[2526]: E0428 00:44:24.780427 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.773s" Apr 28 00:44:25.298877 kubelet[2526]: E0428 00:44:25.235673 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:44:26.214167 kubelet[2526]: E0428 00:44:26.213658 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:26.286768 sshd[7350]: pam_unix(sshd:session): session closed for user core Apr 28 00:44:26.467700 kubelet[2526]: E0428 
00:44:26.465626 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:26.608742 systemd[1]: sshd@48-10.0.0.11:22-10.0.0.1:53206.service: Deactivated successfully. Apr 28 00:44:26.609393 systemd[1]: sshd@48-10.0.0.11:22-10.0.0.1:53206.service: Consumed 2.564s CPU time. Apr 28 00:44:26.611859 systemd-logind[1457]: Session 49 logged out. Waiting for processes to exit. Apr 28 00:44:26.798449 systemd[1]: session-49.scope: Deactivated successfully. Apr 28 00:44:26.807297 systemd[1]: session-49.scope: Consumed 17.893s CPU time. Apr 28 00:44:26.998424 systemd-logind[1457]: Removed session 49. Apr 28 00:44:27.047679 kubelet[2526]: E0428 00:44:27.043506 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:27.184270 kubelet[2526]: E0428 00:44:27.111357 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:44:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:44:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:44:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:44:16Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:44:27.753545 kubelet[2526]: E0428 00:44:27.706802 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.723s" Apr 28 00:44:30.814802 kubelet[2526]: E0428 00:44:30.805073 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:44:31.356682 kubelet[2526]: E0428 00:44:31.355028 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.551s" Apr 28 00:44:32.095629 systemd[1]: Started sshd@49-10.0.0.11:22-10.0.0.1:33410.service - OpenSSH per-connection server daemon (10.0.0.1:33410). 
Apr 28 00:44:32.703542 kubelet[2526]: E0428 00:44:32.699433 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:33.664696 kubelet[2526]: E0428 00:44:33.664425 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:44:35.439558 sshd[7424]: Accepted publickey for core from 10.0.0.1 port 33410 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:44:35.920716 sshd[7424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:44:36.992409 systemd-logind[1457]: New session 50 of user core. Apr 28 00:44:37.042006 systemd[1]: Started session-50.scope - Session 50 of User core. Apr 28 00:44:40.655264 kubelet[2526]: E0428 00:44:40.652461 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:44:41.521537 kubelet[2526]: E0428 00:44:41.250944 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:44:41.896705 kubelet[2526]: E0428 00:44:41.864729 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:44:45.050481 kubelet[2526]: E0428 00:44:45.047759 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:44:46.060489 kubelet[2526]: E0428 00:44:46.060437 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.65s" Apr 28 00:44:47.073868 kubelet[2526]: E0428 00:44:47.067272 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:48.292752 kubelet[2526]: E0428 00:44:48.288842 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:48.390547 kubelet[2526]: E0428 00:44:48.383873 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:49.247031 kubelet[2526]: E0428 00:44:49.246770 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too 
long" expected="1s" actual="1.28s" Apr 28 00:44:49.849876 kubelet[2526]: E0428 00:44:49.847395 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:51.279704 kubelet[2526]: E0428 00:44:51.278205 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:44:51.279704 kubelet[2526]: E0428 00:44:51.278404 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:44:51.638443 kubelet[2526]: E0428 00:44:51.638161 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:44:51.878319 kubelet[2526]: E0428 00:44:51.859560 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:44:52.562410 kubelet[2526]: E0428 00:44:52.559868 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:44:55.243945 kubelet[2526]: E0428 00:44:55.243673 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:44:56.260065 sshd[7424]: pam_unix(sshd:session): session closed for user core Apr 28 00:44:56.310062 systemd[1]: sshd@49-10.0.0.11:22-10.0.0.1:33410.service: Deactivated successfully. Apr 28 00:44:56.310612 systemd[1]: sshd@49-10.0.0.11:22-10.0.0.1:33410.service: Consumed 1.381s CPU time. Apr 28 00:44:56.369993 systemd[1]: session-50.scope: Deactivated successfully. Apr 28 00:44:56.377335 systemd[1]: session-50.scope: Consumed 5.913s CPU time. Apr 28 00:44:56.407359 systemd-logind[1457]: Session 50 logged out. 
Waiting for processes to exit. Apr 28 00:44:56.421065 systemd-logind[1457]: Removed session 50. Apr 28 00:45:01.298480 kubelet[2526]: E0428 00:45:01.293564 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:45:01.513257 systemd[1]: Started sshd@50-10.0.0.11:22-10.0.0.1:37730.service - OpenSSH per-connection server daemon (10.0.0.1:37730). Apr 28 00:45:03.645103 sshd[7510]: Accepted publickey for core from 10.0.0.1 port 37730 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:45:03.986461 kubelet[2526]: E0428 00:45:03.781685 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:45:04.455316 sshd[7510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:45:05.170797 kubelet[2526]: E0428 00:45:05.150372 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.883s" Apr 28 00:45:05.497873 systemd-logind[1457]: New session 51 of user core. Apr 28 00:45:06.232594 kubelet[2526]: E0428 00:45:06.231608 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:45:06.232801 systemd[1]: Started session-51.scope - Session 51 of User core. 
Apr 28 00:45:08.684884 kubelet[2526]: E0428 00:45:08.682754 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:45:08.877839 kubelet[2526]: E0428 00:45:08.877244 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:45:12.189732 kubelet[2526]: E0428 00:45:12.178368 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:45:12.297173 kubelet[2526]: E0428 00:45:12.190877 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:45:12.371232 kubelet[2526]: E0428 00:45:12.369342 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.072s" Apr 28 00:45:13.183305 kubelet[2526]: E0428 00:45:12.200866 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:45:14.259636 kubelet[2526]: E0428 00:45:14.256673 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:45:14.400182 kubelet[2526]: E0428 00:45:14.356641 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:45:16.030825 kubelet[2526]: E0428 00:45:16.017280 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.362s" Apr 28 00:45:17.386549 kubelet[2526]: E0428 
00:45:17.182444 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:45:19.910841 kubelet[2526]: E0428 00:45:19.905445 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:45:21.712741 kubelet[2526]: E0428 00:45:21.702319 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:45:22.157504 kubelet[2526]: E0428 00:45:22.156694 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.139s" Apr 28 00:45:24.579018 kubelet[2526]: E0428 00:45:24.576709 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.385s" Apr 28 00:45:26.320845 kubelet[2526]: E0428 00:45:26.316252 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:45:26.386996 kubelet[2526]: E0428 00:45:26.311384 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.674s" Apr 28 00:45:27.387695 kubelet[2526]: E0428 00:45:27.387223 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.025s" Apr 28 00:45:29.364675 kubelet[2526]: E0428 00:45:29.354606 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:45:30.714840 kubelet[2526]: E0428 00:45:30.692273 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.067s" Apr 28 00:45:31.661865 sshd[7510]: pam_unix(sshd:session): session closed for user core Apr 28 00:45:32.334311 systemd[1]: sshd@50-10.0.0.11:22-10.0.0.1:37730.service: Deactivated successfully. Apr 28 00:45:32.334870 systemd[1]: sshd@50-10.0.0.11:22-10.0.0.1:37730.service: Consumed 1.284s CPU time. Apr 28 00:45:32.766091 systemd[1]: session-51.scope: Deactivated successfully. Apr 28 00:45:32.798601 systemd[1]: session-51.scope: Consumed 10.203s CPU time. Apr 28 00:45:32.983218 systemd-logind[1457]: Session 51 logged out. Waiting for processes to exit. Apr 28 00:45:33.300852 systemd-logind[1457]: Removed session 51. 
Apr 28 00:45:34.378312 kubelet[2526]: E0428 00:45:34.111708 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:45:35.122877 kubelet[2526]: E0428 00:45:35.051527 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:45:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:45:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:45:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:45:23Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:45:38.151691 systemd[1]: Started sshd@51-10.0.0.11:22-10.0.0.1:36902.service - OpenSSH per-connection server daemon (10.0.0.1:36902). 
Apr 28 00:45:40.274235 kubelet[2526]: E0428 00:45:40.273838 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:45:43.952139 kubelet[2526]: E0428 00:45:43.946559 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:45:44.138864 kubelet[2526]: E0428 00:45:44.137201 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:45:46.093354 kubelet[2526]: E0428 00:45:46.089756 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:45:47.329554 sshd[7565]: Accepted publickey for core from 10.0.0.1 port 36902 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:45:47.945771 sshd[7565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:45:49.679435 systemd-logind[1457]: New session 52 of user core. Apr 28 00:45:50.202634 systemd[1]: Started session-52.scope - Session 52 of User core. 
Apr 28 00:45:56.379778 kubelet[2526]: E0428 00:45:56.368807 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:45:57.927239 kubelet[2526]: E0428 00:45:56.967642 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:45:58.895084 kubelet[2526]: E0428 00:45:56.967804 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:46:00.894275 kubelet[2526]: E0428 00:46:00.803661 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:46:02.063075 kubelet[2526]: E0428 00:46:01.031743 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:46:05.080700 kubelet[2526]: E0428 00:46:05.075467 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:46:10.948418 kubelet[2526]: E0428 00:46:10.939276 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:46:17.550267 kubelet[2526]: E0428 00:46:17.549831 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:46:19.134360 kubelet[2526]: E0428 00:46:19.130060 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="48.372s" Apr 28 00:46:19.533378 kubelet[2526]: E0428 00:46:17.378453 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:46:22.197803 kubelet[2526]: E0428 00:46:22.181559 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:46:22.444387 kubelet[2526]: E0428 00:46:22.432692 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:46:22.444387 kubelet[2526]: E0428 00:46:22.433750 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:46:22.444387 kubelet[2526]: E0428 00:46:22.433781 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:46:23.924418 kubelet[2526]: E0428 00:46:23.517297 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:46:28.536796 kubelet[2526]: E0428 00:46:28.270712 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:46:29.167138 sshd[7565]: pam_unix(sshd:session): session closed for user core Apr 28 00:46:29.887836 systemd[1]: sshd@51-10.0.0.11:22-10.0.0.1:36902.service: Deactivated successfully. Apr 28 00:46:29.917320 systemd[1]: sshd@51-10.0.0.11:22-10.0.0.1:36902.service: Consumed 3.255s CPU time. Apr 28 00:46:30.180837 systemd[1]: session-52.scope: Deactivated successfully. Apr 28 00:46:30.251739 systemd[1]: session-52.scope: Consumed 20.207s CPU time. Apr 28 00:46:30.505539 systemd-logind[1457]: Session 52 logged out. Waiting for processes to exit. 
Apr 28 00:46:30.607410 kubelet[2526]: E0428 00:46:30.514831 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:30.680721 systemd-logind[1457]: Removed session 52. Apr 28 00:46:30.843641 kubelet[2526]: E0428 00:46:29.454118 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:46:32.378611 kubelet[2526]: E0428 00:46:32.155548 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:46:32.684430 kubelet[2526]: E0428 00:46:32.395211 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:46:35.781851 systemd[1]: Started sshd@52-10.0.0.11:22-10.0.0.1:37130.service - OpenSSH per-connection server daemon (10.0.0.1:37130). Apr 28 00:46:44.577470 kubelet[2526]: E0428 00:46:44.569336 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:46:46.055688 sshd[7625]: Accepted publickey for core from 10.0.0.1 port 37130 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:46:46.489428 kubelet[2526]: E0428 00:46:45.503364 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:46:46.749161 sshd[7625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:46:47.702204 systemd-logind[1457]: New session 53 of user core. Apr 28 00:46:48.036043 systemd[1]: Started session-53.scope - Session 53 of User core. 
Apr 28 00:46:50.190644 kubelet[2526]: E0428 00:46:49.387429 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:46:50.254270 systemd[1]: cri-containerd-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e.scope: Deactivated successfully. Apr 28 00:46:50.259807 systemd[1]: cri-containerd-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e.scope: Consumed 5min 43.250s CPU time. Apr 28 00:46:52.367999 kubelet[2526]: W0428 00:46:51.810363 2526 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda646d41a511ed3aa4e8f9816f82de57d.slice/cri-containerd-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e.scope/memory.swap.max": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda646d41a511ed3aa4e8f9816f82de57d.slice/cri-containerd-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e.scope/memory.swap.max: no such device Apr 28 00:46:52.656338 kubelet[2526]: E0428 00:46:52.345591 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": read tcp 10.0.0.11:32976->10.0.0.11:6443: read: connection reset by peer" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:52.779285 kubelet[2526]: E0428 00:46:52.715856 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:46:52.779285 kubelet[2526]: E0428 00:46:52.777711 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:46:52.779285 kubelet[2526]: E0428 00:46:52.778128 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" 
type="*v1.ConfigMap" Apr 28 00:46:52.799103 kubelet[2526]: E0428 00:46:52.796398 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:46:52.857067 kubelet[2526]: E0428 00:46:52.856661 2526 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda646d41a511ed3aa4e8f9816f82de57d.slice/cri-containerd-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e.scope\": RecentStats: unable to find data in memory cache]" Apr 28 00:46:52.968882 kubelet[2526]: E0428 00:46:52.951254 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:46:52.987683 kubelet[2526]: E0428 00:46:52.987368 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:46:52.996990 kubelet[2526]: E0428 00:46:52.994806 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:46:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:46:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:46:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:46:52Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:46:53.001924 kubelet[2526]: E0428 00:46:52.998589 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="32.831s" Apr 28 00:46:53.103404 kubelet[2526]: E0428 00:46:53.100865 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:46:53.131612 kubelet[2526]: E0428 00:46:53.127937 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:46:53.160173 kubelet[2526]: E0428 00:46:53.157567 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: 
connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:46:53.495883 kubelet[2526]: E0428 00:46:53.135501 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:46:53.618162 kubelet[2526]: E0428 00:46:53.579439 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:53.774842 kubelet[2526]: E0428 00:46:53.770138 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:53.789734 kubelet[2526]: E0428 00:46:53.776418 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:53.827312 kubelet[2526]: E0428 00:46:53.826493 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:53.861130 kubelet[2526]: E0428 00:46:53.858403 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:46:53.861130 kubelet[2526]: E0428 00:46:53.859761 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:54.262849 kubelet[2526]: E0428 00:46:54.261773 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:54.359415 kubelet[2526]: E0428 00:46:54.359215 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:46:54.368248 kubelet[2526]: E0428 00:46:54.368210 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:54.389180 kubelet[2526]: E0428 00:46:54.359174 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:46:54.972561 kubelet[2526]: E0428 00:46:54.394485 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:46:55.002983 kubelet[2526]: E0428 00:46:54.750513 2526 kubelet_node_status.go:486] "Error updating 
node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:46:55.002983 kubelet[2526]: E0428 00:46:55.002276 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:46:55.002983 kubelet[2526]: E0428 00:46:55.002856 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:46:55.108604 kubelet[2526]: E0428 00:46:55.108288 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.099s" Apr 28 00:46:55.213016 kubelet[2526]: E0428 00:46:55.211416 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:46:55.213016 kubelet[2526]: E0428 00:46:55.212733 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:46:55.252303 kubelet[2526]: E0428 00:46:55.245928 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:46:55.252303 kubelet[2526]: E0428 00:46:55.246467 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:46:55.264747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e-rootfs.mount: Deactivated successfully. 
Apr 28 00:46:55.451175 containerd[1473]: time="2026-04-28T00:46:55.441429081Z" level=info msg="shim disconnected" id=00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e namespace=k8s.io Apr 28 00:46:55.458512 containerd[1473]: time="2026-04-28T00:46:55.458329226Z" level=warning msg="cleaning up after shim disconnected" id=00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e namespace=k8s.io Apr 28 00:46:55.458738 containerd[1473]: time="2026-04-28T00:46:55.458719199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:46:56.233407 kubelet[2526]: E0428 00:46:56.233189 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.025s" Apr 28 00:46:56.957434 kubelet[2526]: E0428 00:46:56.954841 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:56.974199 kubelet[2526]: E0428 00:46:56.891377 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:46:57.007459 kubelet[2526]: E0428 00:46:57.003073 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:46:57.135554 kubelet[2526]: E0428 00:46:57.135394 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:46:57.189527 kubelet[2526]: E0428 00:46:57.189123 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:46:57.197948 kubelet[2526]: E0428 00:46:57.197101 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:46:57.197948 kubelet[2526]: E0428 00:46:57.197589 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:46:57.199233 kubelet[2526]: E0428 00:46:57.199203 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:46:57.199690 kubelet[2526]: E0428 00:46:57.199573 2526 status_manager.go:1018] "Failed to get status for pod" err="Get 
\"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:46:57.247327 kubelet[2526]: E0428 00:46:57.245651 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:46:58.028621 kubelet[2526]: E0428 00:46:58.000415 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:46:58.503577 kubelet[2526]: E0428 00:46:58.503049 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:47:00.660604 containerd[1473]: time="2026-04-28T00:47:00.595467980Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e delete" error="signal: killed" namespace=k8s.io Apr 28 00:47:00.789487 containerd[1473]: time="2026-04-28T00:47:00.661460205Z" level=warning msg="failed to clean up after shim disconnected" error="time=\"2026-04-28T00:46:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n: signal: killed" id=00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e namespace=k8s.io Apr 28 00:47:00.864777 kubelet[2526]: E0428 00:47:00.180173 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:47:00.934837 kubelet[2526]: E0428 00:47:00.927045 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:47:00.941510 containerd[1473]: time="2026-04-28T00:47:00.855477986Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e Apr 28 00:47:01.250405 kubelet[2526]: E0428 00:47:01.247596 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Apr 28 00:47:01.337652 kubelet[2526]: E0428 00:47:01.337317 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:47:02.447532 kubelet[2526]: E0428 00:47:02.442497 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:47:03.043445 kubelet[2526]: E0428 00:47:02.586382 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:47:04.231197 kubelet[2526]: E0428 00:47:04.217450 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:04.235585 kubelet[2526]: E0428 00:47:04.233769 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.005s" Apr 28 00:47:04.235585 kubelet[2526]: E0428 00:47:04.234611 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:47:04.236002 kubelet[2526]: E0428 00:47:04.235945 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:47:04.834568 kubelet[2526]: I0428 00:47:04.834330 2526 scope.go:117] "RemoveContainer" containerID="c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17" Apr 28 00:47:04.859279 kubelet[2526]: I0428 00:47:04.858723 2526 scope.go:117] "RemoveContainer" containerID="00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e" Apr 28 00:47:04.860644 kubelet[2526]: E0428 00:47:04.859693 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:47:04.860777 kubelet[2526]: E0428 00:47:04.860498 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:47:04.861411 kubelet[2526]: E0428 00:47:04.861365 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:47:04.861649 kubelet[2526]: E0428 00:47:04.861624 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:47:05.001289 kubelet[2526]: E0428 00:47:05.000969 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:47:05.015708 kubelet[2526]: E0428 00:47:05.013252 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:47:05.013400 sshd[7625]: pam_unix(sshd:session): session closed for user core Apr 28 00:47:05.039098 containerd[1473]: time="2026-04-28T00:47:05.037982721Z" level=info msg="RemoveContainer for \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\"" Apr 28 00:47:05.077429 containerd[1473]: time="2026-04-28T00:47:05.074831072Z" level=info msg="CreateContainer within sandbox \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:2,}" Apr 28 00:47:05.368842 containerd[1473]: time="2026-04-28T00:47:05.361864405Z" level=info msg="RemoveContainer for \"c1e55d247cdb60fe5d631e9c903dd2c21854aea1f240a5e38e911c85922a9a17\" returns successfully" Apr 28 00:47:05.616246 systemd[1]: sshd@52-10.0.0.11:22-10.0.0.1:37130.service: Deactivated successfully. Apr 28 00:47:05.639992 systemd[1]: sshd@52-10.0.0.11:22-10.0.0.1:37130.service: Consumed 3.569s CPU time. Apr 28 00:47:05.891224 systemd[1]: session-53.scope: Deactivated successfully. Apr 28 00:47:05.894103 systemd[1]: session-53.scope: Consumed 11.865s CPU time. Apr 28 00:47:06.070371 systemd-logind[1457]: Session 53 logged out. Waiting for processes to exit. Apr 28 00:47:06.191865 systemd-logind[1457]: Removed session 53. 
Apr 28 00:47:06.801422 kubelet[2526]: E0428 00:47:06.794228 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:06Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:07.148059 kubelet[2526]: E0428 00:47:07.138186 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:07.162655 kubelet[2526]: E0428 00:47:07.161426 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:07.167749 containerd[1473]: time="2026-04-28T00:47:07.161869903Z" level=info msg="CreateContainer within sandbox \"1c8dec823b2d977d0136feae27dd03906f1343f68ad2d582daddb320cf929b62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:2,} returns container id \"297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41\"" Apr 28 00:47:07.176541 containerd[1473]: time="2026-04-28T00:47:07.176469458Z" level=info msg="StartContainer for \"297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41\"" Apr 28 00:47:07.200254 kubelet[2526]: E0428 00:47:07.176588 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.854s" Apr 28 00:47:07.469965 kubelet[2526]: E0428 00:47:07.463067 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:07.469965 kubelet[2526]: E0428 00:47:07.463344 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:07.469965 kubelet[2526]: E0428 00:47:07.463359 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:47:07.983619 kubelet[2526]: E0428 00:47:07.981282 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:47:09.056682 kubelet[2526]: E0428 00:47:09.054398 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: 
connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:47:09.215434 kubelet[2526]: E0428 00:47:09.214447 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:47:09.424417 kubelet[2526]: E0428 00:47:09.405184 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:47:09.556478 kubelet[2526]: E0428 00:47:09.542499 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:47:09.662247 kubelet[2526]: E0428 00:47:09.655091 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:47:09.795216 kubelet[2526]: E0428 00:47:09.714542 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:47:10.395940 systemd[1]: Started sshd@53-10.0.0.11:22-10.0.0.1:39376.service - OpenSSH per-connection server daemon (10.0.0.1:39376). Apr 28 00:47:11.500430 kubelet[2526]: E0428 00:47:11.479783 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:47:12.472882 kubelet[2526]: E0428 00:47:12.472283 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.156s" Apr 28 00:47:13.052670 systemd[1]: Started cri-containerd-297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41.scope - libcontainer container 297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41. 
Apr 28 00:47:14.007788 kubelet[2526]: E0428 00:47:14.003200 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:47:14.164817 sshd[7738]: Accepted publickey for core from 10.0.0.1 port 39376 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:47:14.264754 sshd[7738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:47:14.574621 kubelet[2526]: E0428 00:47:14.555938 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:14.611783 kubelet[2526]: I0428 00:47:14.609412 2526 scope.go:117] "RemoveContainer" containerID="00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e" Apr 28 00:47:14.970603 kubelet[2526]: E0428 00:47:14.966542 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:47:15.092587 systemd-logind[1457]: New session 54 of user core. Apr 28 00:47:15.226826 containerd[1473]: time="2026-04-28T00:47:15.104529132Z" level=error msg="get state for 297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41" error="context deadline exceeded: unknown" Apr 28 00:47:15.394182 containerd[1473]: time="2026-04-28T00:47:15.317763243Z" level=warning msg="unknown status" status=0 Apr 28 00:47:15.550804 systemd[1]: Started session-54.scope - Session 54 of User core. 
Apr 28 00:47:15.610876 kubelet[2526]: E0428 00:47:15.484454 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:47:16.487716 kubelet[2526]: E0428 00:47:16.477431 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.246s" Apr 28 00:47:16.607043 containerd[1473]: time="2026-04-28T00:47:16.602642528Z" level=info msg="RemoveContainer for \"00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e\"" Apr 28 00:47:17.059860 kubelet[2526]: E0428 00:47:17.056233 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:47:17.421651 containerd[1473]: time="2026-04-28T00:47:17.420003387Z" level=info msg="RemoveContainer for \"00a4447277ed6e8253aa251b26423c3830f2660c0e81559c7944bcaed4d05b6e\" returns successfully" Apr 28 00:47:18.658267 containerd[1473]: time="2026-04-28T00:47:18.642267940Z" level=error msg="get state for 297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41" error="context deadline exceeded: unknown" Apr 28 00:47:18.815756 containerd[1473]: time="2026-04-28T00:47:18.666562623Z" level=warning msg="unknown status" status=0 Apr 28 00:47:20.975879 kubelet[2526]: E0428 00:47:20.685557 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:47:22.776306 containerd[1473]: time="2026-04-28T00:47:22.763100650Z" level=error msg="get state for 297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41" error="context deadline exceeded: unknown" Apr 28 00:47:22.876480 containerd[1473]: time="2026-04-28T00:47:22.771317848Z" level=warning msg="unknown status" status=0 Apr 28 00:47:24.708143 kubelet[2526]: E0428 00:47:24.684325 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:47:24.786330 kubelet[2526]: E0428 00:47:24.705492 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:47:25.126833 containerd[1473]: time="2026-04-28T00:47:25.015180598Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 28 00:47:25.126833 containerd[1473]: time="2026-04-28T00:47:25.106532332Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:47:25.126833 containerd[1473]: time="2026-04-28T00:47:25.106739086Z" level=error msg="ttrpc: received message on inactive stream" stream=7 
Apr 28 00:47:27.440736 kubelet[2526]: E0428 00:47:27.439097 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:47:27.970603 kubelet[2526]: E0428 00:47:27.965110 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:47:28.187389 kubelet[2526]: E0428 00:47:27.907246 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:28.456358 kubelet[2526]: E0428 00:47:28.293848 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:24Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:29.294343 containerd[1473]: time="2026-04-28T00:47:29.293577706Z" level=info msg="StartContainer for \"297d1f2fb41974972ccdd7c4f2806d11e5c5ca8b2ec27bdc8ae9222e5bc34c41\" returns successfully" Apr 28 00:47:29.672425 kubelet[2526]: E0428 00:47:29.671661 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:47:30.275869 kubelet[2526]: E0428 00:47:30.257047 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get 
\"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:30.895177 kubelet[2526]: E0428 00:47:30.893122 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.303s" Apr 28 00:47:31.610410 kubelet[2526]: E0428 00:47:30.420873 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:47:32.795264 kubelet[2526]: E0428 00:47:32.793711 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="7s" Apr 28 00:47:32.959585 kubelet[2526]: E0428 00:47:32.942499 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:33.082802 kubelet[2526]: E0428 00:47:32.633478 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:47:34.086616 kubelet[2526]: E0428 00:47:34.018077 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:34.422944 kubelet[2526]: E0428 00:47:34.416159 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.428239 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.428874 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.428922 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.400658 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" 
pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.429644 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.429847 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:47:34.431123 kubelet[2526]: E0428 00:47:34.430022 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:47:52.248757 kubelet[2526]: E0428 00:47:51.701263 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:47:54.334561 kubelet[2526]: E0428 00:47:54.317342 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:47:54.638539 kubelet[2526]: E0428 00:47:53.864844 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-976lc.18aa5d84e4bccbb1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5d84e4bccbb1 kube-system 1268 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:48 +0000 UTC,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:22,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:54.664174 kubelet[2526]: E0428 00:47:54.649495 2526 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{coredns-66bc5c9577-976lc.18aa5e23d2920fad kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-976lc,UID:2f94c136-2158-4e5f-b19a-05695c38ab7a,APIVersion:v1,ResourceVersion:567,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,LastTimestamp:2026-04-28 00:35:10.900735917 +0000 UTC m=+896.338713214,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:55.551798 kubelet[2526]: E0428 00:47:55.283472 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:47:55.869575 kubelet[2526]: E0428 00:47:55.636566 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="23.029s" Apr 28 00:47:57.697219 kubelet[2526]: E0428 00:47:57.696829 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:47:59.982141 kubelet[2526]: E0428 00:47:59.977362 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:48:00.164280 kubelet[2526]: E0428 00:48:00.163478 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.201s" Apr 28 00:48:00.292988 sshd[7738]: pam_unix(sshd:session): session closed for user core Apr 28 00:48:00.694802 systemd[1]: sshd@53-10.0.0.11:22-10.0.0.1:39376.service: Deactivated successfully. Apr 28 00:48:01.011289 systemd[1]: session-54.scope: Deactivated successfully. Apr 28 00:48:01.056461 systemd[1]: session-54.scope: Consumed 24.228s CPU time. Apr 28 00:48:01.231645 systemd-logind[1457]: Session 54 logged out. Waiting for processes to exit. Apr 28 00:48:01.349780 systemd-logind[1457]: Removed session 54. Apr 28 00:48:02.861418 kubelet[2526]: E0428 00:48:02.826065 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.448s" Apr 28 00:48:04.003147 kubelet[2526]: E0428 00:48:04.001692 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:04.959259 kubelet[2526]: E0428 00:48:04.958348 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:05.112968 kubelet[2526]: E0428 00:48:04.906185 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:05.802402 kubelet[2526]: E0428 00:48:05.670362 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.417s" Apr 28 00:48:06.553228 systemd[1]: Started sshd@54-10.0.0.11:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412). 
Apr 28 00:48:06.857470 kubelet[2526]: E0428 00:48:06.602859 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:07.530646 kubelet[2526]: E0428 00:48:06.602644 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:48:07.877289 kubelet[2526]: E0428 00:48:06.475592 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:48:08.356388 kubelet[2526]: E0428 00:48:08.353290 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:47:55Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:48:08.606288 kubelet[2526]: E0428 00:48:07.757654 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:48:10.300259 kubelet[2526]: E0428 00:48:10.290414 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:48:11.637487 kubelet[2526]: E0428 00:48:10.960370 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:48:12.009266 kubelet[2526]: E0428 00:48:11.958777 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:48:13.790628 sshd[7851]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:48:14.169172 sshd[7851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:48:15.551358 kubelet[2526]: E0428 00:48:15.478636 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:48:15.614239 systemd-logind[1457]: New session 55 of user core. Apr 28 00:48:16.091912 systemd[1]: Started session-55.scope - Session 55 of User core. Apr 28 00:48:18.570309 kubelet[2526]: E0428 00:48:18.283509 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:48:20.354234 kubelet[2526]: E0428 00:48:20.351359 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:48:21.050114 kubelet[2526]: E0428 00:48:20.596317 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:48:21.114170 kubelet[2526]: E0428 00:48:20.795568 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:48:22.220633 kubelet[2526]: E0428 00:48:22.213593 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:48:22.646657 kubelet[2526]: E0428 00:48:22.644681 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:48:30.266625 kubelet[2526]: E0428 00:48:29.805678 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:48:31.981690 kubelet[2526]: E0428 00:48:31.909709 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:48:34.596685 kubelet[2526]: E0428 00:48:34.594066 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="28.621s" Apr 28 00:48:35.491658 kubelet[2526]: E0428 00:48:35.489227 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:48:39.746295 kubelet[2526]: E0428 00:48:38.467619 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:42.082783 kubelet[2526]: E0428 00:48:42.078929 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:44.360007 kubelet[2526]: E0428 00:48:43.528723 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:45.308795 kubelet[2526]: E0428 00:48:45.307993 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:48:46.089489 kubelet[2526]: E0428 00:48:46.088854 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:46.237411 kubelet[2526]: E0428 00:48:45.593364 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 
kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:48:47.503770 kubelet[2526]: E0428 00:48:47.501723 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:48:48.126458 kubelet[2526]: E0428 00:48:47.887056 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:48:48.865460 kubelet[2526]: E0428 00:48:48.859742 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:48:50.792946 sshd[7851]: pam_unix(sshd:session): session closed for user core Apr 28 00:48:51.207404 systemd[1]: sshd@54-10.0.0.11:22-10.0.0.1:34412.service: Deactivated successfully. Apr 28 00:48:51.271022 systemd[1]: sshd@54-10.0.0.11:22-10.0.0.1:34412.service: Consumed 2.798s CPU time. Apr 28 00:48:51.513974 kubelet[2526]: E0428 00:48:51.331273 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:48:51.518217 systemd[1]: session-55.scope: Deactivated successfully. Apr 28 00:48:51.551005 systemd[1]: session-55.scope: Consumed 17.359s CPU time. Apr 28 00:48:51.699712 systemd-logind[1457]: Session 55 logged out. Waiting for processes to exit. Apr 28 00:48:51.813687 systemd-logind[1457]: Removed session 55. Apr 28 00:48:54.396721 kubelet[2526]: E0428 00:48:54.396089 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.379s" Apr 28 00:48:55.254481 kubelet[2526]: E0428 00:48:55.250422 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:48:56.602770 systemd[1]: Started sshd@55-10.0.0.11:22-10.0.0.1:35712.service - OpenSSH per-connection server daemon (10.0.0.1:35712). 
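
The Unhealthy events above record the kube-apiserver liveness probe failing against https://10.0.0.11:6443/livez with "net/http: TLS handshake timeout". Below is a minimal standalone Go sketch of that kind of probe, useful for matching the error strings in the log to a plain HTTP client timeout. It is not kubelet code; the URL and the 10-second timeout are taken from the log entries, and skipping certificate verification is an assumption made only to keep the example self-contained.

    // probe_livez.go - standalone sketch of the failing liveness check seen above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Requests in the log fail with "Client.Timeout exceeded" or
            // "net/http: TLS handshake timeout" before this deadline is useful.
            Timeout: 10 * time.Second,
            Transport: &http.Transport{
                TLSHandshakeTimeout: 10 * time.Second,
                // Assumption: verification skipped only so the sketch runs without
                // the cluster CA bundle; not how the kubelet talks to the API server.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        resp, err := client.Get("https://10.0.0.11:6443/livez")
        if err != nil {
            fmt.Println("livez probe failed:", err) // e.g. "net/http: TLS handshake timeout"
            return
        }
        defer resp.Body.Close()
        fmt.Println("livez probe status:", resp.Status)
    }
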
Apr 28 00:49:02.905396 kubelet[2526]: E0428 00:49:02.062704 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:49:03.507627 kubelet[2526]: E0428 00:49:03.503611 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:49:03.565671 kubelet[2526]: E0428 00:49:02.751836 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:49:04.035168 kubelet[2526]: E0428 00:49:04.028017 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:49:04.035168 kubelet[2526]: E0428 00:49:03.977714 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:49:04.175511 sshd[7926]: Accepted publickey for core from 10.0.0.1 port 35712 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:49:04.748258 sshd[7926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:49:06.171709 systemd-logind[1457]: New session 56 of user core. Apr 28 00:49:06.550150 systemd[1]: Started session-56.scope - Session 56 of User core. 
Apr 28 00:49:07.743473 kubelet[2526]: E0428 00:49:06.761427 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:49:12.002678 kubelet[2526]: E0428 00:49:12.001865 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:49:14.174226 kubelet[2526]: E0428 00:49:14.169225 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:49:14.460998 kubelet[2526]: E0428 00:49:14.459747 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:49:15.914593 kubelet[2526]: E0428 00:49:15.514748 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:49:17.662561 kubelet[2526]: E0428 00:49:17.025761 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:49:18.969296 kubelet[2526]: E0428 00:49:18.956112 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:49:20.863655 kubelet[2526]: E0428 00:49:20.843979 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" 
pod="kube-system/kube-controller-manager-localhost" Apr 28 00:49:23.769382 kubelet[2526]: E0428 00:49:23.766810 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="28.191s" Apr 28 00:49:25.271547 kubelet[2526]: E0428 00:49:25.270752 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:49:25.393185 kubelet[2526]: E0428 00:49:25.385478 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:49:27.401774 kubelet[2526]: E0428 00:49:27.174804 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:49:30.684492 kubelet[2526]: E0428 00:49:30.568646 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:49:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:49:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:49:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:49:16Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:49:31.212324 kubelet[2526]: E0428 00:49:31.212086 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.154s" Apr 28 00:49:31.555087 kubelet[2526]: E0428 00:49:31.533812 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:49:32.756836 kubelet[2526]: E0428 00:49:32.746437 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:49:33.813577 kubelet[2526]: E0428 00:49:33.793849 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.508s" Apr 28 00:49:34.455018 kubelet[2526]: E0428 00:49:34.454077 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:49:35.315431 kubelet[2526]: E0428 00:49:35.309744 2526 kubelet.go:2618] "Housekeeping 
took longer than expected" err="housekeeping took too long" expected="1s" actual="1.246s" Apr 28 00:49:35.970466 kubelet[2526]: E0428 00:49:35.957832 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:49:36.496797 kubelet[2526]: E0428 00:49:36.452812 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:49:40.682827 sshd[7926]: pam_unix(sshd:session): session closed for user core Apr 28 00:49:41.159520 systemd[1]: sshd@55-10.0.0.11:22-10.0.0.1:35712.service: Deactivated successfully. Apr 28 00:49:41.165838 systemd[1]: sshd@55-10.0.0.11:22-10.0.0.1:35712.service: Consumed 2.771s CPU time. Apr 28 00:49:41.441357 systemd[1]: session-56.scope: Deactivated successfully. Apr 28 00:49:41.455762 systemd[1]: session-56.scope: Consumed 16.460s CPU time. Apr 28 00:49:41.694714 systemd-logind[1457]: Session 56 logged out. Waiting for processes to exit. Apr 28 00:49:41.902224 systemd-logind[1457]: Removed session 56. Apr 28 00:49:42.053087 kubelet[2526]: E0428 00:49:42.052522 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:49:43.124391 kubelet[2526]: E0428 00:49:42.996842 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:49:44.759029 kubelet[2526]: E0428 00:49:44.742558 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.247s" Apr 28 00:49:45.556491 kubelet[2526]: E0428 00:49:45.543032 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:49:46.819200 systemd[1]: Started sshd@56-10.0.0.11:22-10.0.0.1:50524.service - OpenSSH per-connection server daemon (10.0.0.1:50524). 
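
The "Housekeeping took longer than expected" entries compare each kubelet housekeeping pass against its expected 1s interval and log the actual duration when it overruns, here by as much as roughly half a minute. A toy Go sketch of that style of timing check follows; the sleep merely stands in for a pass stalled on an overloaded node and is not what the kubelet actually does during housekeeping.

    // housekeeping_timing.go - toy illustration of the overrun check logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        expected := 1 * time.Second // interval the log entries cite as "expected"

        start := time.Now()
        time.Sleep(2 * time.Second) // stand-in for a slow housekeeping pass
        actual := time.Since(start)

        if actual > expected {
            fmt.Printf("Housekeeping took longer than expected: expected=%v actual=%v\n",
                expected, actual)
        }
    }
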
Apr 28 00:49:48.237563 kubelet[2526]: E0428 00:49:48.236924 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:49:49.687490 kubelet[2526]: E0428 00:49:49.680511 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:49:53.191639 kubelet[2526]: E0428 00:49:53.183623 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:49:55.475385 kubelet[2526]: E0428 00:49:55.446821 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:49:56.188504 kubelet[2526]: E0428 00:49:56.172384 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:49:56.464048 kubelet[2526]: E0428 00:49:55.611742 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:49:57.271570 kubelet[2526]: E0428 00:49:57.264578 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:49:57.982945 sshd[8010]: Accepted publickey for core from 10.0.0.1 port 50524 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:49:58.687874 sshd[8010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:50:00.315267 systemd-logind[1457]: New session 57 of user core. Apr 28 00:50:00.387610 systemd[1]: Started session-57.scope - Session 57 of User core. 
Apr 28 00:50:02.356450 kubelet[2526]: E0428 00:50:01.571881 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:50:06.810614 kubelet[2526]: E0428 00:50:06.809621 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:50:07.040853 kubelet[2526]: E0428 00:50:07.033721 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:50:07.101574 kubelet[2526]: E0428 00:50:07.047833 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 00:50:09.402881 kubelet[2526]: E0428 00:50:09.380241 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="24.365s" Apr 28 00:50:10.894707 kubelet[2526]: E0428 00:50:10.881581 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:50:12.885808 kubelet[2526]: E0428 00:50:12.881466 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-976lc\": net/http: TLS handshake timeout" podUID="2f94c136-2158-4e5f-b19a-05695c38ab7a" pod="kube-system/coredns-66bc5c9577-976lc" Apr 28 00:50:13.603727 kubelet[2526]: E0428 00:50:13.442858 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 28 00:50:14.744725 kubelet[2526]: E0428 00:50:14.618917 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.816s" Apr 28 00:50:15.201758 kubelet[2526]: E0428 
00:50:15.200058 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:50:19.820881 kubelet[2526]: E0428 00:50:19.198833 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:22.497739 kubelet[2526]: E0428 00:50:22.495955 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:50:23.046014 kubelet[2526]: E0428 00:50:23.038921 2526 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 00:50:23.969829 kubelet[2526]: E0428 00:50:23.284827 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:50:24.099607 kubelet[2526]: E0428 00:50:24.099229 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 00:50:24.154662 kubelet[2526]: E0428 00:50:24.152394 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-sn6rz\": net/http: TLS handshake timeout" podUID="69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36" pod="kube-system/coredns-66bc5c9577-sn6rz" Apr 28 00:50:24.572590 kubelet[2526]: E0428 00:50:24.562409 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:24.601199 kubelet[2526]: E0428 00:50:24.587956 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:50:24.813435 kubelet[2526]: E0428 00:50:24.803829 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:25.645211 kubelet[2526]: E0428 00:50:25.618685 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:26.148152 kubelet[2526]: E0428 00:50:26.147469 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:28.669689 kubelet[2526]: E0428 00:50:27.853708 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:31.760583 sshd[8010]: pam_unix(sshd:session): session closed for user core Apr 28 00:50:32.325144 systemd[1]: sshd@56-10.0.0.11:22-10.0.0.1:50524.service: Deactivated successfully. Apr 28 00:50:32.346855 systemd[1]: sshd@56-10.0.0.11:22-10.0.0.1:50524.service: Consumed 3.232s CPU time. Apr 28 00:50:32.735820 systemd[1]: session-57.scope: Deactivated successfully. Apr 28 00:50:32.760649 systemd[1]: session-57.scope: Consumed 14.969s CPU time. Apr 28 00:50:32.980443 systemd-logind[1457]: Session 57 logged out. Waiting for processes to exit. Apr 28 00:50:33.187153 systemd-logind[1457]: Removed session 57. Apr 28 00:50:33.894229 kubelet[2526]: E0428 00:50:33.686857 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:34.267111 kubelet[2526]: E0428 00:50:34.002589 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 00:50:34.933706 kubelet[2526]: E0428 00:50:34.932843 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="a646d41a511ed3aa4e8f9816f82de57d" pod="kube-system/kube-apiserver-localhost" Apr 28 00:50:35.378718 kubelet[2526]: E0428 00:50:35.360870 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.217s" Apr 28 00:50:36.373012 kubelet[2526]: E0428 00:50:36.368874 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:36.449342 kubelet[2526]: E0428 00:50:36.447596 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1346\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:50:38.086717 systemd[1]: Started sshd@57-10.0.0.11:22-10.0.0.1:38684.service - OpenSSH per-connection server daemon (10.0.0.1:38684). 
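
The repeated dns.go "Nameserver limits exceeded" entries mean the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are dropped. The sketch below illustrates that trimming behaviour; it is not the kubelet's dns.go, and the fourth nameserver in the example resolv.conf is an invented placeholder, since the dropped entry never appears in the log.

    // trim_nameservers.go - illustrative sketch of the 3-nameserver limit behind
    // the warnings above.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // the limit the kubelet warning refers to

    func main() {
        // Assumption: example resolv.conf with one nameserver too many; only the
        // first three match the "applied nameserver line" shown in the log.
        resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"

        var nameservers []string
        scanner := bufio.NewScanner(strings.NewReader(resolvConf))
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) == 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }

        if len(nameservers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(nameservers[:maxNameservers], " "))
        }
    }
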
Apr 28 00:50:41.704672 kubelet[2526]: E0428 00:50:41.690415 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1379\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:50:42.696079 kubelet[2526]: E0428 00:50:42.415481 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:50:45.159673 sshd[8093]: Accepted publickey for core from 10.0.0.1 port 38684 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:50:45.505780 kubelet[2526]: E0428 00:50:45.077107 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5e16981b97f0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5e16981b97f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:a646d41a511ed3aa4e8f9816f82de57d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.0.0.11:6443/livez\": net/http: TLS handshake timeout,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:34:14.08531864 +0000 UTC m=+839.523295936,LastTimestamp:2026-04-28 00:35:12.387308008 +0000 UTC m=+897.825285316,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:50:45.563392 sshd[8093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:50:45.966353 kubelet[2526]: E0428 00:50:45.951265 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1352\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 00:50:46.962177 systemd-logind[1457]: New session 58 of user core. Apr 28 00:50:47.266314 systemd[1]: Started session-58.scope - Session 58 of User core. 
Apr 28 00:50:47.770883 kubelet[2526]: E0428 00:50:47.748879 2526 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:50:48.105807 kubelet[2526]: E0428 00:50:48.061847 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="824fd89300514e351ed3b68d82c665c6" pod="kube-system/kube-scheduler-localhost" Apr 28 00:50:48.516616 kubelet[2526]: E0428 00:50:48.503244 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:50:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:50:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:50:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:50:35Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.11:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 00:50:57.328612 kubelet[2526]: E0428 00:50:56.982678 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.324s" Apr 28 00:50:58.652190 kubelet[2526]: E0428 00:50:58.649291 2526 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.11:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:50:59.352387 kubelet[2526]: E0428 00:50:59.351628 2526 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="c6bb8708a026256e82ca4c5631a78b5a" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:50:59.588220 kubelet[2526]: E0428 00:50:59.585624 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.932s" Apr 28 00:50:59.610485 kubelet[2526]: E0428 00:50:59.603799 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:59.624243 kubelet[2526]: E0428 00:50:59.624196 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:50:59.764384 kubelet[2526]: E0428 00:50:59.764209 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:51:00.587284 kubelet[2526]: E0428 00:51:00.585672 2526 reflector.go:205] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1363\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:51:01.564875 kubelet[2526]: E0428 00:51:01.564314 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:06.522597 sshd[8093]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:06.727727 systemd[1]: sshd@57-10.0.0.11:22-10.0.0.1:38684.service: Deactivated successfully. Apr 28 00:51:06.731515 systemd[1]: sshd@57-10.0.0.11:22-10.0.0.1:38684.service: Consumed 2.521s CPU time. Apr 28 00:51:06.801379 systemd[1]: session-58.scope: Deactivated successfully. Apr 28 00:51:06.809612 systemd[1]: session-58.scope: Consumed 7.569s CPU time. Apr 28 00:51:06.899154 systemd-logind[1457]: Session 58 logged out. Waiting for processes to exit. Apr 28 00:51:06.930058 systemd[1]: Started sshd@58-10.0.0.11:22-10.0.0.1:35848.service - OpenSSH per-connection server daemon (10.0.0.1:35848). Apr 28 00:51:06.954032 systemd-logind[1457]: Removed session 58. Apr 28 00:51:07.148112 sshd[8167]: Accepted publickey for core from 10.0.0.1 port 35848 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:51:07.175755 sshd[8167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:07.236765 systemd-logind[1457]: New session 59 of user core. Apr 28 00:51:07.367357 systemd[1]: Started session-59.scope - Session 59 of User core. Apr 28 00:51:10.615258 sshd[8167]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:10.719741 systemd[1]: Started sshd@59-10.0.0.11:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). Apr 28 00:51:10.823651 systemd[1]: sshd@58-10.0.0.11:22-10.0.0.1:35848.service: Deactivated successfully. Apr 28 00:51:10.856174 systemd[1]: session-59.scope: Deactivated successfully. Apr 28 00:51:10.856637 systemd[1]: session-59.scope: Consumed 1.522s CPU time. Apr 28 00:51:10.883395 systemd-logind[1457]: Session 59 logged out. Waiting for processes to exit. Apr 28 00:51:10.888400 systemd-logind[1457]: Removed session 59. Apr 28 00:51:11.015729 sshd[8188]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:51:11.030768 sshd[8188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:11.056461 systemd-logind[1457]: New session 60 of user core. Apr 28 00:51:11.060455 systemd[1]: Started session-60.scope - Session 60 of User core. Apr 28 00:51:18.841708 kubelet[2526]: E0428 00:51:18.838786 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.433s" Apr 28 00:51:29.079912 sshd[8188]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:29.463337 systemd[1]: sshd@59-10.0.0.11:22-10.0.0.1:44208.service: Deactivated successfully. Apr 28 00:51:29.617746 systemd[1]: session-60.scope: Deactivated successfully. Apr 28 00:51:29.618360 systemd[1]: session-60.scope: Consumed 5.132s CPU time. Apr 28 00:51:29.663729 systemd-logind[1457]: Session 60 logged out. Waiting for processes to exit. Apr 28 00:51:29.695797 systemd[1]: Started sshd@60-10.0.0.11:22-10.0.0.1:58014.service - OpenSSH per-connection server daemon (10.0.0.1:58014). 
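
The "Error updating node status" entries show the strategic-merge patch the kubelet keeps trying to send to PATCH /api/v1/nodes/localhost/status: an element-order hint plus refreshed lastHeartbeatTime values for each node condition. The sketch below only reconstructs that payload shape, copied from the failed-patch log entry above (including its 2026-04-28T00:50:35Z heartbeat); it does not talk to an API server.

    // node_status_patch.go - rebuilds the JSON body of the node-status patch that
    // the log shows timing out against the API server.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        patch := map[string]interface{}{
            "status": map[string]interface{}{
                // Keeps the conditions list in a fixed order when the server merges the patch.
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "NetworkUnavailable"},
                    {"type": "MemoryPressure"},
                    {"type": "DiskPressure"},
                    {"type": "PIDPressure"},
                    {"type": "Ready"},
                },
                // On a steady node only the heartbeat timestamps change, so the patch stays small.
                "conditions": []map[string]string{
                    {"lastHeartbeatTime": "2026-04-28T00:50:35Z", "type": "MemoryPressure"},
                    {"lastHeartbeatTime": "2026-04-28T00:50:35Z", "type": "DiskPressure"},
                    {"lastHeartbeatTime": "2026-04-28T00:50:35Z", "type": "PIDPressure"},
                    {"lastHeartbeatTime": "2026-04-28T00:50:35Z", "type": "Ready"},
                },
            },
        }

        body, _ := json.MarshalIndent(patch, "", "  ")
        fmt.Println(string(body))
    }
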
Apr 28 00:51:29.819684 systemd-logind[1457]: Removed session 60. Apr 28 00:51:31.171655 sshd[8271]: Accepted publickey for core from 10.0.0.1 port 58014 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:51:31.212587 sshd[8271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:31.342503 kubelet[2526]: E0428 00:51:31.341011 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.049s" Apr 28 00:51:31.419713 systemd-logind[1457]: New session 61 of user core. Apr 28 00:51:31.517959 systemd[1]: Started session-61.scope - Session 61 of User core. Apr 28 00:51:34.223562 kubelet[2526]: E0428 00:51:34.223224 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:34.258249 sshd[8271]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:34.598876 systemd[1]: sshd@60-10.0.0.11:22-10.0.0.1:58014.service: Deactivated successfully. Apr 28 00:51:34.661605 systemd[1]: session-61.scope: Deactivated successfully. Apr 28 00:51:34.663701 systemd[1]: session-61.scope: Consumed 1.907s CPU time. Apr 28 00:51:34.719763 systemd-logind[1457]: Session 61 logged out. Waiting for processes to exit. Apr 28 00:51:34.805208 systemd[1]: Started sshd@61-10.0.0.11:22-10.0.0.1:43242.service - OpenSSH per-connection server daemon (10.0.0.1:43242). Apr 28 00:51:34.931800 systemd-logind[1457]: Removed session 61. Apr 28 00:51:35.065290 systemd[1]: cri-containerd-a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7.scope: Deactivated successfully. Apr 28 00:51:35.070522 systemd[1]: cri-containerd-a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7.scope: Consumed 1min 49.023s CPU time. Apr 28 00:51:36.472580 kubelet[2526]: E0428 00:51:36.469326 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.216s" Apr 28 00:51:36.590453 sshd[8305]: Accepted publickey for core from 10.0.0.1 port 43242 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:51:36.643272 sshd[8305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:36.790784 systemd-logind[1457]: New session 62 of user core. Apr 28 00:51:36.825121 systemd[1]: Started session-62.scope - Session 62 of User core. Apr 28 00:51:36.995016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7-rootfs.mount: Deactivated successfully. 
Apr 28 00:51:37.142656 containerd[1473]: time="2026-04-28T00:51:37.140304237Z" level=info msg="shim disconnected" id=a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7 namespace=k8s.io Apr 28 00:51:37.165014 containerd[1473]: time="2026-04-28T00:51:37.161058380Z" level=warning msg="cleaning up after shim disconnected" id=a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7 namespace=k8s.io Apr 28 00:51:37.165014 containerd[1473]: time="2026-04-28T00:51:37.161417727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:51:37.411944 kubelet[2526]: E0428 00:51:37.410237 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:39.239257 kubelet[2526]: E0428 00:51:39.239110 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:39.244922 kubelet[2526]: E0428 00:51:39.244859 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:39.253466 kubelet[2526]: I0428 00:51:39.252988 2526 scope.go:117] "RemoveContainer" containerID="b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d" Apr 28 00:51:39.618326 containerd[1473]: time="2026-04-28T00:51:39.601583207Z" level=info msg="RemoveContainer for \"b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d\"" Apr 28 00:51:39.828511 containerd[1473]: time="2026-04-28T00:51:39.828132283Z" level=info msg="RemoveContainer for \"b5cc653730b7b306ff2b21874723bc3fb88f6b5b2b7a92400665491634f8c45d\" returns successfully" Apr 28 00:51:40.492346 kubelet[2526]: I0428 00:51:40.492230 2526 scope.go:117] "RemoveContainer" containerID="a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7" Apr 28 00:51:40.493610 kubelet[2526]: E0428 00:51:40.492432 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:40.493610 kubelet[2526]: E0428 00:51:40.492494 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:40.496141 containerd[1473]: time="2026-04-28T00:51:40.496085846Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}" Apr 28 00:51:40.615444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151554324.mount: Deactivated successfully. 
Apr 28 00:51:40.720018 containerd[1473]: time="2026-04-28T00:51:40.719615961Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150\"" Apr 28 00:51:40.722929 containerd[1473]: time="2026-04-28T00:51:40.722856818Z" level=info msg="StartContainer for \"93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150\"" Apr 28 00:51:40.722935 sshd[8305]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:40.913158 systemd[1]: sshd@61-10.0.0.11:22-10.0.0.1:43242.service: Deactivated successfully. Apr 28 00:51:40.928773 systemd[1]: session-62.scope: Deactivated successfully. Apr 28 00:51:40.936275 systemd[1]: session-62.scope: Consumed 2.896s CPU time. Apr 28 00:51:40.945285 systemd-logind[1457]: Session 62 logged out. Waiting for processes to exit. Apr 28 00:51:40.949750 systemd-logind[1457]: Removed session 62. Apr 28 00:51:40.958003 containerd[1473]: time="2026-04-28T00:51:40.957823173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:51:40.958003 containerd[1473]: time="2026-04-28T00:51:40.957973797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:51:40.958003 containerd[1473]: time="2026-04-28T00:51:40.957987525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:51:40.958334 containerd[1473]: time="2026-04-28T00:51:40.958072191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:51:41.315382 systemd[1]: Started cri-containerd-93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150.scope - libcontainer container 93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150. Apr 28 00:51:42.164266 containerd[1473]: time="2026-04-28T00:51:42.163927979Z" level=info msg="StartContainer for \"93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150\" returns successfully" Apr 28 00:51:42.745487 kubelet[2526]: E0428 00:51:42.745395 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:46.184454 systemd[1]: Started sshd@62-10.0.0.11:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Apr 28 00:51:46.960769 sshd[8439]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:51:47.114651 sshd[8439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:47.194062 systemd-logind[1457]: New session 63 of user core. Apr 28 00:51:47.209426 systemd[1]: Started session-63.scope - Session 63 of User core. 
Apr 28 00:51:47.415580 kubelet[2526]: E0428 00:51:47.412862 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:50.241229 kubelet[2526]: E0428 00:51:50.240579 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:50.568569 sshd[8439]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:50.616365 systemd[1]: sshd@62-10.0.0.11:22-10.0.0.1:50606.service: Deactivated successfully. Apr 28 00:51:50.708301 systemd[1]: session-63.scope: Deactivated successfully. Apr 28 00:51:50.708918 systemd[1]: session-63.scope: Consumed 2.617s CPU time. Apr 28 00:51:50.713801 systemd-logind[1457]: Session 63 logged out. Waiting for processes to exit. Apr 28 00:51:50.735803 systemd-logind[1457]: Removed session 63. Apr 28 00:51:55.798766 systemd[1]: Started sshd@63-10.0.0.11:22-10.0.0.1:40610.service - OpenSSH per-connection server daemon (10.0.0.1:40610). Apr 28 00:51:56.061445 sshd[8487]: Accepted publickey for core from 10.0.0.1 port 40610 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:51:56.099850 sshd[8487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:56.494424 systemd-logind[1457]: New session 64 of user core. Apr 28 00:51:56.557784 kubelet[2526]: E0428 00:51:56.547069 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:51:56.567670 systemd[1]: Started session-64.scope - Session 64 of User core. Apr 28 00:51:58.364294 kubelet[2526]: E0428 00:51:58.311834 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:04.392670 sshd[8487]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:04.465123 systemd-logind[1457]: Session 64 logged out. Waiting for processes to exit. Apr 28 00:52:04.488395 systemd[1]: sshd@63-10.0.0.11:22-10.0.0.1:40610.service: Deactivated successfully. Apr 28 00:52:04.549612 systemd[1]: session-64.scope: Deactivated successfully. Apr 28 00:52:04.550165 systemd[1]: session-64.scope: Consumed 5.613s CPU time. Apr 28 00:52:04.564812 systemd-logind[1457]: Removed session 64. Apr 28 00:52:06.216658 kubelet[2526]: E0428 00:52:06.214552 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:09.500977 systemd[1]: Started sshd@64-10.0.0.11:22-10.0.0.1:56562.service - OpenSSH per-connection server daemon (10.0.0.1:56562). Apr 28 00:52:10.660136 sshd[8537]: Accepted publickey for core from 10.0.0.1 port 56562 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:52:10.696612 sshd[8537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:10.998245 systemd-logind[1457]: New session 65 of user core. Apr 28 00:52:11.009112 systemd[1]: Started session-65.scope - Session 65 of User core. Apr 28 00:52:12.987125 sshd[8537]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:13.196551 systemd[1]: sshd@64-10.0.0.11:22-10.0.0.1:56562.service: Deactivated successfully. 
Apr 28 00:52:13.294124 systemd[1]: session-65.scope: Deactivated successfully. Apr 28 00:52:13.294534 systemd[1]: session-65.scope: Consumed 1.066s CPU time. Apr 28 00:52:13.399602 systemd-logind[1457]: Session 65 logged out. Waiting for processes to exit. Apr 28 00:52:13.440327 systemd-logind[1457]: Removed session 65. Apr 28 00:52:18.532476 systemd[1]: Started sshd@65-10.0.0.11:22-10.0.0.1:53604.service - OpenSSH per-connection server daemon (10.0.0.1:53604). Apr 28 00:52:19.155925 sshd[8589]: Accepted publickey for core from 10.0.0.1 port 53604 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:52:19.169517 sshd[8589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:19.381223 systemd-logind[1457]: New session 66 of user core. Apr 28 00:52:19.463140 systemd[1]: Started session-66.scope - Session 66 of User core. Apr 28 00:52:21.175418 sshd[8589]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:21.259975 systemd-logind[1457]: Session 66 logged out. Waiting for processes to exit. Apr 28 00:52:21.262377 systemd[1]: sshd@65-10.0.0.11:22-10.0.0.1:53604.service: Deactivated successfully. Apr 28 00:52:21.349492 systemd[1]: session-66.scope: Deactivated successfully. Apr 28 00:52:21.351134 systemd[1]: session-66.scope: Consumed 1.165s CPU time. Apr 28 00:52:21.365379 systemd-logind[1457]: Removed session 66. Apr 28 00:52:26.387856 systemd[1]: Started sshd@66-10.0.0.11:22-10.0.0.1:40428.service - OpenSSH per-connection server daemon (10.0.0.1:40428). Apr 28 00:52:26.866701 sshd[8633]: Accepted publickey for core from 10.0.0.1 port 40428 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:52:26.889316 sshd[8633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:27.389657 systemd-logind[1457]: New session 67 of user core. Apr 28 00:52:27.628527 systemd[1]: Started session-67.scope - Session 67 of User core. Apr 28 00:52:30.309430 kubelet[2526]: E0428 00:52:30.308480 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:32.214782 sshd[8633]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:32.466961 systemd[1]: sshd@66-10.0.0.11:22-10.0.0.1:40428.service: Deactivated successfully. Apr 28 00:52:32.594780 systemd[1]: session-67.scope: Deactivated successfully. Apr 28 00:52:32.597285 systemd[1]: session-67.scope: Consumed 2.749s CPU time. Apr 28 00:52:32.600196 systemd-logind[1457]: Session 67 logged out. Waiting for processes to exit. Apr 28 00:52:32.624452 systemd-logind[1457]: Removed session 67. Apr 28 00:52:37.493727 systemd[1]: Started sshd@67-10.0.0.11:22-10.0.0.1:50842.service - OpenSSH per-connection server daemon (10.0.0.1:50842). Apr 28 00:52:38.433424 sshd[8673]: Accepted publickey for core from 10.0.0.1 port 50842 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:52:38.602050 sshd[8673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:39.209309 systemd-logind[1457]: New session 68 of user core. Apr 28 00:52:39.243334 systemd[1]: Started session-68.scope - Session 68 of User core. Apr 28 00:52:44.125399 sshd[8673]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:44.324617 systemd[1]: sshd@67-10.0.0.11:22-10.0.0.1:50842.service: Deactivated successfully. 
Apr 28 00:52:44.343487 systemd[1]: session-68.scope: Deactivated successfully. Apr 28 00:52:44.344340 systemd[1]: session-68.scope: Consumed 3.083s CPU time. Apr 28 00:52:44.345370 systemd-logind[1457]: Session 68 logged out. Waiting for processes to exit. Apr 28 00:52:44.356783 systemd-logind[1457]: Removed session 68. Apr 28 00:52:49.398453 systemd[1]: Started sshd@68-10.0.0.11:22-10.0.0.1:49776.service - OpenSSH per-connection server daemon (10.0.0.1:49776). Apr 28 00:52:50.760145 sshd[8724]: Accepted publickey for core from 10.0.0.1 port 49776 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:52:50.969071 sshd[8724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:51.250240 systemd-logind[1457]: New session 69 of user core. Apr 28 00:52:51.275393 systemd[1]: Started session-69.scope - Session 69 of User core. Apr 28 00:52:56.269953 kubelet[2526]: E0428 00:52:56.267858 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.012s" Apr 28 00:52:56.906669 kubelet[2526]: E0428 00:52:56.904470 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:57.916220 sshd[8724]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:58.190300 systemd[1]: sshd@68-10.0.0.11:22-10.0.0.1:49776.service: Deactivated successfully. Apr 28 00:52:58.365162 systemd[1]: session-69.scope: Deactivated successfully. Apr 28 00:52:58.365800 systemd[1]: session-69.scope: Consumed 2.781s CPU time. Apr 28 00:52:58.456925 systemd-logind[1457]: Session 69 logged out. Waiting for processes to exit. Apr 28 00:52:58.459386 systemd-logind[1457]: Removed session 69. Apr 28 00:53:01.822745 kubelet[2526]: E0428 00:53:01.813467 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:03.311994 systemd[1]: Started sshd@69-10.0.0.11:22-10.0.0.1:60910.service - OpenSSH per-connection server daemon (10.0.0.1:60910). Apr 28 00:53:03.781414 kubelet[2526]: E0428 00:53:03.764586 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:04.664364 sshd[8774]: Accepted publickey for core from 10.0.0.1 port 60910 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:04.928361 sshd[8774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:05.237239 systemd-logind[1457]: New session 70 of user core. Apr 28 00:53:05.260195 systemd[1]: Started session-70.scope - Session 70 of User core. Apr 28 00:53:08.421557 kubelet[2526]: E0428 00:53:08.418377 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.134s" Apr 28 00:53:10.401164 sshd[8774]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:10.518822 systemd[1]: sshd@69-10.0.0.11:22-10.0.0.1:60910.service: Deactivated successfully. Apr 28 00:53:10.663784 systemd[1]: session-70.scope: Deactivated successfully. Apr 28 00:53:10.669028 systemd[1]: session-70.scope: Consumed 2.714s CPU time. Apr 28 00:53:10.715340 systemd-logind[1457]: Session 70 logged out. Waiting for processes to exit. 
Apr 28 00:53:10.858974 systemd-logind[1457]: Removed session 70. Apr 28 00:53:15.775456 systemd[1]: Started sshd@70-10.0.0.11:22-10.0.0.1:35338.service - OpenSSH per-connection server daemon (10.0.0.1:35338). Apr 28 00:53:16.253127 kubelet[2526]: E0428 00:53:16.253035 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.051s" Apr 28 00:53:16.281588 kubelet[2526]: E0428 00:53:16.262413 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:18.697269 sshd[8825]: Accepted publickey for core from 10.0.0.1 port 35338 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:18.945759 sshd[8825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:19.467376 systemd-logind[1457]: New session 71 of user core. Apr 28 00:53:19.750349 systemd[1]: Started session-71.scope - Session 71 of User core. Apr 28 00:53:20.160333 kubelet[2526]: E0428 00:53:20.158251 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.92s" Apr 28 00:53:21.419214 kubelet[2526]: E0428 00:53:21.418978 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:21.982263 kubelet[2526]: E0428 00:53:21.961516 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s" Apr 28 00:53:26.745561 kubelet[2526]: E0428 00:53:26.745325 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.41s" Apr 28 00:53:26.748854 kubelet[2526]: E0428 00:53:26.748836 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:30.886327 sshd[8825]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:31.260790 systemd[1]: sshd@70-10.0.0.11:22-10.0.0.1:35338.service: Deactivated successfully. Apr 28 00:53:31.278377 systemd[1]: sshd@70-10.0.0.11:22-10.0.0.1:35338.service: Consumed 1.148s CPU time. Apr 28 00:53:31.356293 systemd[1]: session-71.scope: Deactivated successfully. Apr 28 00:53:31.358527 systemd[1]: session-71.scope: Consumed 5.890s CPU time. Apr 28 00:53:31.449706 systemd-logind[1457]: Session 71 logged out. Waiting for processes to exit. Apr 28 00:53:31.501100 systemd-logind[1457]: Removed session 71. Apr 28 00:53:36.155829 systemd[1]: Started sshd@71-10.0.0.11:22-10.0.0.1:52474.service - OpenSSH per-connection server daemon (10.0.0.1:52474). Apr 28 00:53:36.771534 sshd[8890]: Accepted publickey for core from 10.0.0.1 port 52474 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:36.856792 sshd[8890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:37.150829 systemd-logind[1457]: New session 72 of user core. Apr 28 00:53:37.340110 systemd[1]: Started session-72.scope - Session 72 of User core. 
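The "Housekeeping took longer than expected" warnings above compare one pass of kubelet's periodic housekeeping against its expected 1s interval and log the actual duration when it overruns. The Go sketch below shows that timing pattern in isolation; the simulated work and the fixed five iterations are placeholders, not kubelet internals.

// housekeeping.go — illustrative sketch (not kubelet code) of timing a periodic
// task and warning when one pass exceeds the expected interval.
package main

import (
	"log"
	"math/rand"
	"time"
)

func main() {
	const expected = 1 * time.Second // the housekeeping interval seen in these logs
	ticker := time.NewTicker(expected)
	defer ticker.Stop()

	for i := 0; i < 5; i++ {
		<-ticker.C
		start := time.Now()
		// Stand-in for the real housekeeping work (stats collection, pod cleanup).
		time.Sleep(time.Duration(rand.Intn(1500)) * time.Millisecond)
		if actual := time.Since(start); actual > expected {
			log.Printf("Housekeeping took longer than expected: expected=%v actual=%v", expected, actual)
		}
	}
}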
Apr 28 00:53:40.658568 kubelet[2526]: E0428 00:53:40.655291 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.42s" Apr 28 00:53:42.482038 sshd[8890]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:42.623167 systemd[1]: sshd@71-10.0.0.11:22-10.0.0.1:52474.service: Deactivated successfully. Apr 28 00:53:42.644394 systemd[1]: session-72.scope: Deactivated successfully. Apr 28 00:53:42.645634 systemd[1]: session-72.scope: Consumed 2.776s CPU time. Apr 28 00:53:42.653179 systemd-logind[1457]: Session 72 logged out. Waiting for processes to exit. Apr 28 00:53:42.683454 systemd-logind[1457]: Removed session 72. Apr 28 00:53:47.711353 systemd[1]: Started sshd@72-10.0.0.11:22-10.0.0.1:56736.service - OpenSSH per-connection server daemon (10.0.0.1:56736). Apr 28 00:53:49.544408 sshd[8946]: Accepted publickey for core from 10.0.0.1 port 56736 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:49.758761 sshd[8946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:50.561961 systemd-logind[1457]: New session 73 of user core. Apr 28 00:53:50.798764 systemd[1]: Started session-73.scope - Session 73 of User core. Apr 28 00:53:51.649165 kubelet[2526]: E0428 00:53:51.647745 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.349s" Apr 28 00:53:55.251752 kubelet[2526]: E0428 00:53:55.250779 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.914s" Apr 28 00:53:56.852678 kubelet[2526]: E0428 00:53:56.852085 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.523s" Apr 28 00:53:59.128631 sshd[8946]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:59.293817 systemd[1]: sshd@72-10.0.0.11:22-10.0.0.1:56736.service: Deactivated successfully. Apr 28 00:53:59.406812 systemd[1]: session-73.scope: Deactivated successfully. Apr 28 00:53:59.456292 systemd[1]: session-73.scope: Consumed 4.844s CPU time. Apr 28 00:53:59.590143 systemd-logind[1457]: Session 73 logged out. Waiting for processes to exit. Apr 28 00:53:59.716169 systemd-logind[1457]: Removed session 73. Apr 28 00:53:59.884547 kubelet[2526]: E0428 00:53:59.880821 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:04.264505 systemd[1]: Started sshd@73-10.0.0.11:22-10.0.0.1:37208.service - OpenSSH per-connection server daemon (10.0.0.1:37208). Apr 28 00:54:06.162859 sshd[8989]: Accepted publickey for core from 10.0.0.1 port 37208 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:06.194620 sshd[8989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:06.678104 systemd-logind[1457]: New session 74 of user core. Apr 28 00:54:06.822783 systemd[1]: Started session-74.scope - Session 74 of User core. 
Apr 28 00:54:06.877157 kubelet[2526]: E0428 00:54:06.872791 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.431s" Apr 28 00:54:09.849080 kubelet[2526]: E0428 00:54:09.833803 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.592s" Apr 28 00:54:13.465458 sshd[8989]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:13.870846 systemd[1]: sshd@73-10.0.0.11:22-10.0.0.1:37208.service: Deactivated successfully. Apr 28 00:54:14.215974 systemd[1]: session-74.scope: Deactivated successfully. Apr 28 00:54:14.248772 systemd[1]: session-74.scope: Consumed 4.734s CPU time. Apr 28 00:54:14.385318 systemd-logind[1457]: Session 74 logged out. Waiting for processes to exit. Apr 28 00:54:14.566777 systemd-logind[1457]: Removed session 74. Apr 28 00:54:14.810334 kubelet[2526]: E0428 00:54:14.809235 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.51s" Apr 28 00:54:14.952067 kubelet[2526]: E0428 00:54:14.948444 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:19.095661 systemd[1]: Started sshd@74-10.0.0.11:22-10.0.0.1:37658.service - OpenSSH per-connection server daemon (10.0.0.1:37658). Apr 28 00:54:20.466232 kubelet[2526]: E0428 00:54:20.465236 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:20.940140 sshd[9035]: Accepted publickey for core from 10.0.0.1 port 37658 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:21.081861 sshd[9035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:21.863708 systemd-logind[1457]: New session 75 of user core. Apr 28 00:54:21.902288 systemd[1]: Started session-75.scope - Session 75 of User core. Apr 28 00:54:22.645983 kubelet[2526]: E0428 00:54:22.644471 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.322s" Apr 28 00:54:25.023199 kubelet[2526]: E0428 00:54:25.022563 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.803s" Apr 28 00:54:27.229632 kubelet[2526]: E0428 00:54:27.225096 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.903s" Apr 28 00:54:27.359133 kubelet[2526]: E0428 00:54:27.359007 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:27.596411 systemd[1]: cri-containerd-93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150.scope: Deactivated successfully. Apr 28 00:54:27.618127 systemd[1]: cri-containerd-93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150.scope: Consumed 46.874s CPU time. 
Apr 28 00:54:33.607299 kubelet[2526]: E0428 00:54:33.603968 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:33.616481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150-rootfs.mount: Deactivated successfully. Apr 28 00:54:34.262063 containerd[1473]: time="2026-04-28T00:54:34.243443440Z" level=info msg="shim disconnected" id=93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150 namespace=k8s.io Apr 28 00:54:34.317577 containerd[1473]: time="2026-04-28T00:54:34.268295616Z" level=warning msg="cleaning up after shim disconnected" id=93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150 namespace=k8s.io Apr 28 00:54:34.317577 containerd[1473]: time="2026-04-28T00:54:34.306918592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:54:34.363924 kubelet[2526]: E0428 00:54:34.363487 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.084s" Apr 28 00:54:35.909821 systemd[1]: cri-containerd-6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a.scope: Deactivated successfully. Apr 28 00:54:36.099359 systemd[1]: cri-containerd-6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a.scope: Consumed 4min 52.428s CPU time. Apr 28 00:54:36.155243 kubelet[2526]: E0428 00:54:35.992938 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:54:36.393248 kubelet[2526]: E0428 00:54:36.362881 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.079s" Apr 28 00:54:37.854418 sshd[9035]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:38.115202 systemd[1]: sshd@74-10.0.0.11:22-10.0.0.1:37658.service: Deactivated successfully. Apr 28 00:54:38.317091 systemd[1]: session-75.scope: Deactivated successfully. Apr 28 00:54:38.328413 systemd[1]: session-75.scope: Consumed 4.819s CPU time. Apr 28 00:54:38.468590 systemd-logind[1457]: Session 75 logged out. Waiting for processes to exit. Apr 28 00:54:38.642116 systemd-logind[1457]: Removed session 75. 
Apr 28 00:54:39.069346 containerd[1473]: time="2026-04-28T00:54:39.004997736Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150 Apr 28 00:54:39.489640 containerd[1473]: time="2026-04-28T00:54:39.483881137Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150 delete" error="signal: killed" namespace=k8s.io Apr 28 00:54:39.489640 containerd[1473]: time="2026-04-28T00:54:39.485679828Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150 namespace=k8s.io Apr 28 00:54:39.878138 kubelet[2526]: E0428 00:54:39.849163 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.421s" Apr 28 00:54:40.207683 kubelet[2526]: E0428 00:54:40.190793 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:41.094327 kubelet[2526]: E0428 00:54:41.092806 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.141s" Apr 28 00:54:43.066084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a-rootfs.mount: Deactivated successfully. Apr 28 00:54:43.944379 systemd[1]: Started sshd@75-10.0.0.11:22-10.0.0.1:35438.service - OpenSSH per-connection server daemon (10.0.0.1:35438). 
Apr 28 00:54:44.065543 containerd[1473]: time="2026-04-28T00:54:43.985616829Z" level=info msg="shim disconnected" id=6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a namespace=k8s.io Apr 28 00:54:44.065543 containerd[1473]: time="2026-04-28T00:54:44.063685474Z" level=warning msg="cleaning up after shim disconnected" id=6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a namespace=k8s.io Apr 28 00:54:44.065543 containerd[1473]: time="2026-04-28T00:54:44.064118941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:54:45.854052 kubelet[2526]: E0428 00:54:45.853536 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.539s" Apr 28 00:54:46.335788 kubelet[2526]: I0428 00:54:46.335239 2526 scope.go:117] "RemoveContainer" containerID="a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7" Apr 28 00:54:46.380539 kubelet[2526]: I0428 00:54:46.377420 2526 scope.go:117] "RemoveContainer" containerID="93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150" Apr 28 00:54:46.455092 kubelet[2526]: E0428 00:54:46.399341 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:46.615206 containerd[1473]: time="2026-04-28T00:54:46.611760478Z" level=info msg="RemoveContainer for \"a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7\"" Apr 28 00:54:46.641387 containerd[1473]: time="2026-04-28T00:54:46.640735571Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:6,}" Apr 28 00:54:46.714014 containerd[1473]: time="2026-04-28T00:54:46.712546732Z" level=info msg="RemoveContainer for \"a0ac537b67e6c9c214565fbfb889fac58007b2a05a5bdf2ba3d669ac1ec7dfe7\" returns successfully" Apr 28 00:54:46.989075 sshd[9142]: Accepted publickey for core from 10.0.0.1 port 35438 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:47.520260 sshd[9142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:47.631080 containerd[1473]: time="2026-04-28T00:54:47.628353443Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:6,} returns container id \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\"" Apr 28 00:54:48.673658 containerd[1473]: time="2026-04-28T00:54:48.669258623Z" level=info msg="StartContainer for \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\"" Apr 28 00:54:48.807237 systemd-logind[1457]: New session 76 of user core. Apr 28 00:54:49.163696 systemd[1]: Started session-76.scope - Session 76 of User core. 
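The Attempt counter in the CreateContainer metadata (Attempt:5 earlier, Attempt:6 here) shows kube-controller-manager being recreated after repeated exits. Kubelet spaces such restarts with an exponential backoff, documented as starting at 10s and capping at 5 minutes; the loop below only sketches that growth curve under those assumed parameters and is not kubelet code.

// backoff.go — hedged sketch of a doubling restart delay with a cap, matching the
// documented CrashLoopBackOff shape (10s base, 5m ceiling).
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := base
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: waiting %v before restarting container\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}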
Apr 28 00:54:54.279596 kubelet[2526]: E0428 00:54:54.268585 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.706s" Apr 28 00:54:55.967164 kubelet[2526]: E0428 00:54:55.966978 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.698s" Apr 28 00:54:56.257714 kubelet[2526]: I0428 00:54:56.245324 2526 scope.go:117] "RemoveContainer" containerID="30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0" Apr 28 00:54:57.216822 kubelet[2526]: I0428 00:54:57.215073 2526 scope.go:117] "RemoveContainer" containerID="6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a" Apr 28 00:54:57.368013 kubelet[2526]: E0428 00:54:57.367464 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:58.198700 containerd[1473]: time="2026-04-28T00:54:58.198622867Z" level=info msg="RemoveContainer for \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\"" Apr 28 00:54:58.598159 containerd[1473]: time="2026-04-28T00:54:58.198925256Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}" Apr 28 00:54:58.684599 kubelet[2526]: I0428 00:54:58.572509 2526 scope.go:117] "RemoveContainer" containerID="30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0" Apr 28 00:54:58.759228 kubelet[2526]: E0428 00:54:58.758665 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.479s" Apr 28 00:54:59.328097 containerd[1473]: time="2026-04-28T00:54:59.312353332Z" level=info msg="RemoveContainer for \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\" returns successfully" Apr 28 00:54:59.352700 containerd[1473]: time="2026-04-28T00:54:59.334449310Z" level=info msg="RemoveContainer for \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\"" Apr 28 00:54:59.352700 containerd[1473]: time="2026-04-28T00:54:59.348705442Z" level=info msg="RemoveContainer for \"30623e84dddc09c264da46ca8ec09e46882e1ee19a1ade1f70bbee60ef9a49a0\" returns successfully" Apr 28 00:54:59.764703 kubelet[2526]: I0428 00:54:59.764379 2526 scope.go:117] "RemoveContainer" containerID="93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150" Apr 28 00:55:00.635058 kubelet[2526]: E0428 00:55:00.634670 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.168s" Apr 28 00:55:00.980660 containerd[1473]: time="2026-04-28T00:55:00.955241428Z" level=info msg="RemoveContainer for \"93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150\"" Apr 28 00:55:01.910163 containerd[1473]: time="2026-04-28T00:55:01.651223375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:55:01.910163 containerd[1473]: time="2026-04-28T00:55:01.651366031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:55:01.910163 containerd[1473]: time="2026-04-28T00:55:01.651374445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:55:01.910163 containerd[1473]: time="2026-04-28T00:55:01.651751299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:55:02.575065 containerd[1473]: time="2026-04-28T00:55:02.574597497Z" level=info msg="RemoveContainer for \"93d6279973a6ecf62ad26521025cdb363fab368126b8cfd76d670571d156b150\" returns successfully" Apr 28 00:55:02.908359 kubelet[2526]: E0428 00:55:02.908113 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.67s" Apr 28 00:55:03.255592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564187796.mount: Deactivated successfully. Apr 28 00:55:03.479861 sshd[9142]: pam_unix(sshd:session): session closed for user core Apr 28 00:55:03.801768 containerd[1473]: time="2026-04-28T00:55:03.800333096Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\"" Apr 28 00:55:03.875167 systemd[1]: sshd@75-10.0.0.11:22-10.0.0.1:35438.service: Deactivated successfully. Apr 28 00:55:03.876294 systemd[1]: sshd@75-10.0.0.11:22-10.0.0.1:35438.service: Consumed 1.301s CPU time. Apr 28 00:55:04.088870 systemd[1]: session-76.scope: Deactivated successfully. Apr 28 00:55:04.118392 systemd[1]: session-76.scope: Consumed 9.352s CPU time. Apr 28 00:55:04.184591 systemd-logind[1457]: Session 76 logged out. Waiting for processes to exit. Apr 28 00:55:04.200535 systemd-logind[1457]: Removed session 76. Apr 28 00:55:04.260996 containerd[1473]: time="2026-04-28T00:55:04.259316121Z" level=info msg="StartContainer for \"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\"" Apr 28 00:55:05.414070 systemd[1]: Started cri-containerd-f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe.scope - libcontainer container f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe. Apr 28 00:55:08.156870 kubelet[2526]: E0428 00:55:08.148193 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.767s" Apr 28 00:55:09.140328 systemd[1]: Started sshd@76-10.0.0.11:22-10.0.0.1:60716.service - OpenSSH per-connection server daemon (10.0.0.1:60716). Apr 28 00:55:11.107277 containerd[1473]: time="2026-04-28T00:55:11.064601741Z" level=error msg="get state for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="context deadline exceeded: unknown" Apr 28 00:55:11.407195 containerd[1473]: time="2026-04-28T00:55:11.344158423Z" level=warning msg="unknown status" status=0 Apr 28 00:55:14.155331 sshd[9256]: Accepted publickey for core from 10.0.0.1 port 60716 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:55:14.287492 sshd[9256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:55:14.498206 kubelet[2526]: E0428 00:55:14.490630 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.342s" Apr 28 00:55:14.708908 systemd[1]: Started cri-containerd-cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221.scope - libcontainer container cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221. 
Apr 28 00:55:15.158390 systemd-logind[1457]: New session 77 of user core. Apr 28 00:55:15.185425 systemd[1]: Started session-77.scope - Session 77 of User core. Apr 28 00:55:15.304641 containerd[1473]: time="2026-04-28T00:55:15.136480446Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:55:17.072138 kubelet[2526]: E0428 00:55:17.071983 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.413s" Apr 28 00:55:18.245316 kubelet[2526]: E0428 00:55:18.193782 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:18.914049 containerd[1473]: time="2026-04-28T00:55:18.911940464Z" level=error msg="get state for cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221" error="context deadline exceeded: unknown" Apr 28 00:55:18.928928 containerd[1473]: time="2026-04-28T00:55:18.928344369Z" level=warning msg="unknown status" status=0 Apr 28 00:55:19.190810 kubelet[2526]: E0428 00:55:19.182309 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.11s" Apr 28 00:55:19.477138 containerd[1473]: time="2026-04-28T00:55:19.474481718Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 28 00:55:20.165143 containerd[1473]: time="2026-04-28T00:55:20.164640970Z" level=info msg="StartContainer for \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" returns successfully" Apr 28 00:55:20.653287 sshd[9256]: pam_unix(sshd:session): session closed for user core Apr 28 00:55:21.081879 systemd[1]: sshd@76-10.0.0.11:22-10.0.0.1:60716.service: Deactivated successfully. Apr 28 00:55:21.097761 systemd[1]: sshd@76-10.0.0.11:22-10.0.0.1:60716.service: Consumed 1.524s CPU time. Apr 28 00:55:21.262307 containerd[1473]: time="2026-04-28T00:55:21.259110763Z" level=info msg="StartContainer for \"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" returns successfully" Apr 28 00:55:21.265620 systemd[1]: session-77.scope: Deactivated successfully. Apr 28 00:55:21.449166 kubelet[2526]: E0428 00:55:21.400871 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.255s" Apr 28 00:55:21.268760 systemd[1]: session-77.scope: Consumed 3.421s CPU time. Apr 28 00:55:21.296868 systemd-logind[1457]: Session 77 logged out. Waiting for processes to exit. Apr 28 00:55:21.410541 systemd-logind[1457]: Removed session 77. 
Apr 28 00:55:23.018114 kubelet[2526]: E0428 00:55:23.004801 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:23.018114 kubelet[2526]: E0428 00:55:23.016705 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:23.994299 kubelet[2526]: E0428 00:55:23.990757 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:23.994299 kubelet[2526]: E0428 00:55:23.991381 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:25.750875 kubelet[2526]: E0428 00:55:25.750362 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:26.186512 systemd[1]: Started sshd@77-10.0.0.11:22-10.0.0.1:55534.service - OpenSSH per-connection server daemon (10.0.0.1:55534). Apr 28 00:55:27.211490 sshd[9364]: Accepted publickey for core from 10.0.0.1 port 55534 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:55:27.247253 sshd[9364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:55:27.887340 systemd-logind[1457]: New session 78 of user core. Apr 28 00:55:27.910139 systemd[1]: Started session-78.scope - Session 78 of User core. Apr 28 00:55:27.996662 kubelet[2526]: E0428 00:55:27.983316 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:28.498570 kubelet[2526]: E0428 00:55:28.494979 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:30.177601 kubelet[2526]: E0428 00:55:30.176845 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:31.594169 sshd[9364]: pam_unix(sshd:session): session closed for user core Apr 28 00:55:31.718718 systemd[1]: sshd@77-10.0.0.11:22-10.0.0.1:55534.service: Deactivated successfully. Apr 28 00:55:31.751583 systemd[1]: session-78.scope: Deactivated successfully. Apr 28 00:55:31.762101 systemd[1]: session-78.scope: Consumed 2.472s CPU time. Apr 28 00:55:31.882719 systemd-logind[1457]: Session 78 logged out. Waiting for processes to exit. Apr 28 00:55:31.916552 systemd-logind[1457]: Removed session 78. Apr 28 00:55:37.169339 systemd[1]: Started sshd@78-10.0.0.11:22-10.0.0.1:35484.service - OpenSSH per-connection server daemon (10.0.0.1:35484). 
Apr 28 00:55:37.745552 kubelet[2526]: E0428 00:55:37.744673 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:38.647661 kubelet[2526]: E0428 00:55:38.640517 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:38.719846 kubelet[2526]: E0428 00:55:38.687278 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:39.795183 sshd[9415]: Accepted publickey for core from 10.0.0.1 port 35484 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:55:40.173308 sshd[9415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:55:40.689991 systemd-logind[1457]: New session 79 of user core. Apr 28 00:55:40.949044 systemd[1]: Started session-79.scope - Session 79 of User core. Apr 28 00:55:41.305835 kubelet[2526]: E0428 00:55:41.299555 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.017s" Apr 28 00:55:42.070093 kubelet[2526]: E0428 00:55:42.067486 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:44.357839 sshd[9415]: pam_unix(sshd:session): session closed for user core Apr 28 00:55:44.591137 systemd[1]: sshd@78-10.0.0.11:22-10.0.0.1:35484.service: Deactivated successfully. Apr 28 00:55:44.596198 systemd[1]: sshd@78-10.0.0.11:22-10.0.0.1:35484.service: Consumed 1.314s CPU time. Apr 28 00:55:44.685780 systemd[1]: session-79.scope: Deactivated successfully. Apr 28 00:55:44.702386 systemd[1]: session-79.scope: Consumed 2.484s CPU time. Apr 28 00:55:44.792118 systemd-logind[1457]: Session 79 logged out. Waiting for processes to exit. Apr 28 00:55:44.823311 systemd-logind[1457]: Removed session 79. Apr 28 00:55:45.237175 kubelet[2526]: E0428 00:55:45.235716 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:48.305855 kubelet[2526]: E0428 00:55:48.300976 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:49.902635 systemd[1]: Started sshd@79-10.0.0.11:22-10.0.0.1:55448.service - OpenSSH per-connection server daemon (10.0.0.1:55448). Apr 28 00:55:50.787706 sshd[9470]: Accepted publickey for core from 10.0.0.1 port 55448 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:55:50.817465 sshd[9470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:55:51.116329 systemd-logind[1457]: New session 80 of user core. Apr 28 00:55:51.360270 systemd[1]: Started session-80.scope - Session 80 of User core. Apr 28 00:55:57.819453 sshd[9470]: pam_unix(sshd:session): session closed for user core Apr 28 00:55:58.064632 systemd[1]: sshd@79-10.0.0.11:22-10.0.0.1:55448.service: Deactivated successfully. Apr 28 00:55:58.266918 systemd[1]: session-80.scope: Deactivated successfully. 
Apr 28 00:55:58.267487 systemd[1]: session-80.scope: Consumed 3.383s CPU time. Apr 28 00:55:58.434590 systemd-logind[1457]: Session 80 logged out. Waiting for processes to exit. Apr 28 00:55:58.597297 systemd-logind[1457]: Removed session 80. Apr 28 00:56:03.316882 systemd[1]: Started sshd@80-10.0.0.11:22-10.0.0.1:42624.service - OpenSSH per-connection server daemon (10.0.0.1:42624). Apr 28 00:56:04.361989 kubelet[2526]: E0428 00:56:04.299858 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.014s" Apr 28 00:56:05.538459 sshd[9518]: Accepted publickey for core from 10.0.0.1 port 42624 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:56:05.681555 sshd[9518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:56:06.046414 systemd-logind[1457]: New session 81 of user core. Apr 28 00:56:06.052825 systemd[1]: Started session-81.scope - Session 81 of User core. Apr 28 00:56:07.202611 sshd[9518]: pam_unix(sshd:session): session closed for user core Apr 28 00:56:07.592648 systemd[1]: sshd@80-10.0.0.11:22-10.0.0.1:42624.service: Deactivated successfully. Apr 28 00:56:07.738640 systemd[1]: session-81.scope: Deactivated successfully. Apr 28 00:56:07.848577 systemd-logind[1457]: Session 81 logged out. Waiting for processes to exit. Apr 28 00:56:07.940156 systemd-logind[1457]: Removed session 81. Apr 28 00:56:08.357691 kubelet[2526]: E0428 00:56:08.353123 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.023s" Apr 28 00:56:12.393197 systemd[1]: Started sshd@81-10.0.0.11:22-10.0.0.1:59992.service - OpenSSH per-connection server daemon (10.0.0.1:59992). Apr 28 00:56:13.528872 sshd[9565]: Accepted publickey for core from 10.0.0.1 port 59992 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:56:13.547392 sshd[9565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:56:14.183759 systemd-logind[1457]: New session 82 of user core. Apr 28 00:56:14.257142 systemd[1]: Started session-82.scope - Session 82 of User core. Apr 28 00:56:17.496656 sshd[9565]: pam_unix(sshd:session): session closed for user core Apr 28 00:56:17.587840 systemd[1]: sshd@81-10.0.0.11:22-10.0.0.1:59992.service: Deactivated successfully. Apr 28 00:56:17.651593 systemd[1]: session-82.scope: Deactivated successfully. Apr 28 00:56:17.651853 systemd[1]: session-82.scope: Consumed 1.723s CPU time. Apr 28 00:56:17.772864 systemd-logind[1457]: Session 82 logged out. Waiting for processes to exit. Apr 28 00:56:17.814440 systemd-logind[1457]: Removed session 82. Apr 28 00:56:22.742249 systemd[1]: Started sshd@82-10.0.0.11:22-10.0.0.1:36324.service - OpenSSH per-connection server daemon (10.0.0.1:36324). Apr 28 00:56:24.466554 kubelet[2526]: E0428 00:56:24.466165 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.254s" Apr 28 00:56:24.781850 sshd[9606]: Accepted publickey for core from 10.0.0.1 port 36324 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:56:25.070844 sshd[9606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:56:25.987936 systemd-logind[1457]: New session 83 of user core. Apr 28 00:56:26.180847 systemd[1]: Started session-83.scope - Session 83 of User core. 
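Much of this stretch of the journal is SSH session churn: each session-N.scope is started, later deactivated, and systemd records the CPU time it consumed. A small stand-alone helper like the one below, fed the journal text on stdin, can pull those figures out; it is purely an analysis aid for reading this log, not part of any component shown in it.

// sessioncpu.go — scan journal text on stdin for "session-N.scope: Consumed Xs CPU
// time" entries and print the CPU time recorded per SSH session.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`session-(\d+)\.scope: Consumed ([\d.]+)s CPU time`)
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for scanner.Scan() {
		for _, m := range re.FindAllStringSubmatch(scanner.Text(), -1) {
			fmt.Printf("session %s consumed %ss CPU\n", m[1], m[2])
		}
	}
}

Run it as, for example, `cat journal.txt | go run sessioncpu.go`; entries reported in minutes (e.g. "4min 52.428s") are not matched by this simple pattern.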
Apr 28 00:56:28.287031 kubelet[2526]: E0428 00:56:28.286958 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.723s" Apr 28 00:56:29.866730 kubelet[2526]: E0428 00:56:29.864754 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.471s" Apr 28 00:56:35.571811 kubelet[2526]: E0428 00:56:35.566239 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.438s" Apr 28 00:56:35.743734 systemd[1]: cri-containerd-f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe.scope: Deactivated successfully. Apr 28 00:56:35.772480 systemd[1]: cri-containerd-f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe.scope: Consumed 27.843s CPU time. Apr 28 00:56:42.577850 sshd[9606]: pam_unix(sshd:session): session closed for user core Apr 28 00:56:42.745346 kubelet[2526]: E0428 00:56:42.745222 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:56:43.090169 systemd[1]: sshd@82-10.0.0.11:22-10.0.0.1:36324.service: Deactivated successfully. Apr 28 00:56:43.243667 systemd[1]: session-83.scope: Deactivated successfully. Apr 28 00:56:43.245546 systemd[1]: session-83.scope: Consumed 4.238s CPU time. Apr 28 00:56:43.516171 systemd-logind[1457]: Session 83 logged out. Waiting for processes to exit. Apr 28 00:56:43.686572 systemd-logind[1457]: Removed session 83. Apr 28 00:56:44.464151 systemd[1]: cri-containerd-cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221.scope: Deactivated successfully. Apr 28 00:56:44.553013 systemd[1]: cri-containerd-cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221.scope: Consumed 22.296s CPU time. Apr 28 00:56:44.884137 kubelet[2526]: E0428 00:56:44.864386 2526 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice/cri-containerd-cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221.scope\": RecentStats: unable to find data in memory cache]" Apr 28 00:56:47.201294 kubelet[2526]: E0428 00:56:47.197800 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.182s" Apr 28 00:56:48.487739 systemd[1]: Started sshd@83-10.0.0.11:22-10.0.0.1:44360.service - OpenSSH per-connection server daemon (10.0.0.1:44360). 
Apr 28 00:56:54.548774 sshd[9659]: Accepted publickey for core from 10.0.0.1 port 44360 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:56:55.407839 sshd[9659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:56:55.960333 kubelet[2526]: E0428 00:56:55.958634 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:56:56.070770 containerd[1473]: time="2026-04-28T00:56:55.805320849Z" level=error msg="failed to handle container TaskExit event container_id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" pid:9278 exit_status:1 exited_at:{seconds:1777337805 nanos:276632232}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:56:56.463176 systemd-logind[1457]: New session 84 of user core. Apr 28 00:56:56.810269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221-rootfs.mount: Deactivated successfully. Apr 28 00:56:56.979242 containerd[1473]: time="2026-04-28T00:56:56.812607372Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 28 00:56:57.071242 containerd[1473]: time="2026-04-28T00:56:56.979379001Z" level=error msg="get state for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="context deadline exceeded: unknown" Apr 28 00:56:57.207657 systemd[1]: Started session-84.scope - Session 84 of User core. Apr 28 00:56:57.344251 containerd[1473]: time="2026-04-28T00:56:57.020774664Z" level=warning msg="unknown status" status=0 Apr 28 00:56:57.465230 containerd[1473]: time="2026-04-28T00:56:57.464583165Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 28 00:56:57.465230 containerd[1473]: time="2026-04-28T00:56:57.465111547Z" level=info msg="TaskExit event container_id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" pid:9278 exit_status:1 exited_at:{seconds:1777337805 nanos:276632232}" Apr 28 00:56:58.011720 containerd[1473]: time="2026-04-28T00:56:57.789077713Z" level=error msg="failed to handle container TaskExit event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:57:03.190740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe-rootfs.mount: Deactivated successfully. 
Apr 28 00:57:03.703355 containerd[1473]: time="2026-04-28T00:57:03.620978203Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 28 00:57:08.002355 containerd[1473]: time="2026-04-28T00:57:07.778823086Z" level=error msg="Failed to handle backOff event container_id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" pid:9278 exit_status:1 exited_at:{seconds:1777337805 nanos:276632232} for cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:57:08.152412 containerd[1473]: time="2026-04-28T00:57:08.145882991Z" level=info msg="TaskExit event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604}" Apr 28 00:57:08.612327 containerd[1473]: time="2026-04-28T00:57:08.448686897Z" level=error msg="ttrpc: received message on inactive stream" stream=51 Apr 28 00:57:09.492617 kubelet[2526]: E0428 00:57:09.116129 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:57:13.399666 kubelet[2526]: E0428 00:57:13.205992 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.623s" Apr 28 00:57:18.277759 containerd[1473]: time="2026-04-28T00:57:18.266533077Z" level=error msg="Failed to handle backOff event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604} for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:57:18.598955 containerd[1473]: time="2026-04-28T00:57:18.586866801Z" level=info msg="TaskExit event container_id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" pid:9278 exit_status:1 exited_at:{seconds:1777337805 nanos:276632232}" Apr 28 00:57:19.652860 containerd[1473]: time="2026-04-28T00:57:19.649226294Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 28 00:57:22.674650 sshd[9659]: pam_unix(sshd:session): session closed for user core Apr 28 00:57:22.955985 systemd[1]: sshd@83-10.0.0.11:22-10.0.0.1:44360.service: Deactivated successfully. Apr 28 00:57:22.958749 systemd[1]: sshd@83-10.0.0.11:22-10.0.0.1:44360.service: Consumed 1.890s CPU time. Apr 28 00:57:23.264109 systemd[1]: session-84.scope: Deactivated successfully. Apr 28 00:57:23.319036 systemd[1]: session-84.scope: Consumed 10.850s CPU time. Apr 28 00:57:23.671227 systemd-logind[1457]: Session 84 logged out. Waiting for processes to exit. 
Apr 28 00:57:23.768272 kubelet[2526]: E0428 00:57:23.692619 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:57:24.119725 systemd-logind[1457]: Removed session 84. Apr 28 00:57:28.746327 containerd[1473]: time="2026-04-28T00:57:28.734768533Z" level=error msg="Failed to handle backOff event container_id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" pid:9278 exit_status:1 exited_at:{seconds:1777337805 nanos:276632232} for cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:57:28.947500 containerd[1473]: time="2026-04-28T00:57:28.884801216Z" level=info msg="TaskExit event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604}" Apr 28 00:57:28.898689 systemd[1]: Started sshd@84-10.0.0.11:22-10.0.0.1:58548.service - OpenSSH per-connection server daemon (10.0.0.1:58548). Apr 28 00:57:29.756597 containerd[1473]: time="2026-04-28T00:57:29.656713987Z" level=error msg="ttrpc: received message on inactive stream" stream=65 Apr 28 00:57:33.259560 kubelet[2526]: E0428 00:57:32.796724 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.016s" Apr 28 00:57:34.756161 sshd[9769]: Accepted publickey for core from 10.0.0.1 port 58548 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:57:34.856613 sshd[9769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:57:36.031775 systemd-logind[1457]: New session 85 of user core. Apr 28 00:57:36.166479 systemd[1]: Started session-85.scope - Session 85 of User core. 
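The "Failed to update lease" errors and the later "failed 5 attempts to update lease, fallback to ensure lease" message reflect kubelet retrying the node Lease update (the PUT with ?timeout=10s in the URL) a fixed number of times before falling back. The sketch below mimics that retry shape with a fictional renewLease helper and deliberately scaled-down timeouts so it finishes instantly; it is not the client-go implementation.

// leaseretry.go — hedged sketch of bounded, per-attempt-timeout retries around a
// lease renewal call, ending in a fallback message after five failures.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// renewLease stands in for the API request; here it always outlives the deadline.
func renewLease(ctx context.Context) error {
	select {
	case <-time.After(50 * time.Millisecond): // pretend the request hangs
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	const attempts = 5
	for i := 1; i <= attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
		err := renewLease(ctx)
		cancel()
		if err == nil {
			fmt.Println("lease renewed")
			return
		}
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Printf("attempt %d: Failed to update lease: %v\n", i, err)
		}
	}
	fmt.Printf("failed %d attempts to update lease, fallback to ensure lease\n", attempts)
}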
Apr 28 00:57:37.956514 kubelet[2526]: E0428 00:57:37.904548 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:57:38.363759 kubelet[2526]: I0428 00:57:38.320718 2526 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 28 00:57:39.166259 containerd[1473]: time="2026-04-28T00:57:39.160751449Z" level=error msg="Failed to handle backOff event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604} for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:57:39.166259 containerd[1473]: time="2026-04-28T00:57:39.161110578Z" level=info msg="TaskExit event container_id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" id:\"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" pid:9278 exit_status:1 exited_at:{seconds:1777337805 nanos:276632232}" Apr 28 00:57:40.655029 kubelet[2526]: E0428 00:57:40.653573 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.343s" Apr 28 00:57:41.921581 containerd[1473]: time="2026-04-28T00:57:41.393865132Z" level=error msg="ttrpc: received message on inactive stream" stream=67 Apr 28 00:57:43.457795 kubelet[2526]: E0428 00:57:43.456042 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:43.457795 kubelet[2526]: E0428 00:57:43.455988 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:43.568161 kubelet[2526]: E0428 00:57:43.563450 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:43.642447 kubelet[2526]: E0428 00:57:43.639431 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:44.348359 kubelet[2526]: E0428 00:57:44.348113 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:44.356365 kubelet[2526]: E0428 00:57:44.350508 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:44.356365 kubelet[2526]: E0428 00:57:44.350596 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:45.654739 kubelet[2526]: E0428 00:57:45.654502 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.19s" Apr 28 00:57:45.795774 
kubelet[2526]: E0428 00:57:45.762691 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:47.273832 kubelet[2526]: E0428 00:57:47.264057 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.492s" Apr 28 00:57:48.084643 containerd[1473]: time="2026-04-28T00:57:48.022048329Z" level=info msg="shim disconnected" id=cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221 namespace=k8s.io Apr 28 00:57:48.084643 containerd[1473]: time="2026-04-28T00:57:48.022081269Z" level=warning msg="cleaning up after shim disconnected" id=cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221 namespace=k8s.io Apr 28 00:57:48.084643 containerd[1473]: time="2026-04-28T00:57:48.022087749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:57:48.377972 kubelet[2526]: I0428 00:57:48.279724 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:57:48.403575 kubelet[2526]: E0428 00:57:48.402780 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 28 00:57:48.416348 kubelet[2526]: E0428 00:57:48.411387 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:48.570576 kubelet[2526]: E0428 00:57:48.561495 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:48.923070 kubelet[2526]: E0428 00:57:48.897754 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.544s" Apr 28 00:57:49.092868 containerd[1473]: time="2026-04-28T00:57:49.074061683Z" level=info msg="StopContainer for \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" with timeout 30 (s)" Apr 28 00:57:49.469381 containerd[1473]: time="2026-04-28T00:57:49.468022404Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221 Apr 28 00:57:49.513183 containerd[1473]: time="2026-04-28T00:57:49.507807451Z" level=info msg="Stop container \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" with signal terminated" Apr 28 00:57:51.005002 containerd[1473]: time="2026-04-28T00:57:50.901845882Z" level=info msg="TaskExit event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604}" Apr 28 00:57:53.311074 kubelet[2526]: E0428 00:57:53.117638 2526 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0\": stream error: stream ID 331; INTERNAL_ERROR; received from peer" event="&Event{ObjectMeta:{coredns-66bc5c9577-sn6rz.18aa5d8381c7a9f0 kube-system 1399 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-66bc5c9577-sn6rz,UID:69b6c5c4-1b0f-43c4-a6e5-e4ff6b274b36,APIVersion:v1,ResourceVersion:568,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:23:42 +0000 UTC,LastTimestamp:2026-04-28 00:56:33.105291882 +0000 UTC m=+2178.543269194,Count:73,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:53.361491 containerd[1473]: time="2026-04-28T00:57:53.351818913Z" level=error msg="get state for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="context deadline exceeded: unknown" Apr 28 00:57:53.379170 containerd[1473]: time="2026-04-28T00:57:53.361606389Z" level=warning msg="unknown status" status=0 Apr 28 00:57:53.477797 containerd[1473]: time="2026-04-28T00:57:53.360693370Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221 delete" error="signal: killed" namespace=k8s.io Apr 28 00:57:53.477797 containerd[1473]: time="2026-04-28T00:57:53.472336461Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221 namespace=k8s.io Apr 28 00:57:53.738232 kubelet[2526]: E0428 00:57:53.722306 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.452s" Apr 28 00:57:55.462645 containerd[1473]: time="2026-04-28T00:57:55.399037427Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 28 00:57:55.549599 containerd[1473]: time="2026-04-28T00:57:55.515185149Z" level=error msg="get state for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="context deadline exceeded: unknown" Apr 28 00:57:55.549599 containerd[1473]: time="2026-04-28T00:57:55.543868813Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 28 00:57:55.549599 containerd[1473]: time="2026-04-28T00:57:55.543994258Z" level=warning msg="unknown status" status=0 Apr 28 00:57:55.854187 kubelet[2526]: E0428 00:57:55.847599 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:58.420816 kubelet[2526]: E0428 00:57:58.420701 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.674s" Apr 28 00:57:58.468258 kubelet[2526]: I0428 00:57:58.421189 2526 scope.go:117] "RemoveContainer" containerID="cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221" Apr 28 00:57:58.468258 kubelet[2526]: E0428 00:57:58.421266 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:58.468258 kubelet[2526]: I0428 00:57:58.422083 2526 scope.go:117] "RemoveContainer" 
containerID="6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a" Apr 28 00:57:58.861489 kubelet[2526]: E0428 00:57:58.859225 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 28 00:58:00.062233 sshd[9769]: pam_unix(sshd:session): session closed for user core Apr 28 00:58:00.422430 systemd[1]: sshd@84-10.0.0.11:22-10.0.0.1:58548.service: Deactivated successfully. Apr 28 00:58:00.456038 systemd[1]: sshd@84-10.0.0.11:22-10.0.0.1:58548.service: Consumed 1.923s CPU time. Apr 28 00:58:00.612498 systemd[1]: session-85.scope: Deactivated successfully. Apr 28 00:58:00.613021 systemd[1]: session-85.scope: Consumed 8.011s CPU time. Apr 28 00:58:00.665214 systemd-logind[1457]: Session 85 logged out. Waiting for processes to exit. Apr 28 00:58:00.894477 systemd-logind[1457]: Removed session 85. Apr 28 00:58:01.121339 containerd[1473]: time="2026-04-28T00:58:01.117419478Z" level=error msg="Failed to handle backOff event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604} for f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 00:58:01.820265 containerd[1473]: time="2026-04-28T00:58:01.817406789Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 28 00:58:02.191139 containerd[1473]: time="2026-04-28T00:58:02.181423471Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:6,}" Apr 28 00:58:02.720804 containerd[1473]: time="2026-04-28T00:58:02.713823507Z" level=info msg="RemoveContainer for \"6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a\"" Apr 28 00:58:03.198869 containerd[1473]: time="2026-04-28T00:58:03.193777841Z" level=info msg="RemoveContainer for \"6dc550d34c16d61f2413488d8637e49fb5bb9df7067e03a8482f046125cfd41a\" returns successfully" Apr 28 00:58:04.247099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138378139.mount: Deactivated successfully. Apr 28 00:58:04.574282 containerd[1473]: time="2026-04-28T00:58:04.554986991Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:6,} returns container id \"aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263\"" Apr 28 00:58:04.739129 kubelet[2526]: E0428 00:58:04.738164 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.317s" Apr 28 00:58:05.865034 systemd[1]: Started sshd@85-10.0.0.11:22-10.0.0.1:56308.service - OpenSSH per-connection server daemon (10.0.0.1:56308). 
Apr 28 00:58:06.370604 containerd[1473]: time="2026-04-28T00:58:06.359476112Z" level=info msg="StartContainer for \"aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263\"" Apr 28 00:58:08.122737 kubelet[2526]: E0428 00:58:08.121061 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.383s" Apr 28 00:58:08.398876 kubelet[2526]: E0428 00:58:08.353973 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:08.571538 sshd[9893]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:58:08.588810 sshd[9893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:58:09.203746 systemd-logind[1457]: New session 86 of user core. Apr 28 00:58:09.263667 systemd[1]: Started session-86.scope - Session 86 of User core. Apr 28 00:58:09.471458 containerd[1473]: time="2026-04-28T00:58:09.452880330Z" level=info msg="TaskExit event container_id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" id:\"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" pid:9242 exit_status:1 exited_at:{seconds:1777337806 nanos:44344604}" Apr 28 00:58:09.761067 kubelet[2526]: E0428 00:58:09.728808 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.607s" Apr 28 00:58:10.799320 containerd[1473]: time="2026-04-28T00:58:10.799039903Z" level=info msg="shim disconnected" id=f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe namespace=k8s.io Apr 28 00:58:10.814317 containerd[1473]: time="2026-04-28T00:58:10.813654999Z" level=warning msg="cleaning up after shim disconnected" id=f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe namespace=k8s.io Apr 28 00:58:10.885753 containerd[1473]: time="2026-04-28T00:58:10.836871498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:58:13.416528 kubelet[2526]: E0428 00:58:13.413823 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.094s" Apr 28 00:58:14.798160 kubelet[2526]: E0428 00:58:14.793547 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.293s" Apr 28 00:58:15.056413 containerd[1473]: time="2026-04-28T00:58:15.055358192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:58:15.056413 containerd[1473]: time="2026-04-28T00:58:15.055539928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:58:15.056413 containerd[1473]: time="2026-04-28T00:58:15.055548724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:58:15.063195 containerd[1473]: time="2026-04-28T00:58:15.062999300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:58:15.126182 sshd[9893]: pam_unix(sshd:session): session closed for user core Apr 28 00:58:15.365014 systemd[1]: sshd@85-10.0.0.11:22-10.0.0.1:56308.service: Deactivated successfully. 
Apr 28 00:58:15.691369 systemd[1]: session-86.scope: Deactivated successfully. Apr 28 00:58:15.701794 systemd[1]: session-86.scope: Consumed 3.670s CPU time. Apr 28 00:58:15.759732 systemd-logind[1457]: Session 86 logged out. Waiting for processes to exit. Apr 28 00:58:15.922142 systemd-logind[1457]: Removed session 86. Apr 28 00:58:16.075407 containerd[1473]: time="2026-04-28T00:58:16.066552352Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe delete" error="signal: killed" namespace=k8s.io Apr 28 00:58:16.101809 containerd[1473]: time="2026-04-28T00:58:16.086488446Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe namespace=k8s.io Apr 28 00:58:16.459648 kubelet[2526]: I0428 00:58:16.459459 2526 scope.go:117] "RemoveContainer" containerID="cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221" Apr 28 00:58:16.570593 containerd[1473]: time="2026-04-28T00:58:16.560211452Z" level=info msg="Ensure that container f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe in task-service has been cleanup successfully" Apr 28 00:58:16.741695 containerd[1473]: time="2026-04-28T00:58:16.717415657Z" level=info msg="StopContainer for \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" returns successfully" Apr 28 00:58:16.767023 kubelet[2526]: E0428 00:58:16.764760 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:16.970703 containerd[1473]: time="2026-04-28T00:58:16.970116774Z" level=info msg="RemoveContainer for \"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\"" Apr 28 00:58:17.231473 kubelet[2526]: E0428 00:58:17.231238 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.21s" Apr 28 00:58:18.358482 containerd[1473]: time="2026-04-28T00:58:18.358400542Z" level=info msg="RemoveContainer for \"cfca32e6e6731a59b42e2c0091992c12ea86aef9e0ea3f18c4deab8e0c8ba221\" returns successfully" Apr 28 00:58:18.360262 containerd[1473]: time="2026-04-28T00:58:18.359439910Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}" Apr 28 00:58:18.786145 systemd[1]: Started cri-containerd-aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263.scope - libcontainer container aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263. 
Apr 28 00:58:18.906310 kubelet[2526]: E0428 00:58:18.905377 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.671s" Apr 28 00:58:20.562839 containerd[1473]: time="2026-04-28T00:58:20.553643575Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\"" Apr 28 00:58:21.241386 containerd[1473]: time="2026-04-28T00:58:21.233591971Z" level=info msg="StartContainer for \"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\"" Apr 28 00:58:21.241785 systemd[1]: Started sshd@86-10.0.0.11:22-10.0.0.1:38596.service - OpenSSH per-connection server daemon (10.0.0.1:38596). Apr 28 00:58:21.502643 containerd[1473]: time="2026-04-28T00:58:21.434681193Z" level=error msg="get state for aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263" error="context deadline exceeded: unknown" Apr 28 00:58:21.502643 containerd[1473]: time="2026-04-28T00:58:21.498161476Z" level=warning msg="unknown status" status=0 Apr 28 00:58:22.948689 kubelet[2526]: E0428 00:58:22.948538 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.58s" Apr 28 00:58:23.617667 sshd[9996]: Accepted publickey for core from 10.0.0.1 port 38596 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:58:24.163333 sshd[9996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:58:24.489391 systemd-logind[1457]: New session 87 of user core. Apr 28 00:58:24.614874 systemd[1]: Started session-87.scope - Session 87 of User core. Apr 28 00:58:24.940169 containerd[1473]: time="2026-04-28T00:58:24.906571457Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 28 00:58:25.815502 kubelet[2526]: E0428 00:58:25.809539 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.345s" Apr 28 00:58:28.743569 kubelet[2526]: E0428 00:58:28.742829 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.933s" Apr 28 00:58:28.990512 containerd[1473]: time="2026-04-28T00:58:28.990328906Z" level=info msg="StartContainer for \"aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263\" returns successfully" Apr 28 00:58:29.401368 containerd[1473]: time="2026-04-28T00:58:29.397431842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:58:29.401368 containerd[1473]: time="2026-04-28T00:58:29.397579466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:58:29.401368 containerd[1473]: time="2026-04-28T00:58:29.397589002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:58:29.587625 containerd[1473]: time="2026-04-28T00:58:29.533541142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:58:30.608940 kubelet[2526]: E0428 00:58:30.608748 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.85s" Apr 28 00:58:30.876798 sshd[9996]: pam_unix(sshd:session): session closed for user core Apr 28 00:58:31.189795 systemd-logind[1457]: Session 87 logged out. Waiting for processes to exit. Apr 28 00:58:31.214515 systemd[1]: sshd@86-10.0.0.11:22-10.0.0.1:38596.service: Deactivated successfully. Apr 28 00:58:31.305513 systemd[1]: sshd@86-10.0.0.11:22-10.0.0.1:38596.service: Consumed 1.030s CPU time. Apr 28 00:58:31.360162 systemd[1]: session-87.scope: Deactivated successfully. Apr 28 00:58:31.360495 systemd[1]: session-87.scope: Consumed 3.663s CPU time. Apr 28 00:58:31.387871 systemd-logind[1457]: Removed session 87. Apr 28 00:58:32.598365 kubelet[2526]: E0428 00:58:32.598228 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.986s" Apr 28 00:58:33.599855 kubelet[2526]: E0428 00:58:33.594420 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:33.804673 systemd[1]: Started cri-containerd-a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d.scope - libcontainer container a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d. Apr 28 00:58:34.805035 kubelet[2526]: E0428 00:58:34.804697 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:36.449044 systemd[1]: Started sshd@87-10.0.0.11:22-10.0.0.1:58998.service - OpenSSH per-connection server daemon (10.0.0.1:58998). Apr 28 00:58:36.717013 containerd[1473]: time="2026-04-28T00:58:36.715239597Z" level=error msg="get state for a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d" error="context deadline exceeded: unknown" Apr 28 00:58:36.810343 kubelet[2526]: E0428 00:58:36.760769 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.543s" Apr 28 00:58:37.016642 containerd[1473]: time="2026-04-28T00:58:37.010826032Z" level=warning msg="unknown status" status=0 Apr 28 00:58:39.659002 kubelet[2526]: E0428 00:58:39.658821 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.998s" Apr 28 00:58:40.178687 sshd[10105]: Accepted publickey for core from 10.0.0.1 port 58998 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:58:40.465787 sshd[10105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:58:40.636001 containerd[1473]: time="2026-04-28T00:58:40.634239289Z" level=error msg="get state for a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d" error="context deadline exceeded: unknown" Apr 28 00:58:40.636001 containerd[1473]: time="2026-04-28T00:58:40.634499922Z" level=warning msg="unknown status" status=0 Apr 28 00:58:41.125561 containerd[1473]: time="2026-04-28T00:58:41.087733334Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:58:41.153134 systemd-logind[1457]: New session 88 of user core. Apr 28 00:58:41.305688 systemd[1]: Started session-88.scope - Session 88 of User core. 
Apr 28 00:58:41.413242 containerd[1473]: time="2026-04-28T00:58:41.187267572Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 28 00:58:42.245816 kubelet[2526]: E0428 00:58:42.245747 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.481s" Apr 28 00:58:43.828984 kubelet[2526]: E0428 00:58:43.827337 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:45.252427 kubelet[2526]: E0428 00:58:45.249157 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.003s" Apr 28 00:58:46.183069 containerd[1473]: time="2026-04-28T00:58:46.181244297Z" level=info msg="StartContainer for \"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\" returns successfully" Apr 28 00:58:46.898384 kubelet[2526]: E0428 00:58:46.870632 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.48s" Apr 28 00:58:48.040303 kubelet[2526]: E0428 00:58:48.039128 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:48.876810 kubelet[2526]: E0428 00:58:48.873860 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.57s" Apr 28 00:58:49.666240 kubelet[2526]: E0428 00:58:49.665936 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:50.161294 kubelet[2526]: E0428 00:58:50.160243 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:55.135250 sshd[10105]: pam_unix(sshd:session): session closed for user core Apr 28 00:58:55.500260 systemd[1]: sshd@87-10.0.0.11:22-10.0.0.1:58998.service: Deactivated successfully. Apr 28 00:58:55.508541 systemd[1]: sshd@87-10.0.0.11:22-10.0.0.1:58998.service: Consumed 1.155s CPU time. Apr 28 00:58:55.780829 systemd[1]: session-88.scope: Deactivated successfully. Apr 28 00:58:55.825669 systemd[1]: session-88.scope: Consumed 3.841s CPU time. Apr 28 00:58:55.849071 systemd-logind[1457]: Session 88 logged out. Waiting for processes to exit. Apr 28 00:58:55.851584 systemd-logind[1457]: Removed session 88. Apr 28 00:58:56.615132 kubelet[2526]: E0428 00:58:56.614292 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:58.365575 kubelet[2526]: E0428 00:58:58.357089 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.109s" Apr 28 00:58:58.661493 kubelet[2526]: E0428 00:58:58.661258 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:00.551445 systemd[1]: Started sshd@88-10.0.0.11:22-10.0.0.1:58104.service - OpenSSH per-connection server daemon (10.0.0.1:58104). 
Apr 28 00:59:02.404164 kubelet[2526]: E0428 00:59:02.380047 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:03.433495 sshd[10179]: Accepted publickey for core from 10.0.0.1 port 58104 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:59:03.647662 sshd[10179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:59:04.351964 systemd-logind[1457]: New session 89 of user core. Apr 28 00:59:04.363383 systemd[1]: Started session-89.scope - Session 89 of User core. Apr 28 00:59:04.935392 kubelet[2526]: E0428 00:59:04.929459 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.728s" Apr 28 00:59:05.151773 kubelet[2526]: E0428 00:59:05.150563 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:06.505514 kubelet[2526]: E0428 00:59:06.502605 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s" Apr 28 00:59:07.627766 kubelet[2526]: E0428 00:59:07.627350 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:07.979985 sshd[10179]: pam_unix(sshd:session): session closed for user core Apr 28 00:59:08.098805 systemd[1]: sshd@88-10.0.0.11:22-10.0.0.1:58104.service: Deactivated successfully. Apr 28 00:59:08.103395 systemd[1]: sshd@88-10.0.0.11:22-10.0.0.1:58104.service: Consumed 1.060s CPU time. Apr 28 00:59:08.214335 systemd[1]: session-89.scope: Deactivated successfully. Apr 28 00:59:08.219845 systemd[1]: session-89.scope: Consumed 2.222s CPU time. Apr 28 00:59:08.258632 systemd-logind[1457]: Session 89 logged out. Waiting for processes to exit. Apr 28 00:59:08.353522 systemd-logind[1457]: Removed session 89. Apr 28 00:59:10.407677 kubelet[2526]: E0428 00:59:10.406558 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:12.526976 kubelet[2526]: E0428 00:59:12.526653 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:13.216031 kubelet[2526]: E0428 00:59:13.215756 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:13.365473 systemd[1]: Started sshd@89-10.0.0.11:22-10.0.0.1:53222.service - OpenSSH per-connection server daemon (10.0.0.1:53222). Apr 28 00:59:14.259613 sshd[10224]: Accepted publickey for core from 10.0.0.1 port 53222 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:59:14.315446 sshd[10224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:59:14.795563 systemd-logind[1457]: New session 90 of user core. Apr 28 00:59:14.854850 systemd[1]: Started session-90.scope - Session 90 of User core. 
Apr 28 00:59:19.009759 sshd[10224]: pam_unix(sshd:session): session closed for user core Apr 28 00:59:19.244849 systemd[1]: sshd@89-10.0.0.11:22-10.0.0.1:53222.service: Deactivated successfully. Apr 28 00:59:19.364979 systemd[1]: session-90.scope: Deactivated successfully. Apr 28 00:59:19.367776 systemd[1]: session-90.scope: Consumed 2.341s CPU time. Apr 28 00:59:19.567769 systemd-logind[1457]: Session 90 logged out. Waiting for processes to exit. Apr 28 00:59:19.644382 systemd-logind[1457]: Removed session 90. Apr 28 00:59:24.368154 systemd[1]: Started sshd@90-10.0.0.11:22-10.0.0.1:43454.service - OpenSSH per-connection server daemon (10.0.0.1:43454). Apr 28 00:59:25.923058 sshd[10283]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:59:26.010861 sshd[10283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:59:26.683366 systemd-logind[1457]: New session 91 of user core. Apr 28 00:59:26.807181 systemd[1]: Started session-91.scope - Session 91 of User core. Apr 28 00:59:30.537014 kubelet[2526]: E0428 00:59:30.522770 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.037s" Apr 28 00:59:32.468880 kubelet[2526]: E0428 00:59:32.456850 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.246s" Apr 28 00:59:32.843752 sshd[10283]: pam_unix(sshd:session): session closed for user core Apr 28 00:59:32.897035 systemd-logind[1457]: Session 91 logged out. Waiting for processes to exit. Apr 28 00:59:32.911997 systemd[1]: sshd@90-10.0.0.11:22-10.0.0.1:43454.service: Deactivated successfully. Apr 28 00:59:33.152576 systemd[1]: session-91.scope: Deactivated successfully. Apr 28 00:59:33.157176 systemd[1]: session-91.scope: Consumed 3.707s CPU time. Apr 28 00:59:33.344971 systemd-logind[1457]: Removed session 91. Apr 28 00:59:38.397641 systemd[1]: Started sshd@91-10.0.0.11:22-10.0.0.1:52250.service - OpenSSH per-connection server daemon (10.0.0.1:52250). Apr 28 00:59:41.117692 kubelet[2526]: E0428 00:59:41.117023 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.796s" Apr 28 00:59:41.164770 sshd[10324]: Accepted publickey for core from 10.0.0.1 port 52250 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:59:41.352857 sshd[10324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:59:41.805060 systemd-logind[1457]: New session 92 of user core. Apr 28 00:59:41.848217 systemd[1]: Started session-92.scope - Session 92 of User core. Apr 28 00:59:45.813228 sshd[10324]: pam_unix(sshd:session): session closed for user core Apr 28 00:59:45.960472 systemd[1]: sshd@91-10.0.0.11:22-10.0.0.1:52250.service: Deactivated successfully. Apr 28 00:59:45.974094 systemd[1]: sshd@91-10.0.0.11:22-10.0.0.1:52250.service: Consumed 1.097s CPU time. Apr 28 00:59:46.017410 systemd[1]: session-92.scope: Deactivated successfully. Apr 28 00:59:46.062704 systemd[1]: session-92.scope: Consumed 2.304s CPU time. Apr 28 00:59:46.193473 systemd-logind[1457]: Session 92 logged out. Waiting for processes to exit. Apr 28 00:59:46.302665 systemd-logind[1457]: Removed session 92. Apr 28 00:59:51.410683 systemd[1]: Started sshd@92-10.0.0.11:22-10.0.0.1:52948.service - OpenSSH per-connection server daemon (10.0.0.1:52948). 
Apr 28 00:59:53.064841 sshd[10378]: Accepted publickey for core from 10.0.0.1 port 52948 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:59:53.096678 sshd[10378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:59:53.523084 systemd-logind[1457]: New session 93 of user core. Apr 28 00:59:53.698709 systemd[1]: Started session-93.scope - Session 93 of User core. Apr 28 00:59:56.802571 sshd[10378]: pam_unix(sshd:session): session closed for user core Apr 28 00:59:56.893318 systemd[1]: sshd@92-10.0.0.11:22-10.0.0.1:52948.service: Deactivated successfully. Apr 28 00:59:57.029403 systemd[1]: session-93.scope: Deactivated successfully. Apr 28 00:59:57.033693 systemd[1]: session-93.scope: Consumed 1.102s CPU time. Apr 28 00:59:57.170679 systemd-logind[1457]: Session 93 logged out. Waiting for processes to exit. Apr 28 00:59:57.243485 systemd-logind[1457]: Removed session 93. Apr 28 01:00:02.450512 systemd[1]: Started sshd@93-10.0.0.11:22-10.0.0.1:50590.service - OpenSSH per-connection server daemon (10.0.0.1:50590). Apr 28 01:00:03.507137 kubelet[2526]: E0428 01:00:03.505859 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.189s" Apr 28 01:00:05.467132 sshd[10416]: Accepted publickey for core from 10.0.0.1 port 50590 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:00:05.605978 sshd[10416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:00:06.314616 kubelet[2526]: E0428 01:00:05.979324 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.426s" Apr 28 01:00:06.767028 systemd-logind[1457]: New session 94 of user core. Apr 28 01:00:07.104858 systemd[1]: Started session-94.scope - Session 94 of User core. Apr 28 01:00:10.642353 systemd[1]: cri-containerd-a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d.scope: Deactivated successfully. Apr 28 01:00:10.672745 systemd[1]: cri-containerd-a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d.scope: Consumed 30.614s CPU time. Apr 28 01:00:12.637071 kubelet[2526]: E0428 01:00:12.635213 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.176s" Apr 28 01:00:17.351668 kubelet[2526]: E0428 01:00:17.351426 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.175s" Apr 28 01:00:18.956861 systemd[1]: cri-containerd-aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263.scope: Deactivated successfully. Apr 28 01:00:18.958992 systemd[1]: cri-containerd-aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263.scope: Consumed 30.693s CPU time. 
Apr 28 01:00:19.081724 kubelet[2526]: E0428 01:00:19.075177 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:19.081724 kubelet[2526]: E0428 01:00:19.080240 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:19.081724 kubelet[2526]: E0428 01:00:19.080802 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:19.505537 kubelet[2526]: E0428 01:00:19.502295 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:20.977096 kubelet[2526]: E0428 01:00:20.974520 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:21.595633 kubelet[2526]: E0428 01:00:21.469190 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 01:00:23.671701 containerd[1473]: time="2026-04-28T01:00:23.602518346Z" level=error msg="failed to handle container TaskExit event container_id:\"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\" id:\"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\" pid:10095 exit_status:1 exited_at:{seconds:1777338012 nanos:638180807}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 01:00:25.593246 containerd[1473]: time="2026-04-28T01:00:25.539427257Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 28 01:00:25.670727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d-rootfs.mount: Deactivated successfully. 
Apr 28 01:00:25.891577 containerd[1473]: time="2026-04-28T01:00:25.786786761Z" level=info msg="TaskExit event container_id:\"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\" id:\"a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d\" pid:10095 exit_status:1 exited_at:{seconds:1777338012 nanos:638180807}" Apr 28 01:00:26.496596 kubelet[2526]: E0428 01:00:26.495106 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.686s" Apr 28 01:00:27.267636 kubelet[2526]: E0428 01:00:27.265519 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:27.679525 kubelet[2526]: E0428 01:00:27.674618 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:29.238956 sshd[10416]: pam_unix(sshd:session): session closed for user core Apr 28 01:00:29.454715 kubelet[2526]: E0428 01:00:29.445945 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.879s" Apr 28 01:00:29.613028 systemd[1]: sshd@93-10.0.0.11:22-10.0.0.1:50590.service: Deactivated successfully. Apr 28 01:00:29.613478 systemd[1]: sshd@93-10.0.0.11:22-10.0.0.1:50590.service: Consumed 1.249s CPU time. Apr 28 01:00:29.714682 systemd[1]: session-94.scope: Deactivated successfully. Apr 28 01:00:29.715389 systemd[1]: session-94.scope: Consumed 7.405s CPU time. Apr 28 01:00:29.810429 systemd-logind[1457]: Session 94 logged out. Waiting for processes to exit. Apr 28 01:00:29.845629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263-rootfs.mount: Deactivated successfully. Apr 28 01:00:29.912615 systemd-logind[1457]: Removed session 94. 
Apr 28 01:00:30.397778 containerd[1473]: time="2026-04-28T01:00:30.394202230Z" level=info msg="shim disconnected" id=aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263 namespace=k8s.io Apr 28 01:00:30.397778 containerd[1473]: time="2026-04-28T01:00:30.394601606Z" level=warning msg="cleaning up after shim disconnected" id=aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263 namespace=k8s.io Apr 28 01:00:30.397778 containerd[1473]: time="2026-04-28T01:00:30.394684789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:00:32.151310 kubelet[2526]: E0428 01:00:32.149944 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.384s" Apr 28 01:00:33.518878 containerd[1473]: time="2026-04-28T01:00:33.518754834Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263 Apr 28 01:00:33.860434 containerd[1473]: time="2026-04-28T01:00:33.848768117Z" level=info msg="shim disconnected" id=a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d namespace=k8s.io Apr 28 01:00:33.992066 containerd[1473]: time="2026-04-28T01:00:33.861291659Z" level=warning msg="cleaning up after shim disconnected" id=a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d namespace=k8s.io Apr 28 01:00:33.992066 containerd[1473]: time="2026-04-28T01:00:33.861607457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:00:34.692280 systemd[1]: Started sshd@94-10.0.0.11:22-10.0.0.1:58606.service - OpenSSH per-connection server daemon (10.0.0.1:58606). Apr 28 01:00:35.263076 containerd[1473]: time="2026-04-28T01:00:35.255311147Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263 delete" error="exit status 1" namespace=k8s.io Apr 28 01:00:35.263076 containerd[1473]: time="2026-04-28T01:00:35.256263324Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263 namespace=k8s.io Apr 28 01:00:35.421161 kubelet[2526]: E0428 01:00:35.415410 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.264s" Apr 28 01:00:35.812763 containerd[1473]: time="2026-04-28T01:00:35.810966536Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d Apr 28 01:00:35.825496 containerd[1473]: time="2026-04-28T01:00:35.825409819Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d delete" error="exit status 1" namespace=k8s.io Apr 28 01:00:35.830437 containerd[1473]: time="2026-04-28T01:00:35.830238153Z" level=warning msg="failed to clean 
up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d namespace=k8s.io Apr 28 01:00:36.252480 sshd[10541]: Accepted publickey for core from 10.0.0.1 port 58606 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:00:36.277856 kubelet[2526]: I0428 01:00:36.252427 2526 scope.go:117] "RemoveContainer" containerID="aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263" Apr 28 01:00:36.277856 kubelet[2526]: E0428 01:00:36.254035 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:36.277856 kubelet[2526]: E0428 01:00:36.254228 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 01:00:36.486283 sshd[10541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:00:36.807347 systemd-logind[1457]: New session 95 of user core. Apr 28 01:00:36.921072 systemd[1]: Started session-95.scope - Session 95 of User core. Apr 28 01:00:37.664050 kubelet[2526]: I0428 01:00:37.659616 2526 scope.go:117] "RemoveContainer" containerID="f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe" Apr 28 01:00:37.877640 kubelet[2526]: I0428 01:00:37.877096 2526 scope.go:117] "RemoveContainer" containerID="a958cf45f8d91498bf8dbbf27dcea98bd86e0f22a1f9daaf7263bbc0906d857d" Apr 28 01:00:37.883649 kubelet[2526]: I0428 01:00:37.883386 2526 scope.go:117] "RemoveContainer" containerID="aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263" Apr 28 01:00:37.919614 kubelet[2526]: E0428 01:00:37.914746 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:37.922780 kubelet[2526]: E0428 01:00:37.920410 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:37.922780 kubelet[2526]: E0428 01:00:37.921590 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 01:00:37.987581 containerd[1473]: time="2026-04-28T01:00:37.985822284Z" level=info msg="RemoveContainer for \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\"" Apr 28 01:00:38.364522 containerd[1473]: time="2026-04-28T01:00:38.354508922Z" level=info msg="RemoveContainer for \"f9a396c4e783c7a856252e1329b9281410598d88a71d21eb083e8a7b9a46defe\" returns successfully" Apr 28 01:00:38.643851 containerd[1473]: time="2026-04-28T01:00:38.639998075Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:8,}" Apr 28 01:00:39.649750 
containerd[1473]: time="2026-04-28T01:00:39.649279880Z" level=info msg="CreateContainer within sandbox \"670f440219fcc7a0b00ba64a35e5d1ee4e1b4357741788815366a9fc0871c449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:8,} returns container id \"6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138\"" Apr 28 01:00:39.884568 containerd[1473]: time="2026-04-28T01:00:39.882719846Z" level=info msg="StartContainer for \"6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138\"" Apr 28 01:00:40.347680 containerd[1473]: time="2026-04-28T01:00:40.345159132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:00:40.347680 containerd[1473]: time="2026-04-28T01:00:40.345453790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:00:40.347680 containerd[1473]: time="2026-04-28T01:00:40.345466747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:00:40.347680 containerd[1473]: time="2026-04-28T01:00:40.345643936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:00:40.432216 systemd[1]: Started cri-containerd-6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138.scope - libcontainer container 6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138. Apr 28 01:00:40.434616 sshd[10541]: pam_unix(sshd:session): session closed for user core Apr 28 01:00:40.439423 systemd[1]: sshd@94-10.0.0.11:22-10.0.0.1:58606.service: Deactivated successfully. Apr 28 01:00:40.441465 systemd[1]: session-95.scope: Deactivated successfully. Apr 28 01:00:40.441632 systemd[1]: session-95.scope: Consumed 2.183s CPU time. Apr 28 01:00:40.476056 systemd-logind[1457]: Session 95 logged out. Waiting for processes to exit. Apr 28 01:00:40.484516 systemd-logind[1457]: Removed session 95. Apr 28 01:00:40.607178 containerd[1473]: time="2026-04-28T01:00:40.605843599Z" level=info msg="StartContainer for \"6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138\" returns successfully" Apr 28 01:00:42.219143 kubelet[2526]: E0428 01:00:42.208230 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:42.238785 kubelet[2526]: E0428 01:00:42.238683 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:45.922705 systemd[1]: Started sshd@95-10.0.0.11:22-10.0.0.1:55042.service - OpenSSH per-connection server daemon (10.0.0.1:55042). Apr 28 01:00:47.380243 kubelet[2526]: E0428 01:00:47.377755 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:47.795362 sshd[10646]: Accepted publickey for core from 10.0.0.1 port 55042 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:00:47.825211 sshd[10646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:00:48.345419 systemd-logind[1457]: New session 96 of user core. 
Apr 28 01:00:48.866699 systemd[1]: Started session-96.scope - Session 96 of User core. Apr 28 01:00:51.253750 kubelet[2526]: E0428 01:00:51.251306 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.834s" Apr 28 01:00:57.294806 kubelet[2526]: E0428 01:00:57.279096 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.895s" Apr 28 01:01:01.459786 kubelet[2526]: E0428 01:01:01.459072 2526 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 01:01:01.664568 kubelet[2526]: I0428 01:01:01.573497 2526 scope.go:117] "RemoveContainer" containerID="aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263" Apr 28 01:01:03.661642 kubelet[2526]: E0428 01:01:03.653843 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:04.586801 kubelet[2526]: E0428 01:01:04.579750 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.107s" Apr 28 01:01:04.861017 kubelet[2526]: E0428 01:01:04.848834 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:06.045424 containerd[1473]: time="2026-04-28T01:01:06.038677889Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:7,}" Apr 28 01:01:06.392609 kubelet[2526]: E0428 01:01:06.283442 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.644s" Apr 28 01:01:07.311561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484529789.mount: Deactivated successfully. Apr 28 01:01:07.814187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135211431.mount: Deactivated successfully. Apr 28 01:01:07.971463 kubelet[2526]: E0428 01:01:07.967084 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.403s" Apr 28 01:01:08.101588 sshd[10646]: pam_unix(sshd:session): session closed for user core Apr 28 01:01:08.613185 systemd[1]: sshd@95-10.0.0.11:22-10.0.0.1:55042.service: Deactivated successfully. Apr 28 01:01:09.098523 systemd[1]: session-96.scope: Deactivated successfully. Apr 28 01:01:09.139097 systemd[1]: session-96.scope: Consumed 10.648s CPU time. Apr 28 01:01:09.455990 systemd-logind[1457]: Session 96 logged out. Waiting for processes to exit. Apr 28 01:01:09.843824 systemd-logind[1457]: Removed session 96. 
Apr 28 01:01:09.968288 containerd[1473]: time="2026-04-28T01:01:09.967383482Z" level=info msg="CreateContainer within sandbox \"5c78d6efeb1497bc4b241801b76ef5ef29f9c34ed77d9b10eab33b0cd1c147bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:7,} returns container id \"bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b\""
Apr 28 01:01:11.208761 containerd[1473]: time="2026-04-28T01:01:11.208142587Z" level=info msg="StartContainer for \"bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b\""
Apr 28 01:01:13.791269 systemd[1]: Started sshd@96-10.0.0.11:22-10.0.0.1:37738.service - OpenSSH per-connection server daemon (10.0.0.1:37738).
Apr 28 01:01:15.487820 sshd[10706]: Accepted publickey for core from 10.0.0.1 port 37738 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc
Apr 28 01:01:16.300490 sshd[10706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:01:16.889715 systemd-logind[1457]: New session 97 of user core.
Apr 28 01:01:17.100636 systemd[1]: Started session-97.scope - Session 97 of User core.
Apr 28 01:01:17.514232 kubelet[2526]: E0428 01:01:17.513818 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.545s"
Apr 28 01:01:17.519521 containerd[1473]: time="2026-04-28T01:01:17.505687146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 01:01:17.519521 containerd[1473]: time="2026-04-28T01:01:17.513197249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 01:01:17.519521 containerd[1473]: time="2026-04-28T01:01:17.513270825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:01:17.805709 containerd[1473]: time="2026-04-28T01:01:17.791570641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:01:19.661100 kubelet[2526]: E0428 01:01:19.659809 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.054s"
Apr 28 01:01:21.717243 systemd[1]: Started cri-containerd-bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b.scope - libcontainer container bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b.
Apr 28 01:01:22.053218 kubelet[2526]: E0428 01:01:22.044551 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.308s"
Apr 28 01:01:23.596520 kubelet[2526]: E0428 01:01:23.594706 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.416s"
Apr 28 01:01:24.205149 kubelet[2526]: E0428 01:01:24.204844 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:24.685425 containerd[1473]: time="2026-04-28T01:01:24.684531736Z" level=error msg="get state for bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b" error="context deadline exceeded: unknown"
Apr 28 01:01:24.758053 containerd[1473]: time="2026-04-28T01:01:24.747789571Z" level=warning msg="unknown status" status=0
Apr 28 01:01:26.959312 kubelet[2526]: I0428 01:01:26.956638 2526 scope.go:117] "RemoveContainer" containerID="aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263"
Apr 28 01:01:27.104142 containerd[1473]: time="2026-04-28T01:01:27.102612067Z" level=error msg="get state for bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b" error="context deadline exceeded: unknown"
Apr 28 01:01:27.201819 containerd[1473]: time="2026-04-28T01:01:27.116193078Z" level=warning msg="unknown status" status=0
Apr 28 01:01:27.545767 kubelet[2526]: E0428 01:01:27.521666 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.204s"
Apr 28 01:01:29.299969 containerd[1473]: time="2026-04-28T01:01:29.294282965Z" level=info msg="RemoveContainer for \"aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263\""
Apr 28 01:01:29.494556 containerd[1473]: time="2026-04-28T01:01:29.492580592Z" level=info msg="RemoveContainer for \"aed8088b962574918c29f6426c8a54316f74e1dd3561a4460c636738bf90c263\" returns successfully"
Apr 28 01:01:29.557810 kubelet[2526]: E0428 01:01:29.492819 2526 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.952s"
Apr 28 01:01:29.817598 kubelet[2526]: E0428 01:01:29.812457 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:29.852782 containerd[1473]: time="2026-04-28T01:01:29.850812407Z" level=error msg="get state for bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b" error="context deadline exceeded: unknown"
Apr 28 01:01:29.852782 containerd[1473]: time="2026-04-28T01:01:29.851026204Z" level=warning msg="unknown status" status=0
Apr 28 01:01:29.890152 kubelet[2526]: E0428 01:01:29.889545 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:31.603403 sshd[10706]: pam_unix(sshd:session): session closed for user core
Apr 28 01:01:32.214265 systemd[1]: sshd@96-10.0.0.11:22-10.0.0.1:37738.service: Deactivated successfully.
Apr 28 01:01:32.415289 systemd[1]: session-97.scope: Deactivated successfully.
Apr 28 01:01:32.415848 systemd[1]: session-97.scope: Consumed 3.657s CPU time.
Apr 28 01:01:32.523560 systemd[1]: cri-containerd-6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138.scope: Deactivated successfully.
Apr 28 01:01:32.797822 containerd[1473]: time="2026-04-28T01:01:32.790707869Z" level=error msg="get state for bb47141897a7ec9a04de5b4368382ffd774cbd43fa643bc02b4a807ac578b04b" error="context deadline exceeded: unknown"
Apr 28 01:01:32.577812 systemd[1]: cri-containerd-6ad1e4cb7cc9aa3a41151a796060ca7557effadc46041ae9c3b9c9c2ea7a5138.scope: Consumed 10.593s CPU time.
Apr 28 01:01:32.894005 containerd[1473]: time="2026-04-28T01:01:32.820183632Z" level=warning msg="unknown status" status=0
Apr 28 01:01:32.894005 containerd[1473]: time="2026-04-28T01:01:32.831751910Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 28 01:01:32.894005 containerd[1473]: time="2026-04-28T01:01:32.856254788Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 28 01:01:32.894005 containerd[1473]: time="2026-04-28T01:01:32.856438275Z" level=error msg="ttrpc: received message on inactive stream" stream=9
Apr 28 01:01:32.894005 containerd[1473]: time="2026-04-28T01:01:32.856483146Z" level=error msg="ttrpc: received message on inactive stream" stream=11
Apr 28 01:01:32.672286 systemd-logind[1457]: Session 97 logged out. Waiting for processes to exit.
Apr 28 01:01:32.807292 systemd-logind[1457]: Removed session 97.