Apr 14 12:37:22.027727 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 14 12:37:22.027756 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 12:37:22.027770 kernel: BIOS-provided physical RAM map: Apr 14 12:37:22.027777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 14 12:37:22.027784 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 14 12:37:22.027929 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 14 12:37:22.027938 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 14 12:37:22.027946 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 14 12:37:22.027953 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 14 12:37:22.027964 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 14 12:37:22.027972 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 14 12:37:22.027980 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 14 12:37:22.028005 kernel: NX (Execute Disable) protection: active Apr 14 12:37:22.028013 kernel: APIC: Static calls initialized Apr 14 12:37:22.028023 kernel: SMBIOS 2.8 present. Apr 14 12:37:22.028047 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 14 12:37:22.028055 kernel: Hypervisor detected: KVM Apr 14 12:37:22.028064 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 14 12:37:22.028074 kernel: kvm-clock: using sched offset of 7011183007 cycles Apr 14 12:37:22.028083 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 14 12:37:22.028092 kernel: tsc: Detected 2793.438 MHz processor Apr 14 12:37:22.028100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 14 12:37:22.028110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 14 12:37:22.028119 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 14 12:37:22.028131 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 14 12:37:22.028140 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 14 12:37:22.028149 kernel: Using GB pages for direct mapping Apr 14 12:37:22.028157 kernel: ACPI: Early table checksum verification disabled Apr 14 12:37:22.028166 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 14 12:37:22.028174 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028182 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028190 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028198 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 14 12:37:22.028208 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028239 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028248 kernel: 
ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028256 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 12:37:22.028263 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 14 12:37:22.028271 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 14 12:37:22.028279 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 14 12:37:22.028304 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 14 12:37:22.028316 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 14 12:37:22.028324 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 14 12:37:22.028334 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 14 12:37:22.028342 kernel: No NUMA configuration found Apr 14 12:37:22.028350 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 14 12:37:22.028359 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 14 12:37:22.028371 kernel: Zone ranges: Apr 14 12:37:22.028380 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 14 12:37:22.028388 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 14 12:37:22.028397 kernel: Normal empty Apr 14 12:37:22.028407 kernel: Movable zone start for each node Apr 14 12:37:22.028416 kernel: Early memory node ranges Apr 14 12:37:22.028426 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 14 12:37:22.028435 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 14 12:37:22.028444 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 14 12:37:22.028454 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 12:37:22.028465 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 14 12:37:22.028493 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 14 12:37:22.028503 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 14 12:37:22.028525 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 14 12:37:22.028534 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 14 12:37:22.028554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 14 12:37:22.028574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 14 12:37:22.028601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 14 12:37:22.028621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 14 12:37:22.028653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 14 12:37:22.028662 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 14 12:37:22.028672 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 14 12:37:22.028682 kernel: TSC deadline timer available Apr 14 12:37:22.028691 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 14 12:37:22.028701 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 14 12:37:22.028720 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 14 12:37:22.028729 kernel: kvm-guest: setup PV sched yield Apr 14 12:37:22.028761 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 14 12:37:22.028784 kernel: Booting paravirtualized kernel on KVM Apr 14 12:37:22.028823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 14 12:37:22.028829 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 
nr_cpu_ids:4 nr_node_ids:1 Apr 14 12:37:22.028834 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 14 12:37:22.028847 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 14 12:37:22.028863 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 14 12:37:22.028871 kernel: kvm-guest: PV spinlocks enabled Apr 14 12:37:22.028891 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 14 12:37:22.028932 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 12:37:22.028973 kernel: random: crng init done Apr 14 12:37:22.028990 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 14 12:37:22.029003 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 14 12:37:22.029016 kernel: Fallback order for Node 0: 0 Apr 14 12:37:22.029021 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 14 12:37:22.029026 kernel: Policy zone: DMA32 Apr 14 12:37:22.029032 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 14 12:37:22.029037 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved) Apr 14 12:37:22.029044 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 14 12:37:22.029050 kernel: ftrace: allocating 37996 entries in 149 pages Apr 14 12:37:22.029055 kernel: ftrace: allocated 149 pages with 4 groups Apr 14 12:37:22.029060 kernel: Dynamic Preempt: voluntary Apr 14 12:37:22.029065 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 14 12:37:22.029071 kernel: rcu: RCU event tracing is enabled. Apr 14 12:37:22.029076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 14 12:37:22.029082 kernel: Trampoline variant of Tasks RCU enabled. Apr 14 12:37:22.029087 kernel: Rude variant of Tasks RCU enabled. Apr 14 12:37:22.029094 kernel: Tracing variant of Tasks RCU enabled. Apr 14 12:37:22.029099 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 14 12:37:22.029105 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 14 12:37:22.029110 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 14 12:37:22.029124 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 14 12:37:22.029129 kernel: Console: colour VGA+ 80x25 Apr 14 12:37:22.029134 kernel: printk: console [ttyS0] enabled Apr 14 12:37:22.029139 kernel: ACPI: Core revision 20230628 Apr 14 12:37:22.029144 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 14 12:37:22.029152 kernel: APIC: Switch to symmetric I/O mode setup Apr 14 12:37:22.029157 kernel: x2apic enabled Apr 14 12:37:22.029162 kernel: APIC: Switched APIC routing to: physical x2apic Apr 14 12:37:22.029167 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 14 12:37:22.029172 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 14 12:37:22.029177 kernel: kvm-guest: setup PV IPIs Apr 14 12:37:22.029183 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 14 12:37:22.029188 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 12:37:22.029201 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 14 12:37:22.029206 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 14 12:37:22.029230 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 14 12:37:22.029240 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 14 12:37:22.029287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 14 12:37:22.029298 kernel: Spectre V2 : Mitigation: Retpolines Apr 14 12:37:22.029308 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 14 12:37:22.029318 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 14 12:37:22.029331 kernel: RETBleed: Vulnerable Apr 14 12:37:22.029341 kernel: Speculative Store Bypass: Vulnerable Apr 14 12:37:22.029349 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 14 12:37:22.029366 kernel: GDS: Unknown: Dependent on hypervisor status Apr 14 12:37:22.029372 kernel: active return thunk: its_return_thunk Apr 14 12:37:22.029377 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 14 12:37:22.029383 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 14 12:37:22.029389 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 14 12:37:22.029395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 14 12:37:22.029403 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 14 12:37:22.029409 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 14 12:37:22.029414 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 14 12:37:22.029420 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 14 12:37:22.029425 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 14 12:37:22.029431 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 14 12:37:22.029437 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 14 12:37:22.029443 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 14 12:37:22.029448 kernel: Freeing SMP alternatives memory: 32K Apr 14 12:37:22.029456 kernel: pid_max: default: 32768 minimum: 301 Apr 14 12:37:22.029461 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 14 12:37:22.029467 kernel: landlock: Up and running. 
Apr 14 12:37:22.029473 kernel: SELinux: Initializing. Apr 14 12:37:22.029478 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 12:37:22.029484 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 12:37:22.029490 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 14 12:37:22.029503 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 12:37:22.029509 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 12:37:22.029517 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 12:37:22.029523 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 14 12:37:22.029529 kernel: signal: max sigframe size: 3632 Apr 14 12:37:22.029534 kernel: rcu: Hierarchical SRCU implementation. Apr 14 12:37:22.029540 kernel: rcu: Max phase no-delay instances is 400. Apr 14 12:37:22.029546 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 14 12:37:22.029552 kernel: smp: Bringing up secondary CPUs ... Apr 14 12:37:22.029558 kernel: smpboot: x86: Booting SMP configuration: Apr 14 12:37:22.029563 kernel: .... node #0, CPUs: #1 #2 #3 Apr 14 12:37:22.029571 kernel: smp: Brought up 1 node, 4 CPUs Apr 14 12:37:22.029577 kernel: smpboot: Max logical packages: 1 Apr 14 12:37:22.029582 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 14 12:37:22.029588 kernel: devtmpfs: initialized Apr 14 12:37:22.029594 kernel: x86/mm: Memory block size: 128MB Apr 14 12:37:22.029599 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 14 12:37:22.029605 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 14 12:37:22.029611 kernel: pinctrl core: initialized pinctrl subsystem Apr 14 12:37:22.029617 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 14 12:37:22.029624 kernel: audit: initializing netlink subsys (disabled) Apr 14 12:37:22.029630 kernel: audit: type=2000 audit(1776170240.486:1): state=initialized audit_enabled=0 res=1 Apr 14 12:37:22.029635 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 14 12:37:22.029641 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 14 12:37:22.029646 kernel: cpuidle: using governor menu Apr 14 12:37:22.029652 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 14 12:37:22.029658 kernel: dca service started, version 1.12.1 Apr 14 12:37:22.029664 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 14 12:37:22.029669 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 14 12:37:22.029677 kernel: PCI: Using configuration type 1 for base access Apr 14 12:37:22.029682 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 14 12:37:22.029688 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 14 12:37:22.029694 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 14 12:37:22.029700 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 14 12:37:22.029709 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 14 12:37:22.029719 kernel: ACPI: Added _OSI(Module Device) Apr 14 12:37:22.029728 kernel: ACPI: Added _OSI(Processor Device) Apr 14 12:37:22.029738 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 14 12:37:22.029751 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 14 12:37:22.029761 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 14 12:37:22.029771 kernel: ACPI: Interpreter enabled Apr 14 12:37:22.029780 kernel: ACPI: PM: (supports S0 S3 S5) Apr 14 12:37:22.029786 kernel: ACPI: Using IOAPIC for interrupt routing Apr 14 12:37:22.029819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 14 12:37:22.029828 kernel: PCI: Using E820 reservations for host bridge windows Apr 14 12:37:22.029838 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 14 12:37:22.029848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 14 12:37:22.030055 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 14 12:37:22.030167 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 14 12:37:22.030261 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 14 12:37:22.030275 kernel: PCI host bridge to bus 0000:00 Apr 14 12:37:22.030507 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 14 12:37:22.030628 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 14 12:37:22.030696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 14 12:37:22.030752 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 14 12:37:22.030849 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 14 12:37:22.030904 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 14 12:37:22.030958 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 14 12:37:22.031069 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 14 12:37:22.031150 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 14 12:37:22.031301 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 14 12:37:22.031410 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 14 12:37:22.031514 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 14 12:37:22.031576 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 14 12:37:22.031663 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 14 12:37:22.031728 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 14 12:37:22.031851 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 14 12:37:22.031921 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 14 12:37:22.032014 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 14 12:37:22.032083 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 14 12:37:22.032185 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 14 12:37:22.032317 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Apr 14 12:37:22.032440 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 14 12:37:22.032548 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 14 12:37:22.032644 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 14 12:37:22.032708 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 14 12:37:22.032769 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 14 12:37:22.032899 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 14 12:37:22.032964 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 14 12:37:22.033057 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 14 12:37:22.033124 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 14 12:37:22.033186 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 14 12:37:22.033323 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 14 12:37:22.033425 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 14 12:37:22.033439 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 14 12:37:22.033450 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 14 12:37:22.033460 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 14 12:37:22.033471 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 14 12:37:22.033486 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 14 12:37:22.033492 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 14 12:37:22.033498 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 14 12:37:22.033503 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 14 12:37:22.033509 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 14 12:37:22.033515 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 14 12:37:22.033521 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 14 12:37:22.033526 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 14 12:37:22.033532 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 14 12:37:22.033540 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 14 12:37:22.033546 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 14 12:37:22.033551 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 14 12:37:22.033557 kernel: iommu: Default domain type: Translated Apr 14 12:37:22.033563 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 14 12:37:22.033569 kernel: PCI: Using ACPI for IRQ routing Apr 14 12:37:22.033575 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 14 12:37:22.033580 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 14 12:37:22.033586 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 14 12:37:22.033655 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 14 12:37:22.033748 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 14 12:37:22.034444 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 14 12:37:22.034465 kernel: vgaarb: loaded Apr 14 12:37:22.034475 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 14 12:37:22.034486 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 14 12:37:22.034496 kernel: clocksource: Switched to clocksource kvm-clock Apr 14 12:37:22.034514 kernel: VFS: Disk quotas dquot_6.6.0 Apr 14 12:37:22.034527 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Apr 14 12:37:22.034533 kernel: pnp: PnP ACPI init Apr 14 12:37:22.034666 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 14 12:37:22.034676 kernel: pnp: PnP ACPI: found 6 devices Apr 14 12:37:22.034682 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 14 12:37:22.034687 kernel: NET: Registered PF_INET protocol family Apr 14 12:37:22.034693 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 14 12:37:22.034699 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 14 12:37:22.034708 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 14 12:37:22.034714 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 14 12:37:22.034720 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 14 12:37:22.034726 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 14 12:37:22.034731 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 12:37:22.034737 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 12:37:22.034743 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 14 12:37:22.034748 kernel: NET: Registered PF_XDP protocol family Apr 14 12:37:22.034864 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 14 12:37:22.034928 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 14 12:37:22.034982 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 14 12:37:22.035036 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 14 12:37:22.035089 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 14 12:37:22.035143 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 14 12:37:22.035151 kernel: PCI: CLS 0 bytes, default 64 Apr 14 12:37:22.035156 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 14 12:37:22.035162 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 12:37:22.035170 kernel: Initialise system trusted keyrings Apr 14 12:37:22.035178 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 14 12:37:22.035187 kernel: Key type asymmetric registered Apr 14 12:37:22.035197 kernel: Asymmetric key parser 'x509' registered Apr 14 12:37:22.035206 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 14 12:37:22.035236 kernel: io scheduler mq-deadline registered Apr 14 12:37:22.035245 kernel: io scheduler kyber registered Apr 14 12:37:22.035251 kernel: io scheduler bfq registered Apr 14 12:37:22.035257 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 14 12:37:22.035266 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 14 12:37:22.035272 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 14 12:37:22.035278 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 14 12:37:22.035284 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 14 12:37:22.035289 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 14 12:37:22.035296 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 14 12:37:22.035301 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 14 12:37:22.035307 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 14 12:37:22.035412 
kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 14 12:37:22.035423 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 14 12:37:22.035481 kernel: rtc_cmos 00:04: registered as rtc0 Apr 14 12:37:22.035537 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T12:37:21 UTC (1776170241) Apr 14 12:37:22.035595 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 14 12:37:22.035606 kernel: intel_pstate: CPU model not supported Apr 14 12:37:22.035616 kernel: NET: Registered PF_INET6 protocol family Apr 14 12:37:22.035624 kernel: Segment Routing with IPv6 Apr 14 12:37:22.035634 kernel: In-situ OAM (IOAM) with IPv6 Apr 14 12:37:22.035646 kernel: NET: Registered PF_PACKET protocol family Apr 14 12:37:22.035652 kernel: Key type dns_resolver registered Apr 14 12:37:22.035658 kernel: IPI shorthand broadcast: enabled Apr 14 12:37:22.035664 kernel: sched_clock: Marking stable (1392010437, 424259755)->(1958142394, -141872202) Apr 14 12:37:22.035670 kernel: registered taskstats version 1 Apr 14 12:37:22.035678 kernel: Loading compiled-in X.509 certificates Apr 14 12:37:22.035688 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 14 12:37:22.035698 kernel: Key type .fscrypt registered Apr 14 12:37:22.035709 kernel: Key type fscrypt-provisioning registered Apr 14 12:37:22.035721 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 14 12:37:22.035735 kernel: ima: Allocated hash algorithm: sha1 Apr 14 12:37:22.035742 kernel: ima: No architecture policies found Apr 14 12:37:22.035750 kernel: clk: Disabling unused clocks Apr 14 12:37:22.035760 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 14 12:37:22.035770 kernel: Write protecting the kernel read-only data: 36864k Apr 14 12:37:22.035781 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 14 12:37:22.035842 kernel: Run /init as init process Apr 14 12:37:22.035849 kernel: with arguments: Apr 14 12:37:22.035855 kernel: /init Apr 14 12:37:22.035863 kernel: with environment: Apr 14 12:37:22.035869 kernel: HOME=/ Apr 14 12:37:22.035875 kernel: TERM=linux Apr 14 12:37:22.035883 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 12:37:22.035892 systemd[1]: Detected virtualization kvm. Apr 14 12:37:22.035898 systemd[1]: Detected architecture x86-64. Apr 14 12:37:22.035904 systemd[1]: Running in initrd. Apr 14 12:37:22.035912 systemd[1]: No hostname configured, using default hostname. Apr 14 12:37:22.035918 systemd[1]: Hostname set to . Apr 14 12:37:22.035924 systemd[1]: Initializing machine ID from VM UUID. Apr 14 12:37:22.035930 systemd[1]: Queued start job for default target initrd.target. Apr 14 12:37:22.035936 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 12:37:22.035942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 12:37:22.035949 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 14 12:37:22.035955 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 12:37:22.035963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 14 12:37:22.035969 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 14 12:37:22.035986 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 14 12:37:22.035993 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 14 12:37:22.035999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 12:37:22.036007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 12:37:22.036014 systemd[1]: Reached target paths.target - Path Units. Apr 14 12:37:22.036020 systemd[1]: Reached target slices.target - Slice Units. Apr 14 12:37:22.036026 systemd[1]: Reached target swap.target - Swaps. Apr 14 12:37:22.036032 systemd[1]: Reached target timers.target - Timer Units. Apr 14 12:37:22.036038 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 12:37:22.036044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 12:37:22.036050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 14 12:37:22.036058 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 14 12:37:22.036064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 12:37:22.036071 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 12:37:22.036077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 12:37:22.036083 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 12:37:22.036089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 14 12:37:22.036095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 12:37:22.036101 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 14 12:37:22.036107 systemd[1]: Starting systemd-fsck-usr.service... Apr 14 12:37:22.036115 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 12:37:22.036122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 12:37:22.036133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 12:37:22.036143 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 14 12:37:22.036152 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 12:37:22.036665 systemd-journald[194]: Collecting audit messages is disabled. Apr 14 12:37:22.036695 systemd[1]: Finished systemd-fsck-usr.service. Apr 14 12:37:22.036707 systemd-journald[194]: Journal started Apr 14 12:37:22.036723 systemd-journald[194]: Runtime Journal (/run/log/journal/9e17f89d7e8d43efbaeba710caab547c) is 6.0M, max 48.4M, 42.3M free. Apr 14 12:37:22.041349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 12:37:22.029329 systemd-modules-load[195]: Inserted module 'overlay' Apr 14 12:37:22.138912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 14 12:37:22.138942 kernel: Bridge firewalling registered Apr 14 12:37:22.060713 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 14 12:37:22.145864 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 12:37:22.148128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 12:37:22.161409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 12:37:22.166192 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 12:37:22.170453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 12:37:22.176705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 12:37:22.178605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 12:37:22.189387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 12:37:22.193080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 12:37:22.195968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 12:37:22.204949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 12:37:22.208662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 12:37:22.248689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 14 12:37:22.307155 kernel: hrtimer: interrupt took 4083570 ns Apr 14 12:37:22.320322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 12:37:22.427302 dracut-cmdline[230]: dracut-dracut-053 Apr 14 12:37:22.457324 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 12:37:22.557936 systemd-resolved[229]: Positive Trust Anchors: Apr 14 12:37:22.558187 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 12:37:22.558248 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 12:37:22.562686 systemd-resolved[229]: Defaulting to hostname 'linux'. Apr 14 12:37:22.565429 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 12:37:22.573938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 12:37:22.649906 kernel: SCSI subsystem initialized Apr 14 12:37:22.659974 kernel: Loading iSCSI transport class v2.0-870. 
Apr 14 12:37:22.674915 kernel: iscsi: registered transport (tcp) Apr 14 12:37:22.705549 kernel: iscsi: registered transport (qla4xxx) Apr 14 12:37:22.705770 kernel: QLogic iSCSI HBA Driver Apr 14 12:37:22.831510 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 14 12:37:22.852173 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 14 12:37:22.936648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 14 12:37:22.936997 kernel: device-mapper: uevent: version 1.0.3 Apr 14 12:37:22.942949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 14 12:37:23.025452 kernel: raid6: avx512x4 gen() 30354 MB/s Apr 14 12:37:23.043054 kernel: raid6: avx512x2 gen() 22277 MB/s Apr 14 12:37:23.062137 kernel: raid6: avx512x1 gen() 21798 MB/s Apr 14 12:37:23.080473 kernel: raid6: avx2x4 gen() 18030 MB/s Apr 14 12:37:23.098050 kernel: raid6: avx2x2 gen() 17979 MB/s Apr 14 12:37:23.117660 kernel: raid6: avx2x1 gen() 13785 MB/s Apr 14 12:37:23.118163 kernel: raid6: using algorithm avx512x4 gen() 30354 MB/s Apr 14 12:37:23.136454 kernel: raid6: .... xor() 7412 MB/s, rmw enabled Apr 14 12:37:23.136716 kernel: raid6: using avx512x2 recovery algorithm Apr 14 12:37:23.174960 kernel: xor: automatically using best checksumming function avx Apr 14 12:37:23.541947 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 14 12:37:23.599392 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 14 12:37:23.621935 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 12:37:23.650579 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 14 12:37:23.656355 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 12:37:23.664275 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 14 12:37:23.687021 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 14 12:37:23.764034 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 12:37:23.783616 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 12:37:23.944448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 12:37:23.956094 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 14 12:37:23.982408 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 14 12:37:23.987366 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 12:37:24.002451 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 14 12:37:24.002689 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 14 12:37:23.992616 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 12:37:24.009989 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 14 12:37:24.010019 kernel: GPT:9289727 != 19775487 Apr 14 12:37:24.010036 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 14 12:37:24.010051 kernel: GPT:9289727 != 19775487 Apr 14 12:37:24.010092 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 14 12:37:24.000412 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Apr 14 12:37:24.015273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 12:37:24.020041 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 14 12:37:24.042005 kernel: cryptd: max_cpu_qlen set to 1000 Apr 14 12:37:24.046178 kernel: libata version 3.00 loaded. Apr 14 12:37:24.048437 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 14 12:37:24.053578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 12:37:24.058116 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Apr 14 12:37:24.053678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 12:37:24.064152 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (460) Apr 14 12:37:24.064357 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 12:37:24.067340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 12:37:24.067398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 12:37:24.076912 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 12:37:24.086373 kernel: ahci 0000:00:1f.2: version 3.0 Apr 14 12:37:24.086706 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 14 12:37:24.091209 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 14 12:37:24.091398 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 14 12:37:24.090756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 12:37:24.099137 kernel: scsi host0: ahci Apr 14 12:37:24.100807 kernel: AVX2 version of gcm_enc/dec engaged. Apr 14 12:37:24.100960 kernel: scsi host1: ahci Apr 14 12:37:24.103891 kernel: scsi host2: ahci Apr 14 12:37:24.104096 kernel: AES CTR mode by8 optimization enabled Apr 14 12:37:24.108037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 12:37:24.112852 kernel: scsi host3: ahci Apr 14 12:37:24.115900 kernel: scsi host4: ahci Apr 14 12:37:24.120736 kernel: scsi host5: ahci Apr 14 12:37:24.121816 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 Apr 14 12:37:24.121830 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 Apr 14 12:37:24.121838 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 Apr 14 12:37:24.121845 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 Apr 14 12:37:24.121852 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 Apr 14 12:37:24.121858 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 Apr 14 12:37:24.130731 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 14 12:37:24.142961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 14 12:37:24.158153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 14 12:37:24.268507 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 14 12:37:24.271640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 14 12:37:24.286493 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 14 12:37:24.291890 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 12:37:24.301955 disk-uuid[556]: Primary Header is updated. Apr 14 12:37:24.301955 disk-uuid[556]: Secondary Entries is updated. Apr 14 12:37:24.301955 disk-uuid[556]: Secondary Header is updated. Apr 14 12:37:24.313880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 12:37:24.321032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 12:37:24.328840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 12:37:24.330910 kernel: block device autoloading is deprecated and will be removed. Apr 14 12:37:24.335075 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 12:37:24.434842 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 14 12:37:24.440966 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 14 12:37:24.441135 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 14 12:37:24.441151 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 14 12:37:24.441174 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 14 12:37:24.441850 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 14 12:37:24.443827 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 14 12:37:24.445641 kernel: ata3.00: applying bridge limits Apr 14 12:37:24.449952 kernel: ata3.00: configured for UDMA/100 Apr 14 12:37:24.452904 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 14 12:37:24.492989 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 14 12:37:24.493175 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 14 12:37:24.514826 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 14 12:37:25.332830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 12:37:25.332966 disk-uuid[557]: The operation has completed successfully. Apr 14 12:37:25.371704 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 14 12:37:25.372395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 14 12:37:25.425063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 14 12:37:25.445997 sh[599]: Success Apr 14 12:37:25.469418 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 14 12:37:25.531532 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 14 12:37:25.547214 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 14 12:37:25.556264 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 14 12:37:25.573286 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 14 12:37:25.573408 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 14 12:37:25.573419 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 14 12:37:25.578024 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 14 12:37:25.578229 kernel: BTRFS info (device dm-0): using free space tree Apr 14 12:37:25.600058 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 14 12:37:25.602067 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Apr 14 12:37:25.619614 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 14 12:37:25.623086 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 14 12:37:25.643774 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 12:37:25.644006 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 12:37:25.644023 kernel: BTRFS info (device vda6): using free space tree Apr 14 12:37:25.652007 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 12:37:25.720861 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 12:37:25.721269 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 14 12:37:25.736670 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 14 12:37:25.752169 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 14 12:37:25.828503 ignition[694]: Ignition 2.19.0 Apr 14 12:37:25.828512 ignition[694]: Stage: fetch-offline Apr 14 12:37:25.828541 ignition[694]: no configs at "/usr/lib/ignition/base.d" Apr 14 12:37:25.828547 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 12:37:25.828645 ignition[694]: parsed url from cmdline: "" Apr 14 12:37:25.828647 ignition[694]: no config URL provided Apr 14 12:37:25.828651 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Apr 14 12:37:25.828658 ignition[694]: no config at "/usr/lib/ignition/user.ign" Apr 14 12:37:25.828753 ignition[694]: op(1): [started] loading QEMU firmware config module Apr 14 12:37:25.828756 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 14 12:37:25.839670 ignition[694]: op(1): [finished] loading QEMU firmware config module Apr 14 12:37:25.951888 ignition[694]: parsing config with SHA512: fa3fd35136a128514bb8347e8f8e34ae3ed8c648a620d529e995e83462489cad0b31a69fc743b7f1718e441e90e055e37ddb97a814cca949b9575582771b0561 Apr 14 12:37:25.952053 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 12:37:25.960283 unknown[694]: fetched base config from "system" Apr 14 12:37:25.960297 unknown[694]: fetched user config from "qemu" Apr 14 12:37:25.963031 ignition[694]: fetch-offline: fetch-offline passed Apr 14 12:37:25.963131 ignition[694]: Ignition finished successfully Apr 14 12:37:25.978391 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 12:37:25.980702 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 12:37:26.026882 systemd-networkd[788]: lo: Link UP Apr 14 12:37:26.027483 systemd-networkd[788]: lo: Gained carrier Apr 14 12:37:26.035519 systemd-networkd[788]: Enumeration completed Apr 14 12:37:26.036091 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 12:37:26.036418 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 12:37:26.036421 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 12:37:26.038428 systemd-networkd[788]: eth0: Link UP Apr 14 12:37:26.038432 systemd-networkd[788]: eth0: Gained carrier Apr 14 12:37:26.038441 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 14 12:37:26.038612 systemd[1]: Reached target network.target - Network. Apr 14 12:37:26.041525 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 14 12:37:26.062176 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 14 12:37:26.071949 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 12:37:26.085467 ignition[791]: Ignition 2.19.0 Apr 14 12:37:26.085483 ignition[791]: Stage: kargs Apr 14 12:37:26.085738 ignition[791]: no configs at "/usr/lib/ignition/base.d" Apr 14 12:37:26.085750 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 12:37:26.087436 ignition[791]: kargs: kargs passed Apr 14 12:37:26.087481 ignition[791]: Ignition finished successfully Apr 14 12:37:26.094453 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 14 12:37:26.108366 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 14 12:37:26.129408 ignition[800]: Ignition 2.19.0 Apr 14 12:37:26.129429 ignition[800]: Stage: disks Apr 14 12:37:26.129627 ignition[800]: no configs at "/usr/lib/ignition/base.d" Apr 14 12:37:26.129638 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 12:37:26.136937 ignition[800]: disks: disks passed Apr 14 12:37:26.137039 ignition[800]: Ignition finished successfully Apr 14 12:37:26.142339 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 14 12:37:26.143858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 14 12:37:26.148585 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 14 12:37:26.152553 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 12:37:26.156613 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 12:37:26.161429 systemd[1]: Reached target basic.target - Basic System. Apr 14 12:37:26.188477 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 14 12:37:26.239708 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 14 12:37:26.245743 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 14 12:37:26.265977 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 14 12:37:26.420202 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 14 12:37:26.422062 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 14 12:37:26.423338 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 14 12:37:26.447761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 12:37:26.451353 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 14 12:37:26.452571 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 14 12:37:26.466514 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819) Apr 14 12:37:26.467551 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 12:37:26.452655 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 14 12:37:26.534629 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 12:37:26.534668 kernel: BTRFS info (device vda6): using free space tree Apr 14 12:37:26.452730 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 12:37:26.541839 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 12:37:26.556002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 14 12:37:26.564552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 14 12:37:27.453090 systemd-networkd[788]: eth0: Gained IPv6LL Apr 14 12:38:25.841134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 14 12:38:26.026469 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Apr 14 12:38:26.035559 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Apr 14 12:38:26.053389 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Apr 14 12:38:26.062944 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Apr 14 12:38:26.432950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 14 12:38:26.449432 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 14 12:38:26.486216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 14 12:38:26.516073 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 14 12:38:26.522882 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 12:38:26.551324 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 14 12:38:26.601765 ignition[933]: INFO : Ignition 2.19.0 Apr 14 12:38:26.601765 ignition[933]: INFO : Stage: mount Apr 14 12:38:26.606413 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 12:38:26.606413 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 12:38:26.606413 ignition[933]: INFO : mount: mount passed Apr 14 12:38:26.606413 ignition[933]: INFO : Ignition finished successfully Apr 14 12:38:26.612932 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 14 12:38:26.638829 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 14 12:38:26.657569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 12:38:26.716979 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948) Apr 14 12:38:26.720508 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 12:38:26.720583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 12:38:26.720617 kernel: BTRFS info (device vda6): using free space tree Apr 14 12:38:26.732346 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 12:38:26.737380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 14 12:38:26.827777 ignition[965]: INFO : Ignition 2.19.0 Apr 14 12:38:26.827777 ignition[965]: INFO : Stage: files Apr 14 12:38:26.836124 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 12:38:26.836124 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 12:38:26.843714 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Apr 14 12:38:26.851036 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 14 12:38:26.851036 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 14 12:38:26.913444 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 14 12:38:26.919324 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 14 12:38:26.925155 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 14 12:38:26.922218 unknown[965]: wrote ssh authorized keys file for user: core Apr 14 12:38:26.932391 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 12:38:26.932391 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 14 12:38:27.027509 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 14 12:38:27.399251 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 12:38:27.399251 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 14 12:38:27.410493 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 14 12:38:27.965585 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 14 12:38:29.854047 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 14 12:38:29.854047 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 14 12:38:29.861695 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 14 12:38:29.865968 ignition[965]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 14 12:38:29.947694 ignition[965]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 12:38:29.962229 ignition[965]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 12:38:29.965700 ignition[965]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 14 12:38:29.965700 ignition[965]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 14 12:38:29.965700 ignition[965]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 14 12:38:29.965700 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 14 12:38:29.965700 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 14 12:38:29.965700 ignition[965]: INFO : files: files passed Apr 14 12:38:29.965700 ignition[965]: INFO : Ignition finished successfully Apr 14 12:38:29.970222 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 14 12:38:30.039344 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 14 12:38:30.044877 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 14 12:38:30.053846 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 14 12:38:30.053985 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 14 12:38:30.067188 initrd-setup-root-after-ignition[995]: grep: /sysroot/oem/oem-release: No such file or directory Apr 14 12:38:30.074023 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 12:38:30.079978 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 12:38:30.079978 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 14 12:38:30.076909 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 12:38:30.080567 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 14 12:38:30.094475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 14 12:38:30.182269 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 14 12:38:30.183075 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 14 12:38:30.190261 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 14 12:38:30.195121 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 14 12:38:30.195877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 14 12:38:30.215635 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 14 12:38:30.344937 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 12:38:30.381987 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 14 12:38:30.462189 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 14 12:38:30.466728 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 12:38:30.472353 systemd[1]: Stopped target timers.target - Timer Units. Apr 14 12:38:30.475147 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 14 12:38:30.475299 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 12:38:30.479378 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 14 12:38:30.483047 systemd[1]: Stopped target basic.target - Basic System. Apr 14 12:38:30.485505 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 14 12:38:30.489390 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 12:38:30.492326 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 14 12:38:30.493997 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 14 12:38:30.497592 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 12:38:30.502727 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 14 12:38:30.506721 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 14 12:38:30.511261 systemd[1]: Stopped target swap.target - Swaps. Apr 14 12:38:30.514886 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 14 12:38:30.515153 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 14 12:38:30.520932 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 14 12:38:30.521660 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 12:38:30.528682 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 14 12:38:30.530446 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 12:38:30.533853 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 14 12:38:30.534118 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 14 12:38:30.539949 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 14 12:38:30.540162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 12:38:30.540646 systemd[1]: Stopped target paths.target - Path Units. Apr 14 12:38:30.547226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 14 12:38:30.553386 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 12:38:30.556038 systemd[1]: Stopped target slices.target - Slice Units. Apr 14 12:38:30.561341 systemd[1]: Stopped target sockets.target - Socket Units. Apr 14 12:38:30.563898 systemd[1]: iscsid.socket: Deactivated successfully. Apr 14 12:38:30.564042 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 12:38:30.567851 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 14 12:38:30.568595 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 12:38:30.570376 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 14 12:38:30.570501 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 12:38:30.574105 systemd[1]: ignition-files.service: Deactivated successfully. Apr 14 12:38:30.574243 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 14 12:38:30.591334 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 14 12:38:30.595936 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 14 12:38:30.596640 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 14 12:38:30.596850 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 12:38:30.600856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 14 12:38:30.600972 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 12:38:30.609984 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 14 12:38:30.610385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 14 12:38:30.616199 ignition[1022]: INFO : Ignition 2.19.0 Apr 14 12:38:30.616199 ignition[1022]: INFO : Stage: umount Apr 14 12:38:30.618575 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 12:38:30.618575 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 12:38:30.618575 ignition[1022]: INFO : umount: umount passed Apr 14 12:38:30.618575 ignition[1022]: INFO : Ignition finished successfully Apr 14 12:38:30.618531 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 14 12:38:30.618680 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 14 12:38:30.622745 systemd[1]: Stopped target network.target - Network. Apr 14 12:38:30.624262 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 14 12:38:30.624332 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Apr 14 12:38:30.628599 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 14 12:38:30.628697 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 14 12:38:30.629714 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 14 12:38:30.630306 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 14 12:38:30.632722 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 14 12:38:30.633986 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 14 12:38:30.636250 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 14 12:38:30.641776 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 14 12:38:30.643578 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 14 12:38:30.658108 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 14 12:38:30.658330 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 14 12:38:30.660470 systemd-networkd[788]: eth0: DHCPv6 lease lost Apr 14 12:38:30.662086 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 14 12:38:30.662152 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 12:38:30.671167 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 14 12:38:30.671317 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 14 12:38:30.676705 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 14 12:38:30.676754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 14 12:38:30.697971 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 14 12:38:30.701564 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 14 12:38:30.701839 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 12:38:30.705490 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 12:38:30.705558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 12:38:30.706652 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 14 12:38:30.706726 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 14 12:38:30.712498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 12:38:30.717516 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 14 12:38:30.717649 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 14 12:38:30.720872 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 14 12:38:30.720933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 14 12:38:30.730263 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 14 12:38:30.730442 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 12:38:30.732568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 14 12:38:30.732656 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 14 12:38:30.737035 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 14 12:38:30.737109 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 12:38:30.739679 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 14 12:38:30.740270 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 14 12:38:30.747705 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 14 12:38:30.749479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 14 12:38:30.753766 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 12:38:30.753857 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 12:38:30.762225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 14 12:38:30.764248 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 14 12:38:30.764305 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 12:38:30.770460 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 14 12:38:30.770503 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 12:38:30.774288 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 14 12:38:30.774357 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 12:38:30.776961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 12:38:30.777016 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 12:38:30.781750 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 14 12:38:30.781866 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 14 12:38:30.788231 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 14 12:38:30.788322 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 14 12:38:30.791467 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 14 12:38:30.810985 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 14 12:38:30.828553 systemd[1]: Switching root. Apr 14 12:38:30.870705 systemd-journald[194]: Journal stopped Apr 14 12:38:32.594210 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Apr 14 12:38:32.594267 kernel: SELinux: policy capability network_peer_controls=1 Apr 14 12:38:32.594279 kernel: SELinux: policy capability open_perms=1 Apr 14 12:38:32.594290 kernel: SELinux: policy capability extended_socket_class=1 Apr 14 12:38:32.594298 kernel: SELinux: policy capability always_check_network=0 Apr 14 12:38:32.594306 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 14 12:38:32.594319 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 14 12:38:32.594326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 14 12:38:32.594334 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 14 12:38:32.594353 kernel: audit: type=1403 audit(1776170311.065:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 14 12:38:32.594367 systemd[1]: Successfully loaded SELinux policy in 50.643ms. Apr 14 12:38:32.594382 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.919ms. Apr 14 12:38:32.594391 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 12:38:32.594401 systemd[1]: Detected virtualization kvm. Apr 14 12:38:32.594409 systemd[1]: Detected architecture x86-64. 
Apr 14 12:38:32.594417 systemd[1]: Detected first boot. Apr 14 12:38:32.594425 systemd[1]: Initializing machine ID from VM UUID. Apr 14 12:38:32.594434 zram_generator::config[1068]: No configuration found. Apr 14 12:38:32.594454 systemd[1]: Populated /etc with preset unit settings. Apr 14 12:38:32.594462 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 14 12:38:32.594470 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 14 12:38:32.594479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 14 12:38:32.594488 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 14 12:38:32.594496 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 14 12:38:32.594504 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 14 12:38:32.594512 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 14 12:38:32.594530 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 14 12:38:32.594539 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 14 12:38:32.594547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 14 12:38:32.594555 systemd[1]: Created slice user.slice - User and Session Slice. Apr 14 12:38:32.594563 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 12:38:32.594571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 12:38:32.594590 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 14 12:38:32.594599 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 14 12:38:32.594610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 14 12:38:32.594643 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 12:38:32.594652 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 14 12:38:32.594660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 12:38:32.594668 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 14 12:38:32.594685 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 14 12:38:32.594694 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 14 12:38:32.594702 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 14 12:38:32.594723 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 12:38:32.594732 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 12:38:32.594740 systemd[1]: Reached target slices.target - Slice Units. Apr 14 12:38:32.594748 systemd[1]: Reached target swap.target - Swaps. Apr 14 12:38:32.594757 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 14 12:38:32.594765 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 14 12:38:32.594772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 12:38:32.594781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Apr 14 12:38:32.594814 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 12:38:32.594823 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 14 12:38:32.594841 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 14 12:38:32.594859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 14 12:38:32.594867 systemd[1]: Mounting media.mount - External Media Directory... Apr 14 12:38:32.594875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 12:38:32.594884 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 14 12:38:32.594891 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 14 12:38:32.594900 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 14 12:38:32.594908 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 14 12:38:32.594930 systemd[1]: Reached target machines.target - Containers. Apr 14 12:38:32.594938 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 14 12:38:32.594947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 12:38:32.594955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 12:38:32.594963 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 14 12:38:32.594971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 12:38:32.594979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 12:38:32.594987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 12:38:32.594995 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 14 12:38:32.595012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 12:38:32.595021 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 14 12:38:32.595029 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 14 12:38:32.595037 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 14 12:38:32.595045 kernel: fuse: init (API version 7.39) Apr 14 12:38:32.595063 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 14 12:38:32.595071 systemd[1]: Stopped systemd-fsck-usr.service. Apr 14 12:38:32.595079 kernel: loop: module loaded Apr 14 12:38:32.595086 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 12:38:32.595097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 12:38:32.595105 kernel: ACPI: bus type drm_connector registered Apr 14 12:38:32.595112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 14 12:38:32.595120 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 14 12:38:32.595128 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 12:38:32.595136 systemd[1]: verity-setup.service: Deactivated successfully. Apr 14 12:38:32.595159 systemd-journald[1145]: Collecting audit messages is disabled. 
Apr 14 12:38:32.595180 systemd[1]: Stopped verity-setup.service. Apr 14 12:38:32.595189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 12:38:32.595198 systemd-journald[1145]: Journal started Apr 14 12:38:32.595215 systemd-journald[1145]: Runtime Journal (/run/log/journal/9e17f89d7e8d43efbaeba710caab547c) is 6.0M, max 48.4M, 42.3M free. Apr 14 12:38:32.109253 systemd[1]: Queued start job for default target multi-user.target. Apr 14 12:38:32.149611 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 14 12:38:32.152533 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 14 12:38:32.153243 systemd[1]: systemd-journald.service: Consumed 1.018s CPU time. Apr 14 12:38:32.600951 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 12:38:32.602516 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 14 12:38:32.605650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 14 12:38:32.607722 systemd[1]: Mounted media.mount - External Media Directory. Apr 14 12:38:32.609902 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 14 12:38:32.612396 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 14 12:38:32.615335 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 14 12:38:32.618578 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 14 12:38:32.622870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 12:38:32.625851 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 14 12:38:32.626054 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 14 12:38:32.628280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 12:38:32.629379 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 12:38:32.632192 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 12:38:32.632396 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 12:38:32.635113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 12:38:32.635262 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 12:38:32.637217 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 14 12:38:32.637349 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 14 12:38:32.640723 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 12:38:32.640895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 12:38:32.643783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 12:38:32.645696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 14 12:38:32.648381 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 14 12:38:32.663109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 12:38:32.672404 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 14 12:38:32.688832 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 14 12:38:32.696234 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Apr 14 12:38:32.698075 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 14 12:38:32.698123 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 12:38:32.700483 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 14 12:38:32.706998 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 14 12:38:32.711539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 14 12:38:32.714313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 12:38:32.716768 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 14 12:38:32.722357 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 14 12:38:32.724211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 12:38:32.726118 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 14 12:38:32.727909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 12:38:32.728963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 12:38:32.738458 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 14 12:38:32.742982 systemd-journald[1145]: Time spent on flushing to /var/log/journal/9e17f89d7e8d43efbaeba710caab547c is 93.839ms for 956 entries. Apr 14 12:38:32.742982 systemd-journald[1145]: System Journal (/var/log/journal/9e17f89d7e8d43efbaeba710caab547c) is 8.0M, max 195.6M, 187.6M free. Apr 14 12:38:32.854874 systemd-journald[1145]: Received client request to flush runtime journal. Apr 14 12:38:32.858724 kernel: loop0: detected capacity change from 0 to 142488 Apr 14 12:38:32.743011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 12:38:32.758333 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 14 12:38:32.773313 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 14 12:38:32.836174 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 14 12:38:32.838948 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 14 12:38:32.843283 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 14 12:38:32.850302 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 14 12:38:32.865277 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 14 12:38:32.869757 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 14 12:38:32.873470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 12:38:32.880316 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 14 12:38:32.900485 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 14 12:38:32.900522 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. 
Apr 14 12:38:32.900531 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Apr 14 12:38:32.901067 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 14 12:38:32.903873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 14 12:38:32.905851 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 12:38:32.918433 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 14 12:38:32.939848 kernel: loop1: detected capacity change from 0 to 217752 Apr 14 12:38:32.948545 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 14 12:38:32.964380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 12:38:33.041126 kernel: loop2: detected capacity change from 0 to 140768 Apr 14 12:38:33.061041 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 14 12:38:33.061070 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 14 12:38:33.068986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 12:38:33.115058 kernel: loop3: detected capacity change from 0 to 142488 Apr 14 12:38:33.142838 kernel: loop4: detected capacity change from 0 to 217752 Apr 14 12:38:33.157833 kernel: loop5: detected capacity change from 0 to 140768 Apr 14 12:38:33.177237 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 14 12:38:33.177664 (sd-merge)[1211]: Merged extensions into '/usr'. Apr 14 12:38:33.184039 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Apr 14 12:38:33.184199 systemd[1]: Reloading... Apr 14 12:38:33.257848 zram_generator::config[1235]: No configuration found. Apr 14 12:38:33.351082 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 14 12:38:33.403873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 12:38:33.461403 systemd[1]: Reloading finished in 276 ms. Apr 14 12:38:33.544853 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 14 12:38:33.548165 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 14 12:38:33.552713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 14 12:38:33.607129 systemd[1]: Starting ensure-sysext.service... Apr 14 12:38:33.613703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 12:38:33.623139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 12:38:33.631981 systemd[1]: Reloading requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... Apr 14 12:38:33.632013 systemd[1]: Reloading... Apr 14 12:38:33.669029 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 12:38:33.670757 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 12:38:33.673882 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 12:38:33.674200 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. 
Apr 14 12:38:33.674267 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Apr 14 12:38:33.679379 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 12:38:33.679404 systemd-tmpfiles[1276]: Skipping /boot Apr 14 12:38:33.691167 systemd-udevd[1277]: Using default interface naming scheme 'v255'. Apr 14 12:38:33.696684 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 12:38:33.696895 systemd-tmpfiles[1276]: Skipping /boot Apr 14 12:38:33.723012 zram_generator::config[1305]: No configuration found. Apr 14 12:38:33.787530 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1314) Apr 14 12:38:33.941871 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 14 12:38:33.949371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 12:38:33.952368 kernel: ACPI: button: Power Button [PWRF] Apr 14 12:38:33.976840 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 14 12:38:33.997877 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 14 12:38:34.002881 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 14 12:38:34.003339 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 14 12:38:34.007829 kernel: mousedev: PS/2 mouse device common for all mice Apr 14 12:38:34.031481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 12:38:34.036048 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 14 12:38:34.036548 systemd[1]: Reloading finished in 404 ms. Apr 14 12:38:34.152236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 12:38:34.186404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 12:38:34.211371 systemd[1]: Finished ensure-sysext.service. Apr 14 12:38:34.285663 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 14 12:38:34.294099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 12:38:34.313238 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 12:38:34.319712 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 14 12:38:34.322087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 12:38:34.324547 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 14 12:38:34.329551 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 12:38:34.336666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 12:38:34.347966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 12:38:34.348840 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 12:38:34.354752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 12:38:34.360883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 14 12:38:34.367612 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 14 12:38:34.377489 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 14 12:38:34.382153 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 12:38:34.389003 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 12:38:34.405492 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 14 12:38:34.418316 augenrules[1400]: No rules Apr 14 12:38:34.431428 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 14 12:38:34.436172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 12:38:34.437754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 12:38:34.438606 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 12:38:34.441225 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 14 12:38:34.443909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 12:38:34.444090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 12:38:34.447414 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 12:38:34.447589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 12:38:34.461058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 12:38:34.462154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 12:38:34.466311 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 12:38:34.466506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 12:38:34.470178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 14 12:38:34.490765 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 14 12:38:34.545497 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 14 12:38:34.549537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 12:38:34.565885 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 14 12:38:34.569988 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 12:38:34.572688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 12:38:34.575951 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 14 12:38:34.583936 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 14 12:38:34.585164 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 12:38:34.585496 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 12:38:34.591680 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 14 12:38:34.605946 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 14 12:38:34.629238 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 14 12:38:34.675319 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 14 12:38:34.747119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 12:38:34.762189 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 14 12:38:34.762341 systemd-resolved[1394]: Positive Trust Anchors: Apr 14 12:38:34.762354 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 12:38:34.762391 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 12:38:34.766247 systemd[1]: Reached target time-set.target - System Time Set. Apr 14 12:38:34.773433 systemd-resolved[1394]: Defaulting to hostname 'linux'. Apr 14 12:38:34.777146 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 12:38:34.778308 systemd-networkd[1392]: lo: Link UP Apr 14 12:38:34.778336 systemd-networkd[1392]: lo: Gained carrier Apr 14 12:38:34.779923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 12:38:34.781636 systemd-networkd[1392]: Enumeration completed Apr 14 12:38:34.782576 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 12:38:34.782913 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 12:38:34.782916 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 12:38:34.783702 systemd-networkd[1392]: eth0: Link UP Apr 14 12:38:34.783709 systemd-networkd[1392]: eth0: Gained carrier Apr 14 12:38:34.783719 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 12:38:34.784540 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 14 12:38:34.789974 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 14 12:38:34.794738 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 14 12:38:34.800577 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 14 12:38:34.806901 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 14 12:38:34.806990 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 12:38:34.811841 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 14 12:38:34.811897 systemd[1]: Reached target paths.target - Path Units. 
Apr 14 12:38:34.811932 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Apr 14 12:38:34.813752 systemd[1]: Reached target timers.target - Timer Units. Apr 14 12:38:35.519998 systemd-resolved[1394]: Clock change detected. Flushing caches. Apr 14 12:38:35.520055 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 14 12:38:35.520101 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2026-04-14 12:38:35.519170 UTC. Apr 14 12:38:35.520143 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 14 12:38:35.523878 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 14 12:38:35.559831 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 14 12:38:35.564528 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 12:38:35.570978 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 14 12:38:35.616056 systemd[1]: Reached target network.target - Network. Apr 14 12:38:35.617991 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 12:38:35.619821 systemd[1]: Reached target basic.target - Basic System. Apr 14 12:38:35.622787 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 14 12:38:35.622898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 14 12:38:35.642854 systemd[1]: Starting containerd.service - containerd container runtime... Apr 14 12:38:35.645921 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 14 12:38:35.653007 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 14 12:38:35.656427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 14 12:38:35.659721 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 14 12:38:35.663478 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 14 12:38:35.665081 jq[1439]: false Apr 14 12:38:35.665757 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 14 12:38:35.671581 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 14 12:38:35.680799 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 14 12:38:35.691028 extend-filesystems[1440]: Found loop3 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found loop4 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found loop5 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found sr0 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda1 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda2 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda3 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found usr Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda4 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda6 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda7 Apr 14 12:38:35.697869 extend-filesystems[1440]: Found vda9 Apr 14 12:38:35.697869 extend-filesystems[1440]: Checking size of /dev/vda9 Apr 14 12:38:35.759809 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1315) Apr 14 12:38:35.759890 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 14 12:38:35.693657 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 14 12:38:35.697608 dbus-daemon[1438]: [system] SELinux support is enabled Apr 14 12:38:35.765240 extend-filesystems[1440]: Resized partition /dev/vda9 Apr 14 12:38:35.699196 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 14 12:38:35.770892 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Apr 14 12:38:35.702150 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 14 12:38:35.857920 update_engine[1457]: I20260414 12:38:35.764825 1457 main.cc:92] Flatcar Update Engine starting Apr 14 12:38:35.857920 update_engine[1457]: I20260414 12:38:35.809700 1457 update_check_scheduler.cc:74] Next update check in 7m13s Apr 14 12:38:35.702679 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 12:38:35.714836 systemd[1]: Starting update-engine.service - Update Engine... Apr 14 12:38:35.718816 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 14 12:38:35.860209 jq[1459]: true Apr 14 12:38:35.725059 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 14 12:38:35.730931 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 14 12:38:35.734474 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 14 12:38:35.735112 systemd[1]: motdgen.service: Deactivated successfully. Apr 14 12:38:35.735249 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 14 12:38:35.744574 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 14 12:38:35.744767 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 14 12:38:35.838252 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Apr 14 12:38:35.838418 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 14 12:38:35.840029 systemd-logind[1450]: New seat seat0. 
Apr 14 12:38:35.856338 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 14 12:38:35.860110 systemd[1]: Started systemd-logind.service - User Login Management. Apr 14 12:38:35.865652 jq[1469]: true Apr 14 12:38:35.880777 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 14 12:38:35.886331 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 14 12:38:35.886373 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 14 12:38:35.896736 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 14 12:38:35.891966 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 12:38:35.892051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 12:38:35.897255 systemd[1]: Started update-engine.service - Update Engine. Apr 14 12:38:35.903678 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 14 12:38:35.903678 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 14 12:38:35.903678 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 14 12:38:35.923359 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Apr 14 12:38:35.914235 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 14 12:38:35.924935 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 12:38:35.914430 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 14 12:38:35.931412 tar[1463]: linux-amd64/LICENSE Apr 14 12:38:35.931412 tar[1463]: linux-amd64/helm Apr 14 12:38:35.945948 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 12:38:35.984435 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 12:38:36.002124 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 12:38:36.004034 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Apr 14 12:38:36.008527 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 12:38:36.012383 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 12:38:36.017160 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 12:38:36.019553 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 12:38:36.025245 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 12:38:36.032577 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 12:38:36.085343 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 12:38:36.131688 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 12:38:36.144478 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 12:38:36.148061 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 14 12:38:36.178624 containerd[1466]: time="2026-04-14T12:38:36.178391407Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 12:38:36.216056 containerd[1466]: time="2026-04-14T12:38:36.215795657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.222496052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.222549403Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.222573374Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.222895479Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.222922427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.222998190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.223017816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.223230952Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.223250274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.223266687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224281 containerd[1466]: time="2026-04-14T12:38:36.223279945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224772 containerd[1466]: time="2026-04-14T12:38:36.223372791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224772 containerd[1466]: time="2026-04-14T12:38:36.223712132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224772 containerd[1466]: time="2026-04-14T12:38:36.223849168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 12:38:36.224772 containerd[1466]: time="2026-04-14T12:38:36.223868993Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 12:38:36.224772 containerd[1466]: time="2026-04-14T12:38:36.223965391Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 12:38:36.224772 containerd[1466]: time="2026-04-14T12:38:36.224094831Z" level=info msg="metadata content store policy set" policy=shared Apr 14 12:38:36.231947 containerd[1466]: time="2026-04-14T12:38:36.231659729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 12:38:36.231947 containerd[1466]: time="2026-04-14T12:38:36.231819035Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 12:38:36.231947 containerd[1466]: time="2026-04-14T12:38:36.231844499Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 12:38:36.231947 containerd[1466]: time="2026-04-14T12:38:36.231863671Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 12:38:36.231947 containerd[1466]: time="2026-04-14T12:38:36.231881747Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 12:38:36.232565 containerd[1466]: time="2026-04-14T12:38:36.232253332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 12:38:36.232650 containerd[1466]: time="2026-04-14T12:38:36.232619240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 14 12:38:36.232854 containerd[1466]: time="2026-04-14T12:38:36.232785368Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 12:38:36.232854 containerd[1466]: time="2026-04-14T12:38:36.232823326Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 12:38:36.232854 containerd[1466]: time="2026-04-14T12:38:36.232839628Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 12:38:36.232942 containerd[1466]: time="2026-04-14T12:38:36.232855977Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.232942 containerd[1466]: time="2026-04-14T12:38:36.232871101Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.232942 containerd[1466]: time="2026-04-14T12:38:36.232886142Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.232942 containerd[1466]: time="2026-04-14T12:38:36.232903752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.232942 containerd[1466]: time="2026-04-14T12:38:36.232921164Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 14 12:38:36.232942 containerd[1466]: time="2026-04-14T12:38:36.232937859Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.232952428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.232965740Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.232987910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.233003655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.233025785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.233042951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.233057949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233083 containerd[1466]: time="2026-04-14T12:38:36.233072396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233085539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233100479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233115373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233131572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233145322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233158299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233192466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233211077Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233235439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233248943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 14 12:38:36.233275 containerd[1466]: time="2026-04-14T12:38:36.233262592Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233325257Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233347555Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233360836Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233373592Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233386432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233408227Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233460328Z" level=info msg="NRI interface is disabled by configuration." Apr 14 12:38:36.233548 containerd[1466]: time="2026-04-14T12:38:36.233476168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 14 12:38:36.233952 containerd[1466]: time="2026-04-14T12:38:36.233811555Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 12:38:36.233952 containerd[1466]: time="2026-04-14T12:38:36.233922800Z" level=info msg="Connect containerd service" Apr 14 12:38:36.234152 containerd[1466]: time="2026-04-14T12:38:36.233962605Z" level=info msg="using legacy CRI server" Apr 14 12:38:36.234152 containerd[1466]: time="2026-04-14T12:38:36.233970834Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 12:38:36.234152 containerd[1466]: time="2026-04-14T12:38:36.234071911Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 12:38:36.237473 containerd[1466]: time="2026-04-14T12:38:36.237115732Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 12:38:36.237473 containerd[1466]: time="2026-04-14T12:38:36.237366203Z" level=info msg="Start subscribing containerd event" Apr 14 12:38:36.237473 containerd[1466]: time="2026-04-14T12:38:36.237417308Z" level=info msg="Start recovering state" Apr 14 12:38:36.237675 containerd[1466]: time="2026-04-14T12:38:36.237497713Z" level=info msg="Start event monitor" Apr 14 12:38:36.237675 containerd[1466]: time="2026-04-14T12:38:36.237507371Z" level=info msg="Start snapshots syncer" Apr 14 12:38:36.237675 containerd[1466]: time="2026-04-14T12:38:36.237515423Z" level=info msg="Start cni network conf syncer for default" Apr 14 12:38:36.237675 containerd[1466]: time="2026-04-14T12:38:36.237521115Z" level=info msg="Start streaming server" Apr 14 12:38:36.238276 containerd[1466]: time="2026-04-14T12:38:36.238239796Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 12:38:36.238361 containerd[1466]: time="2026-04-14T12:38:36.238335461Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 12:38:36.238436 containerd[1466]: time="2026-04-14T12:38:36.238411933Z" level=info msg="containerd successfully booted in 0.062063s" Apr 14 12:38:36.240479 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 12:38:36.927423 tar[1463]: linux-amd64/README.md Apr 14 12:38:36.969648 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 12:38:37.338433 systemd-networkd[1392]: eth0: Gained IPv6LL Apr 14 12:38:37.354680 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 12:38:37.432652 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 12:38:37.464235 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Apr 14 12:38:37.480926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:38:37.484063 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 12:38:37.524234 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 12:38:37.524416 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 12:38:37.534042 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 12:38:37.543162 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 12:38:37.669993 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 12:38:37.704783 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:59810.service - OpenSSH per-connection server daemon (10.0.0.1:59810). Apr 14 12:38:37.802910 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 59810 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:37.804417 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:37.819926 systemd-logind[1450]: New session 1 of user core. Apr 14 12:38:37.822318 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 12:38:37.841292 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 12:38:37.888683 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 12:38:37.906555 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 12:38:37.941008 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 12:38:38.375923 systemd[1552]: Queued start job for default target default.target. Apr 14 12:38:38.391861 systemd[1552]: Created slice app.slice - User Application Slice. Apr 14 12:38:38.392011 systemd[1552]: Reached target paths.target - Paths. Apr 14 12:38:38.392030 systemd[1552]: Reached target timers.target - Timers. Apr 14 12:38:38.403004 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 12:38:38.446027 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 12:38:38.446180 systemd[1552]: Reached target sockets.target - Sockets. Apr 14 12:38:38.446203 systemd[1552]: Reached target basic.target - Basic System. Apr 14 12:38:38.446237 systemd[1552]: Reached target default.target - Main User Target. Apr 14 12:38:38.446260 systemd[1552]: Startup finished in 479ms. Apr 14 12:38:38.446617 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 12:38:38.468803 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 12:38:38.593109 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:59814.service - OpenSSH per-connection server daemon (10.0.0.1:59814). Apr 14 12:38:38.651792 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 59814 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:38.658897 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:38.686529 systemd-logind[1450]: New session 2 of user core. Apr 14 12:38:38.736315 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 12:38:38.800949 sshd[1563]: pam_unix(sshd:session): session closed for user core Apr 14 12:38:38.809812 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:59814.service: Deactivated successfully. Apr 14 12:38:38.811042 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 14 12:38:38.814389 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Apr 14 12:38:38.820878 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:59828.service - OpenSSH per-connection server daemon (10.0.0.1:59828). Apr 14 12:38:38.824449 systemd-logind[1450]: Removed session 2. Apr 14 12:38:38.874872 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 59828 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:38.878241 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:38.889688 systemd-logind[1450]: New session 3 of user core. Apr 14 12:38:38.906212 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 12:38:38.984616 sshd[1570]: pam_unix(sshd:session): session closed for user core Apr 14 12:38:38.988823 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:59828.service: Deactivated successfully. Apr 14 12:38:38.990061 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 12:38:38.990607 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Apr 14 12:38:38.991385 systemd-logind[1450]: Removed session 3. Apr 14 12:38:39.135523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:38:39.139723 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 12:38:39.142524 systemd[1]: Startup finished in 1.553s (kernel) + 1min 9.297s (initrd) + 7.421s (userspace) = 1min 18.271s. Apr 14 12:38:39.143692 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:38:40.007888 kubelet[1581]: E0414 12:38:40.005379 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:38:40.012008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:38:40.012143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 12:38:40.012453 systemd[1]: kubelet.service: Consumed 1.439s CPU time. Apr 14 12:38:49.115178 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:46570.service - OpenSSH per-connection server daemon (10.0.0.1:46570). Apr 14 12:38:49.354374 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 46570 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:49.359728 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:49.500728 systemd-logind[1450]: New session 4 of user core. Apr 14 12:38:49.522468 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 12:38:49.690559 sshd[1594]: pam_unix(sshd:session): session closed for user core Apr 14 12:38:49.712097 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:46570.service: Deactivated successfully. Apr 14 12:38:49.714030 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 12:38:49.721394 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Apr 14 12:38:49.753238 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:36608.service - OpenSSH per-connection server daemon (10.0.0.1:36608). Apr 14 12:38:49.756582 systemd-logind[1450]: Removed session 4. 
Apr 14 12:38:49.956383 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 36608 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:49.965055 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:49.991140 systemd-logind[1450]: New session 5 of user core. Apr 14 12:38:50.018187 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 12:38:50.026859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 12:38:50.048363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:38:50.124454 sshd[1601]: pam_unix(sshd:session): session closed for user core Apr 14 12:38:50.130671 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:36608.service: Deactivated successfully. Apr 14 12:38:50.132184 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 12:38:50.140139 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Apr 14 12:38:50.163264 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:36622.service - OpenSSH per-connection server daemon (10.0.0.1:36622). Apr 14 12:38:50.169728 systemd-logind[1450]: Removed session 5. Apr 14 12:38:50.336443 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 36622 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:50.343916 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:50.365424 systemd-logind[1450]: New session 6 of user core. Apr 14 12:38:50.390577 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 12:38:50.536109 sshd[1611]: pam_unix(sshd:session): session closed for user core Apr 14 12:38:50.539793 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:36622.service: Deactivated successfully. Apr 14 12:38:50.542716 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 12:38:50.550575 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Apr 14 12:38:50.563698 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:36630.service - OpenSSH per-connection server daemon (10.0.0.1:36630). Apr 14 12:38:50.567696 systemd-logind[1450]: Removed session 6. Apr 14 12:38:50.576266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:38:50.628496 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:38:50.656606 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 36630 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:38:50.657085 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:38:50.667508 systemd-logind[1450]: New session 7 of user core. Apr 14 12:38:50.693849 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 12:38:50.811546 kubelet[1624]: E0414 12:38:50.811472 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:38:50.820323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:38:50.820466 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 14 12:38:50.845655 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 12:38:50.845994 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 12:38:55.432667 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 12:38:55.433518 (dockerd)[1653]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 12:38:58.239401 dockerd[1653]: time="2026-04-14T12:38:58.238946445Z" level=info msg="Starting up" Apr 14 12:38:58.958959 systemd[1]: var-lib-docker-metacopy\x2dcheck383731379-merged.mount: Deactivated successfully. Apr 14 12:38:59.018503 dockerd[1653]: time="2026-04-14T12:38:59.017455194Z" level=info msg="Loading containers: start." Apr 14 12:38:59.470422 kernel: Initializing XFRM netlink socket Apr 14 12:38:59.790820 systemd-networkd[1392]: docker0: Link UP Apr 14 12:38:59.836166 dockerd[1653]: time="2026-04-14T12:38:59.834299067Z" level=info msg="Loading containers: done." Apr 14 12:38:59.971166 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2526241012-merged.mount: Deactivated successfully. Apr 14 12:38:59.978141 dockerd[1653]: time="2026-04-14T12:38:59.977996503Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 12:38:59.978504 dockerd[1653]: time="2026-04-14T12:38:59.978467099Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 12:38:59.980941 dockerd[1653]: time="2026-04-14T12:38:59.980258710Z" level=info msg="Daemon has completed initialization" Apr 14 12:39:00.255867 dockerd[1653]: time="2026-04-14T12:39:00.255282780Z" level=info msg="API listen on /run/docker.sock" Apr 14 12:39:00.263796 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 12:39:01.053445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 14 12:39:01.075921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:01.805732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:01.809906 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:39:02.354277 kubelet[1806]: E0414 12:39:02.353703 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:39:02.416100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:39:02.416286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 12:39:03.734926 containerd[1466]: time="2026-04-14T12:39:03.734506285Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\"" Apr 14 12:39:05.045464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969689834.mount: Deactivated successfully. 
Apr 14 12:39:11.398412 containerd[1466]: time="2026-04-14T12:39:11.397063108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:11.399836 containerd[1466]: time="2026-04-14T12:39:11.399766664Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=27569134" Apr 14 12:39:11.401486 containerd[1466]: time="2026-04-14T12:39:11.401428312Z" level=info msg="ImageCreate event name:\"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:11.420160 containerd[1466]: time="2026-04-14T12:39:11.419785781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:11.424505 containerd[1466]: time="2026-04-14T12:39:11.422564976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"27566295\" in 7.687398123s" Apr 14 12:39:11.424505 containerd[1466]: time="2026-04-14T12:39:11.423403442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\"" Apr 14 12:39:11.431809 containerd[1466]: time="2026-04-14T12:39:11.431527456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\"" Apr 14 12:39:12.582064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 14 12:39:12.933177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:13.993001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:14.022553 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:39:14.509803 kubelet[1887]: E0414 12:39:14.509436 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:39:14.514805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:39:14.515007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 12:39:14.518068 systemd[1]: kubelet.service: Consumed 1.234s CPU time. 
Apr 14 12:39:15.411484 containerd[1466]: time="2026-04-14T12:39:15.408842903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:15.423255 containerd[1466]: time="2026-04-14T12:39:15.418121270Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=21449527" Apr 14 12:39:15.433334 containerd[1466]: time="2026-04-14T12:39:15.432130693Z" level=info msg="ImageCreate event name:\"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:15.552063 containerd[1466]: time="2026-04-14T12:39:15.550717515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:15.571158 containerd[1466]: time="2026-04-14T12:39:15.570574433Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"23014443\" in 4.138826154s" Apr 14 12:39:15.571158 containerd[1466]: time="2026-04-14T12:39:15.570871234Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\"" Apr 14 12:39:15.616948 containerd[1466]: time="2026-04-14T12:39:15.615481701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\"" Apr 14 12:39:19.219096 containerd[1466]: time="2026-04-14T12:39:19.218252501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:19.226893 containerd[1466]: time="2026-04-14T12:39:19.224567928Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=15548358" Apr 14 12:39:19.239452 containerd[1466]: time="2026-04-14T12:39:19.230837201Z" level=info msg="ImageCreate event name:\"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:19.324440 containerd[1466]: time="2026-04-14T12:39:19.322029550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:19.330169 containerd[1466]: time="2026-04-14T12:39:19.329796230Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"17113292\" in 3.713034107s" Apr 14 12:39:19.330169 containerd[1466]: time="2026-04-14T12:39:19.329934412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\"" Apr 14 12:39:19.333104 
containerd[1466]: time="2026-04-14T12:39:19.332790544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\"" Apr 14 12:39:20.923422 update_engine[1457]: I20260414 12:39:20.922412 1457 update_attempter.cc:509] Updating boot flags... Apr 14 12:39:21.391279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1910) Apr 14 12:39:21.624906 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1909) Apr 14 12:39:24.556985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 14 12:39:24.576283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:25.346801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277857572.mount: Deactivated successfully. Apr 14 12:39:25.364170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:25.379169 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:39:25.718535 kubelet[1925]: E0414 12:39:25.715988 1925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:39:25.727407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:39:25.727953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 12:39:26.917019 containerd[1466]: time="2026-04-14T12:39:26.916526614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:26.921191 containerd[1466]: time="2026-04-14T12:39:26.921062828Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=25685215" Apr 14 12:39:26.924151 containerd[1466]: time="2026-04-14T12:39:26.922700869Z" level=info msg="ImageCreate event name:\"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:26.943411 containerd[1466]: time="2026-04-14T12:39:26.943014595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:26.943411 containerd[1466]: time="2026-04-14T12:39:26.943659192Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"25684340\" in 7.610702402s" Apr 14 12:39:26.943411 containerd[1466]: time="2026-04-14T12:39:26.943688833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\"" Apr 14 12:39:26.945520 containerd[1466]: time="2026-04-14T12:39:26.945460997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 14 12:39:28.540878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619033904.mount: Deactivated 
successfully. Apr 14 12:39:35.802836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 14 12:39:35.859840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:37.312998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:37.410290 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:39:38.147480 kubelet[2005]: E0414 12:39:38.146765 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:39:38.243739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:39:38.244322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 12:39:38.244882 systemd[1]: kubelet.service: Consumed 1.581s CPU time. Apr 14 12:39:39.846299 containerd[1466]: time="2026-04-14T12:39:39.845371285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:39.849387 containerd[1466]: time="2026-04-14T12:39:39.849271858Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 14 12:39:39.854794 containerd[1466]: time="2026-04-14T12:39:39.854506299Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:39.869362 containerd[1466]: time="2026-04-14T12:39:39.868780804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:39.897326 containerd[1466]: time="2026-04-14T12:39:39.896939044Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 12.95143567s" Apr 14 12:39:39.897326 containerd[1466]: time="2026-04-14T12:39:39.897125129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 14 12:39:39.899709 containerd[1466]: time="2026-04-14T12:39:39.899671543Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 14 12:39:41.607107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519179965.mount: Deactivated successfully. 
Apr 14 12:39:41.627399 containerd[1466]: time="2026-04-14T12:39:41.625878293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:41.629117 containerd[1466]: time="2026-04-14T12:39:41.628991979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 14 12:39:41.633942 containerd[1466]: time="2026-04-14T12:39:41.633441590Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:41.655415 containerd[1466]: time="2026-04-14T12:39:41.654013983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:41.660776 containerd[1466]: time="2026-04-14T12:39:41.660260050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.760536108s" Apr 14 12:39:41.660776 containerd[1466]: time="2026-04-14T12:39:41.660614555Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 14 12:39:41.744755 containerd[1466]: time="2026-04-14T12:39:41.744318895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 14 12:39:43.838430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972569058.mount: Deactivated successfully. Apr 14 12:39:48.335848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 14 12:39:48.365411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:48.861829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 12:39:48.886941 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 12:39:49.119937 containerd[1466]: time="2026-04-14T12:39:49.119212969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:49.124816 containerd[1466]: time="2026-04-14T12:39:49.124426131Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643431" Apr 14 12:39:49.126471 containerd[1466]: time="2026-04-14T12:39:49.126437282Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:49.129976 containerd[1466]: time="2026-04-14T12:39:49.129266549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:39:49.135949 containerd[1466]: time="2026-04-14T12:39:49.132722477Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 7.388198401s" Apr 14 12:39:49.135949 containerd[1466]: time="2026-04-14T12:39:49.132811237Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 14 12:39:49.137704 kubelet[2083]: E0414 12:39:49.137485 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 12:39:49.143748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 12:39:49.143955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 12:39:56.528278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:56.686219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:57.043903 systemd[1]: Reloading requested from client PID 2128 ('systemctl') (unit session-7.scope)... Apr 14 12:39:57.043998 systemd[1]: Reloading... Apr 14 12:39:57.411214 zram_generator::config[2167]: No configuration found. Apr 14 12:39:58.291078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 12:39:58.883902 systemd[1]: Reloading finished in 1833 ms. Apr 14 12:39:59.242434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:59.355509 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 12:39:59.397074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:39:59.404808 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 14 12:39:59.409363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:39:59.439202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:40:00.703954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:40:00.724338 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 12:40:01.208342 kubelet[2223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 12:40:01.765500 kubelet[2223]: I0414 12:40:01.763018 2223 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 14 12:40:01.765500 kubelet[2223]: I0414 12:40:01.765449 2223 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 12:40:01.765500 kubelet[2223]: I0414 12:40:01.768489 2223 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 14 12:40:01.824159 kubelet[2223]: I0414 12:40:01.768713 2223 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 12:40:01.824159 kubelet[2223]: I0414 12:40:01.770791 2223 server.go:951] "Client rotation is on, will bootstrap in background" Apr 14 12:40:02.033236 kubelet[2223]: E0414 12:40:02.030978 2223 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 12:40:02.049699 kubelet[2223]: I0414 12:40:02.048744 2223 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 12:40:02.073333 kubelet[2223]: E0414 12:40:02.071358 2223 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 12:40:02.077829 kubelet[2223]: I0414 12:40:02.077506 2223 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 14 12:40:02.120536 kubelet[2223]: I0414 12:40:02.120415 2223 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 14 12:40:02.122323 kubelet[2223]: I0414 12:40:02.122203 2223 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 12:40:02.122638 kubelet[2223]: I0414 12:40:02.122296 2223 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 12:40:02.122638 kubelet[2223]: I0414 12:40:02.122571 2223 topology_manager.go:143] "Creating topology manager with none policy" Apr 14 12:40:02.122638 kubelet[2223]: I0414 12:40:02.122638 2223 container_manager_linux.go:308] "Creating device plugin manager" Apr 14 12:40:02.125573 kubelet[2223]: I0414 12:40:02.125361 2223 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 14 12:40:02.143861 kubelet[2223]: I0414 12:40:02.143490 2223 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 14 12:40:02.239304 kubelet[2223]: I0414 12:40:02.238155 2223 kubelet.go:482] "Attempting to sync node with API server" Apr 14 12:40:02.239304 kubelet[2223]: I0414 12:40:02.238402 2223 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 12:40:02.239304 kubelet[2223]: I0414 12:40:02.239263 2223 kubelet.go:394] "Adding apiserver pod source" Apr 14 12:40:02.239304 kubelet[2223]: I0414 12:40:02.239347 2223 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 12:40:02.244400 kubelet[2223]: I0414 12:40:02.244201 2223 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 12:40:02.251059 kubelet[2223]: I0414 12:40:02.250686 2223 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 12:40:02.251059 kubelet[2223]: I0414 12:40:02.250866 2223 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 14 12:40:02.251059 
kubelet[2223]: W0414 12:40:02.251054 2223 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 12:40:02.280261 kubelet[2223]: I0414 12:40:02.278960 2223 server.go:1257] "Started kubelet" Apr 14 12:40:02.281094 kubelet[2223]: I0414 12:40:02.280331 2223 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 12:40:02.281094 kubelet[2223]: I0414 12:40:02.280837 2223 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 14 12:40:02.281689 kubelet[2223]: I0414 12:40:02.281656 2223 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 12:40:02.281791 kubelet[2223]: I0414 12:40:02.281747 2223 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 12:40:02.291181 kubelet[2223]: I0414 12:40:02.287392 2223 server.go:317] "Adding debug handlers to kubelet server" Apr 14 12:40:02.292471 kubelet[2223]: I0414 12:40:02.292408 2223 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 14 12:40:02.295712 kubelet[2223]: I0414 12:40:02.293330 2223 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 12:40:02.296401 kubelet[2223]: E0414 12:40:02.296373 2223 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 12:40:02.296866 kubelet[2223]: I0414 12:40:02.296796 2223 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 14 12:40:02.296963 kubelet[2223]: E0414 12:40:02.296880 2223 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Apr 14 12:40:02.296963 kubelet[2223]: I0414 12:40:02.296891 2223 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 14 12:40:02.297243 kubelet[2223]: I0414 12:40:02.297003 2223 reconciler.go:29] "Reconciler: start to sync state" Apr 14 12:40:02.301781 kubelet[2223]: I0414 12:40:02.301515 2223 factory.go:223] Registration of the systemd container factory successfully Apr 14 12:40:02.301781 kubelet[2223]: I0414 12:40:02.301785 2223 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 12:40:02.304884 kubelet[2223]: E0414 12:40:02.301632 2223 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a639920062713e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:40:02.278699326 +0000 UTC m=+1.525416090,LastTimestamp:2026-04-14 12:40:02.278699326 +0000 UTC m=+1.525416090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:40:02.306885 kubelet[2223]: I0414 12:40:02.306693 2223 
factory.go:223] Registration of the containerd container factory successfully Apr 14 12:40:02.442811 kubelet[2223]: E0414 12:40:02.440578 2223 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 12:40:02.458328 kubelet[2223]: I0414 12:40:02.458060 2223 cpu_manager.go:225] "Starting" policy="none" Apr 14 12:40:02.458328 kubelet[2223]: I0414 12:40:02.458258 2223 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 14 12:40:02.458328 kubelet[2223]: I0414 12:40:02.458375 2223 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 14 12:40:02.475877 kubelet[2223]: I0414 12:40:02.475684 2223 policy_none.go:50] "Start" Apr 14 12:40:02.475877 kubelet[2223]: I0414 12:40:02.475796 2223 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 14 12:40:02.475877 kubelet[2223]: I0414 12:40:02.475842 2223 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 14 12:40:02.486183 kubelet[2223]: I0414 12:40:02.485924 2223 policy_none.go:44] "Start" Apr 14 12:40:02.497875 kubelet[2223]: E0414 12:40:02.497789 2223 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Apr 14 12:40:02.508732 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 14 12:40:02.551137 kubelet[2223]: E0414 12:40:02.548529 2223 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 12:40:02.551137 kubelet[2223]: I0414 12:40:02.548472 2223 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 14 12:40:02.553934 kubelet[2223]: I0414 12:40:02.553848 2223 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 14 12:40:02.554213 kubelet[2223]: I0414 12:40:02.553951 2223 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 14 12:40:02.554213 kubelet[2223]: I0414 12:40:02.554025 2223 kubelet.go:2501] "Starting kubelet main sync loop" Apr 14 12:40:02.554213 kubelet[2223]: E0414 12:40:02.554124 2223 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 12:40:02.565358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 14 12:40:02.642657 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 14 12:40:02.656111 kubelet[2223]: E0414 12:40:02.654766 2223 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 12:40:02.656111 kubelet[2223]: E0414 12:40:02.654760 2223 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 12:40:02.676551 kubelet[2223]: E0414 12:40:02.676429 2223 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 12:40:02.676956 kubelet[2223]: I0414 12:40:02.676942 2223 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 14 12:40:02.677109 kubelet[2223]: I0414 12:40:02.676960 2223 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 12:40:02.678668 kubelet[2223]: I0414 12:40:02.677314 2223 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 14 12:40:02.679323 kubelet[2223]: E0414 12:40:02.679304 2223 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 12:40:02.679435 kubelet[2223]: E0414 12:40:02.679425 2223 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 12:40:02.819450 kubelet[2223]: I0414 12:40:02.818798 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:02.819851 kubelet[2223]: E0414 12:40:02.819817 2223 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 14 12:40:02.910718 kubelet[2223]: E0414 12:40:02.910487 2223 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Apr 14 12:40:03.049087 kubelet[2223]: I0414 12:40:03.048336 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed5e991544c38f12435d82988fd12fee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed5e991544c38f12435d82988fd12fee\") " pod="kube-system/kube-apiserver-localhost" Apr 14 12:40:03.053126 kubelet[2223]: I0414 12:40:03.050611 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed5e991544c38f12435d82988fd12fee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed5e991544c38f12435d82988fd12fee\") " pod="kube-system/kube-apiserver-localhost" Apr 14 12:40:03.053126 kubelet[2223]: I0414 12:40:03.050747 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed5e991544c38f12435d82988fd12fee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed5e991544c38f12435d82988fd12fee\") " pod="kube-system/kube-apiserver-localhost" Apr 14 12:40:03.060602 kubelet[2223]: I0414 12:40:03.059491 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:03.061582 kubelet[2223]: E0414 12:40:03.061527 2223 kubelet_node_status.go:106] "Unable to register node with API server" 
err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 14 12:40:03.102799 systemd[1]: Created slice kubepods-burstable-poded5e991544c38f12435d82988fd12fee.slice - libcontainer container kubepods-burstable-poded5e991544c38f12435d82988fd12fee.slice. Apr 14 12:40:03.156456 kubelet[2223]: I0414 12:40:03.151740 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:40:03.156456 kubelet[2223]: I0414 12:40:03.151942 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:40:03.156456 kubelet[2223]: I0414 12:40:03.151958 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:40:03.156456 kubelet[2223]: I0414 12:40:03.151972 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:40:03.156456 kubelet[2223]: I0414 12:40:03.152034 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3566c1d7ed03bb3c60facf009a5678dd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3566c1d7ed03bb3c60facf009a5678dd\") " pod="kube-system/kube-scheduler-localhost" Apr 14 12:40:03.159525 kubelet[2223]: I0414 12:40:03.152083 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:40:03.159525 kubelet[2223]: E0414 12:40:03.152476 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:03.251630 kubelet[2223]: E0414 12:40:03.250419 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:03.265876 containerd[1466]: time="2026-04-14T12:40:03.265153034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed5e991544c38f12435d82988fd12fee,Namespace:kube-system,Attempt:0,}" Apr 14 12:40:03.278353 systemd[1]: Created slice kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice - libcontainer 
container kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice. Apr 14 12:40:03.308646 kubelet[2223]: E0414 12:40:03.308482 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:03.310201 systemd[1]: Created slice kubepods-burstable-pod3566c1d7ed03bb3c60facf009a5678dd.slice - libcontainer container kubepods-burstable-pod3566c1d7ed03bb3c60facf009a5678dd.slice. Apr 14 12:40:03.318827 kubelet[2223]: E0414 12:40:03.318571 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:03.318827 kubelet[2223]: E0414 12:40:03.318673 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:03.326934 containerd[1466]: time="2026-04-14T12:40:03.326270902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:bd70d524e6bc561f2082b467706799ed,Namespace:kube-system,Attempt:0,}" Apr 14 12:40:03.331894 kubelet[2223]: E0414 12:40:03.330167 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:03.340809 containerd[1466]: time="2026-04-14T12:40:03.340564296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3566c1d7ed03bb3c60facf009a5678dd,Namespace:kube-system,Attempt:0,}" Apr 14 12:40:03.532182 kubelet[2223]: I0414 12:40:03.530659 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:03.533225 kubelet[2223]: E0414 12:40:03.533140 2223 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 14 12:40:03.713429 kubelet[2223]: E0414 12:40:03.713170 2223 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Apr 14 12:40:04.080120 kubelet[2223]: E0414 12:40:04.079943 2223 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 12:40:04.358840 kubelet[2223]: I0414 12:40:04.356907 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:04.410467 kubelet[2223]: E0414 12:40:04.359060 2223 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 14 12:40:04.806790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094521244.mount: Deactivated successfully. 
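The lease retries above back off on a doubling schedule: the "Failed to ensure lease exists, will retry" entries move from 200ms to 400ms, 800ms, and 1.6s while 10.0.0.43:6443 refuses connections, and later reach 3.2s and 6.4s. Below is a minimal Go sketch of that doubling-backoff pattern, assuming a hypothetical ensureLease call and an arbitrary 7s cap (the cap is not taken from the log); it illustrates the retry cadence only, not the kubelet's actual lease controller.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the call that creates or renews the node lease
// against the API server; here it always fails, mirroring the "connection
// refused" errors in the log. Hypothetical, for illustration only.
func ensureLease() error {
	return errors.New("dial tcp 10.0.0.43:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // cap is an assumption, not taken from the log

	for attempt := 1; attempt <= 6; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2 // doubling: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s in the log
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
	fmt.Println("giving up after 6 attempts")
}
```

Run as-is it prints six failed attempts with the same interval progression the log shows, then stops instead of retrying indefinitely.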
Apr 14 12:40:04.991852 containerd[1466]: time="2026-04-14T12:40:04.991454771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 12:40:05.002343 containerd[1466]: time="2026-04-14T12:40:05.002199591Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 12:40:05.006796 containerd[1466]: time="2026-04-14T12:40:05.006390724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 12:40:05.010735 containerd[1466]: time="2026-04-14T12:40:05.009968533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 12:40:05.019471 containerd[1466]: time="2026-04-14T12:40:05.019057695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 12:40:05.026841 containerd[1466]: time="2026-04-14T12:40:05.026513326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 12:40:05.026841 containerd[1466]: time="2026-04-14T12:40:05.026567446Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 12:40:05.041527 containerd[1466]: time="2026-04-14T12:40:05.041218489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 12:40:05.042456 containerd[1466]: time="2026-04-14T12:40:05.042391395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.715170161s" Apr 14 12:40:05.132403 containerd[1466]: time="2026-04-14T12:40:05.068084961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.726574694s" Apr 14 12:40:05.138521 containerd[1466]: time="2026-04-14T12:40:05.138444997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.873219013s" Apr 14 12:40:05.340844 kubelet[2223]: E0414 12:40:05.339159 2223 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="3.2s" Apr 14 
12:40:06.041300 containerd[1466]: time="2026-04-14T12:40:06.037659315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:40:06.041300 containerd[1466]: time="2026-04-14T12:40:06.037889450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:40:06.041300 containerd[1466]: time="2026-04-14T12:40:06.037899791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:40:06.041300 containerd[1466]: time="2026-04-14T12:40:06.038073017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.044890805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.044990007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.045003334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.040799998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.044402927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.044416859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:40:06.055814 containerd[1466]: time="2026-04-14T12:40:06.044829057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:40:06.074360 containerd[1466]: time="2026-04-14T12:40:06.059285599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:40:06.074402 kubelet[2223]: I0414 12:40:06.058557 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:06.074402 kubelet[2223]: E0414 12:40:06.071010 2223 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 14 12:40:06.343225 systemd[1]: Started cri-containerd-44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7.scope - libcontainer container 44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7. Apr 14 12:40:06.416096 systemd[1]: Started cri-containerd-e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca.scope - libcontainer container e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca. Apr 14 12:40:06.424866 systemd[1]: Started cri-containerd-e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe.scope - libcontainer container e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe. 
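The entries just below trace the CRI call order for the static control-plane pods: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer reports success. Here is a rough Go sketch of that sequence against a CRI runtime service, assuming containerd's default socket path and a placeholder image reference (the real kube-apiserver image tag is not shown in this log); it illustrates the shape of the flow, not what the kubelet actually sends.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Containerd's default CRI endpoint; the log shows the kubelet using
	// containerd v1.7.21 (the crio.sock probe above fails with "no such file").
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// 1. RunPodSandbox: matches the "RunPodSandbox for &PodSandboxMetadata{...}
	//    returns sandbox id" entries for the static pods.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-localhost",
			Namespace: "kube-system",
			Uid:       "ed5e991544c38f12435d82988fd12fee",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox. The image reference is a
	//    placeholder; the actual image tag does not appear in this log.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:placeholder"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: corresponds to the "StartContainer ... returns
	//    successfully" entries that follow.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```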
Apr 14 12:40:06.733142 containerd[1466]: time="2026-04-14T12:40:06.672738130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed5e991544c38f12435d82988fd12fee,Namespace:kube-system,Attempt:0,} returns sandbox id \"44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7\"" Apr 14 12:40:06.733142 containerd[1466]: time="2026-04-14T12:40:06.672982426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3566c1d7ed03bb3c60facf009a5678dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\"" Apr 14 12:40:06.750093 containerd[1466]: time="2026-04-14T12:40:06.749787459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:bd70d524e6bc561f2082b467706799ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\"" Apr 14 12:40:06.751882 kubelet[2223]: E0414 12:40:06.751839 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:06.752040 kubelet[2223]: E0414 12:40:06.751978 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:06.752832 kubelet[2223]: E0414 12:40:06.752768 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:06.774787 containerd[1466]: time="2026-04-14T12:40:06.769483716Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 12:40:06.781446 containerd[1466]: time="2026-04-14T12:40:06.781207081Z" level=info msg="CreateContainer within sandbox \"44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 12:40:06.783736 containerd[1466]: time="2026-04-14T12:40:06.783682309Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 12:40:06.871335 containerd[1466]: time="2026-04-14T12:40:06.871138180Z" level=info msg="CreateContainer within sandbox \"44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\"" Apr 14 12:40:06.876968 containerd[1466]: time="2026-04-14T12:40:06.876527914Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86\"" Apr 14 12:40:06.877462 containerd[1466]: time="2026-04-14T12:40:06.877420772Z" level=info msg="StartContainer for \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\"" Apr 14 12:40:06.877721 containerd[1466]: time="2026-04-14T12:40:06.877702507Z" level=info msg="StartContainer for \"dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86\"" Apr 14 12:40:06.881875 
containerd[1466]: time="2026-04-14T12:40:06.881713734Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29\"" Apr 14 12:40:06.888126 containerd[1466]: time="2026-04-14T12:40:06.887516922Z" level=info msg="StartContainer for \"e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29\"" Apr 14 12:40:06.970384 systemd[1]: Started cri-containerd-a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695.scope - libcontainer container a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695. Apr 14 12:40:07.026654 systemd[1]: Started cri-containerd-dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86.scope - libcontainer container dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86. Apr 14 12:40:07.057035 systemd[1]: Started cri-containerd-e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29.scope - libcontainer container e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29. Apr 14 12:40:07.439509 containerd[1466]: time="2026-04-14T12:40:07.439236777Z" level=info msg="StartContainer for \"dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86\" returns successfully" Apr 14 12:40:07.573352 containerd[1466]: time="2026-04-14T12:40:07.573133496Z" level=info msg="StartContainer for \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" returns successfully" Apr 14 12:40:07.579483 containerd[1466]: time="2026-04-14T12:40:07.578066406Z" level=info msg="StartContainer for \"e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29\" returns successfully" Apr 14 12:40:07.979760 kubelet[2223]: E0414 12:40:07.979718 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:07.979760 kubelet[2223]: E0414 12:40:07.979827 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:07.980235 kubelet[2223]: E0414 12:40:07.979934 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:07.980235 kubelet[2223]: E0414 12:40:07.979945 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:07.984663 kubelet[2223]: E0414 12:40:07.984526 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:07.985404 kubelet[2223]: E0414 12:40:07.985391 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:09.024950 kubelet[2223]: E0414 12:40:09.024799 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:09.024950 kubelet[2223]: E0414 12:40:09.024899 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Apr 14 12:40:09.024950 kubelet[2223]: E0414 12:40:09.024960 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:09.025743 kubelet[2223]: E0414 12:40:09.025070 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:09.027923 kubelet[2223]: E0414 12:40:09.027060 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:09.028847 kubelet[2223]: E0414 12:40:09.028826 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:09.284293 kubelet[2223]: I0414 12:40:09.283177 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:10.118744 kubelet[2223]: E0414 12:40:10.118462 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:10.127641 kubelet[2223]: E0414 12:40:10.119044 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:10.127641 kubelet[2223]: E0414 12:40:10.119165 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:10.127641 kubelet[2223]: E0414 12:40:10.119223 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:11.147976 kubelet[2223]: E0414 12:40:11.147798 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:11.147976 kubelet[2223]: E0414 12:40:11.148110 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:12.745775 kubelet[2223]: E0414 12:40:12.744947 2223 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 12:40:13.139446 kubelet[2223]: E0414 12:40:13.139272 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:13.139446 kubelet[2223]: E0414 12:40:13.139472 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:17.491868 kubelet[2223]: E0414 12:40:17.488695 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:17.537358 kubelet[2223]: E0414 12:40:17.536791 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 
12:40:18.247416 kubelet[2223]: E0414 12:40:18.243217 2223 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 12:40:18.479328 kubelet[2223]: E0414 12:40:18.479169 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:18.479778 kubelet[2223]: E0414 12:40:18.479542 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:18.556816 kubelet[2223]: E0414 12:40:18.544435 2223 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 14 12:40:19.290234 kubelet[2223]: E0414 12:40:19.289075 2223 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 14 12:40:22.414458 kubelet[2223]: I0414 12:40:22.413434 2223 apiserver.go:52] "Watching apiserver" Apr 14 12:40:22.655649 kubelet[2223]: E0414 12:40:22.652027 2223 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a639920062713e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:40:02.278699326 +0000 UTC m=+1.525416090,LastTimestamp:2026-04-14 12:40:02.278699326 +0000 UTC m=+1.525416090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:40:22.852207 kubelet[2223]: E0414 12:40:22.844112 2223 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 12:40:23.119576 kubelet[2223]: I0414 12:40:23.103023 2223 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 14 12:40:23.247041 kubelet[2223]: E0414 12:40:23.246887 2223 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 14 12:40:23.620816 kubelet[2223]: E0414 12:40:23.620650 2223 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 12:40:23.635628 kubelet[2223]: E0414 12:40:23.631778 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:23.916931 kubelet[2223]: E0414 12:40:23.855790 2223 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.18a639920acc1ebf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:40:02.453397183 +0000 UTC m=+1.700113946,LastTimestamp:2026-04-14 12:40:02.453397183 +0000 UTC m=+1.700113946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:40:24.344174 kubelet[2223]: E0414 12:40:24.343656 2223 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 14 12:40:25.114382 kubelet[2223]: E0414 12:40:25.112744 2223 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 12:40:25.297061 kubelet[2223]: E0414 12:40:25.296777 2223 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 14 12:40:25.747051 kubelet[2223]: I0414 12:40:25.746879 2223 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:40:25.927572 kubelet[2223]: I0414 12:40:25.915729 2223 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 14 12:40:26.021835 kubelet[2223]: I0414 12:40:25.999651 2223 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 12:40:27.316543 kubelet[2223]: E0414 12:40:27.312371 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:27.652520 kubelet[2223]: I0414 12:40:27.324859 2223 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 12:40:30.583158 kubelet[2223]: E0414 12:40:30.582860 2223 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.01s" Apr 14 12:40:31.628733 kubelet[2223]: E0414 12:40:31.624667 2223 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.019s" Apr 14 12:40:34.004134 kubelet[2223]: I0414 12:40:34.003617 2223 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:40:34.147309 kubelet[2223]: E0414 12:40:34.147101 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:34.912445 kubelet[2223]: E0414 12:40:34.912050 2223 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:40:43.306770 kubelet[2223]: I0414 12:40:43.305173 2223 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=17.304532981 podStartE2EDuration="17.304532981s" podCreationTimestamp="2026-04-14 12:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 12:40:41.859985281 +0000 UTC m=+41.106702049" watchObservedRunningTime="2026-04-14 12:40:43.304532981 +0000 UTC m=+42.551249751" Apr 14 12:40:43.347958 kubelet[2223]: I0414 12:40:43.307812 2223 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=11.307793474 podStartE2EDuration="11.307793474s" podCreationTimestamp="2026-04-14 12:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 12:40:43.30417954 +0000 UTC m=+42.550896311" watchObservedRunningTime="2026-04-14 12:40:43.307793474 +0000 UTC m=+42.554510237" Apr 14 12:41:07.510901 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-7.scope)... Apr 14 12:41:07.510948 systemd[1]: Reloading... Apr 14 12:41:08.514276 zram_generator::config[2563]: No configuration found. Apr 14 12:41:09.347323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 12:41:10.232390 systemd[1]: Reloading finished in 2721 ms. Apr 14 12:41:10.518887 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:41:10.571340 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 12:41:10.572472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:41:10.572786 systemd[1]: kubelet.service: Consumed 29.601s CPU time, 137.0M memory peak, 0B memory swap peak. Apr 14 12:41:10.697291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 12:41:11.831680 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 12:41:11.832782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 12:41:12.104994 kubelet[2611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 12:41:12.209121 kubelet[2611]: I0414 12:41:12.208535 2611 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 14 12:41:12.209121 kubelet[2611]: I0414 12:41:12.208947 2611 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 12:41:12.209121 kubelet[2611]: I0414 12:41:12.209014 2611 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 14 12:41:12.209121 kubelet[2611]: I0414 12:41:12.209019 2611 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 14 12:41:12.209121 kubelet[2611]: I0414 12:41:12.209308 2611 server.go:951] "Client rotation is on, will bootstrap in background" Apr 14 12:41:12.212311 kubelet[2611]: I0414 12:41:12.210924 2611 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 12:41:12.216936 kubelet[2611]: I0414 12:41:12.216680 2611 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 12:41:12.344889 kubelet[2611]: E0414 12:41:12.344613 2611 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 12:41:12.344889 kubelet[2611]: I0414 12:41:12.344776 2611 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 14 12:41:12.409436 kubelet[2611]: I0414 12:41:12.403881 2611 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 14 12:41:12.409735 kubelet[2611]: I0414 12:41:12.409466 2611 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 12:41:12.409856 kubelet[2611]: I0414 12:41:12.409573 2611 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 12:41:12.409856 kubelet[2611]: I0414 12:41:12.409776 2611 topology_manager.go:143] "Creating topology manager with none policy" Apr 14 12:41:12.409856 kubelet[2611]: I0414 12:41:12.409785 2611 container_manager_linux.go:308] "Creating device plugin manager" Apr 14 12:41:12.409856 kubelet[2611]: I0414 12:41:12.409812 2611 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 14 12:41:12.423435 kubelet[2611]: I0414 12:41:12.410245 2611 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 14 12:41:12.423435 kubelet[2611]: I0414 12:41:12.410486 2611 kubelet.go:482] "Attempting to sync 
node with API server" Apr 14 12:41:12.423435 kubelet[2611]: I0414 12:41:12.410497 2611 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 12:41:12.423435 kubelet[2611]: I0414 12:41:12.416580 2611 kubelet.go:394] "Adding apiserver pod source" Apr 14 12:41:12.423435 kubelet[2611]: I0414 12:41:12.421523 2611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 12:41:12.473338 kubelet[2611]: I0414 12:41:12.458432 2611 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 12:41:12.582477 kubelet[2611]: I0414 12:41:12.580127 2611 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 12:41:12.582477 kubelet[2611]: I0414 12:41:12.580351 2611 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 14 12:41:12.873447 kubelet[2611]: I0414 12:41:12.771915 2611 server.go:1257] "Started kubelet" Apr 14 12:41:12.899268 kubelet[2611]: I0414 12:41:12.898953 2611 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 12:41:13.026794 kubelet[2611]: I0414 12:41:13.025125 2611 server.go:317] "Adding debug handlers to kubelet server" Apr 14 12:41:13.051650 kubelet[2611]: I0414 12:41:12.951443 2611 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 12:41:13.051650 kubelet[2611]: I0414 12:41:13.047244 2611 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 14 12:41:13.054360 kubelet[2611]: I0414 12:41:13.054143 2611 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 12:41:13.057688 kubelet[2611]: I0414 12:41:13.051571 2611 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 14 12:41:13.075184 kubelet[2611]: I0414 12:41:13.074947 2611 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 14 12:41:13.075184 kubelet[2611]: I0414 12:41:13.075190 2611 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 14 12:41:13.075184 kubelet[2611]: E0414 12:41:13.079804 2611 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 12:41:13.075184 kubelet[2611]: I0414 12:41:13.079878 2611 reconciler.go:29] "Reconciler: start to sync state" Apr 14 12:41:13.096566 kubelet[2611]: I0414 12:41:13.096017 2611 factory.go:223] Registration of the systemd container factory successfully Apr 14 12:41:13.100099 kubelet[2611]: I0414 12:41:13.099228 2611 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 12:41:13.113193 kubelet[2611]: I0414 12:41:13.112488 2611 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 12:41:13.284759 kubelet[2611]: I0414 12:41:13.284371 2611 factory.go:223] Registration of the containerd container factory successfully Apr 14 12:41:13.286797 kubelet[2611]: E0414 12:41:13.285071 2611 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 12:41:13.428675 kubelet[2611]: I0414 12:41:13.428443 2611 apiserver.go:52] "Watching apiserver" Apr 14 12:41:14.000704 kubelet[2611]: I0414 12:41:14.000543 2611 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 14 12:41:14.042049 kubelet[2611]: I0414 12:41:14.041864 2611 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 14 12:41:14.042049 kubelet[2611]: I0414 12:41:14.041965 2611 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 14 12:41:14.042049 kubelet[2611]: I0414 12:41:14.042051 2611 kubelet.go:2501] "Starting kubelet main sync loop" Apr 14 12:41:14.042539 kubelet[2611]: E0414 12:41:14.042157 2611 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 12:41:14.211401 kubelet[2611]: E0414 12:41:14.211143 2611 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 12:41:14.423353 kubelet[2611]: E0414 12:41:14.421045 2611 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 12:41:14.826478 kubelet[2611]: E0414 12:41:14.822924 2611 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 12:41:15.151342 kubelet[2611]: I0414 12:41:15.142335 2611 cpu_manager.go:225] "Starting" policy="none" Apr 14 12:41:15.151342 kubelet[2611]: I0414 12:41:15.142444 2611 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 14 12:41:15.151342 kubelet[2611]: I0414 12:41:15.142560 2611 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 14 12:41:15.215458 kubelet[2611]: I0414 12:41:15.214795 2611 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 14 12:41:15.221761 kubelet[2611]: I0414 12:41:15.216678 2611 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 14 12:41:15.221761 kubelet[2611]: I0414 12:41:15.216887 2611 policy_none.go:50] "Start" Apr 14 12:41:15.221761 kubelet[2611]: I0414 12:41:15.216957 2611 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 14 12:41:15.221761 kubelet[2611]: I0414 12:41:15.217035 2611 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 14 12:41:15.221761 kubelet[2611]: I0414 12:41:15.217772 2611 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 14 12:41:15.221761 kubelet[2611]: I0414 12:41:15.217789 2611 policy_none.go:44] "Start" Apr 14 12:41:15.658308 kubelet[2611]: E0414 12:41:15.646829 2611 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 12:41:15.665902 kubelet[2611]: E0414 12:41:15.659031 2611 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 12:41:15.667273 kubelet[2611]: I0414 12:41:15.667183 2611 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 14 12:41:15.667273 kubelet[2611]: I0414 12:41:15.667227 2611 container_log_manager.go:146] "Initializing container log rotate 
workers" workers=1 monitorPeriod="10s" Apr 14 12:41:15.744508 kubelet[2611]: I0414 12:41:15.719319 2611 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 14 12:41:16.117203 kubelet[2611]: E0414 12:41:16.112299 2611 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 12:41:16.565144 kubelet[2611]: I0414 12:41:16.560923 2611 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 14 12:41:17.079365 kubelet[2611]: I0414 12:41:17.074758 2611 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 14 12:41:17.096048 kubelet[2611]: I0414 12:41:17.095807 2611 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 14 12:41:17.314207 kubelet[2611]: I0414 12:41:17.311530 2611 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:17.319312 kubelet[2611]: I0414 12:41:17.317960 2611 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 12:41:17.319312 kubelet[2611]: I0414 12:41:17.314413 2611 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 14 12:41:17.347257 kubelet[2611]: I0414 12:41:17.346678 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3566c1d7ed03bb3c60facf009a5678dd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3566c1d7ed03bb3c60facf009a5678dd\") " pod="kube-system/kube-scheduler-localhost" Apr 14 12:41:17.347549 kubelet[2611]: I0414 12:41:17.347535 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed5e991544c38f12435d82988fd12fee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed5e991544c38f12435d82988fd12fee\") " pod="kube-system/kube-apiserver-localhost" Apr 14 12:41:17.347634 kubelet[2611]: I0414 12:41:17.347622 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed5e991544c38f12435d82988fd12fee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed5e991544c38f12435d82988fd12fee\") " pod="kube-system/kube-apiserver-localhost" Apr 14 12:41:17.347678 kubelet[2611]: I0414 12:41:17.347670 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed5e991544c38f12435d82988fd12fee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed5e991544c38f12435d82988fd12fee\") " pod="kube-system/kube-apiserver-localhost" Apr 14 12:41:17.347972 kubelet[2611]: I0414 12:41:17.347961 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:17.348179 kubelet[2611]: I0414 12:41:17.348089 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:17.348179 kubelet[2611]: I0414 12:41:17.348106 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:17.348179 kubelet[2611]: I0414 12:41:17.348119 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:17.348179 kubelet[2611]: I0414 12:41:17.348132 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd70d524e6bc561f2082b467706799ed-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"bd70d524e6bc561f2082b467706799ed\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:17.699043 kubelet[2611]: E0414 12:41:17.687358 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:18.310548 kubelet[2611]: E0414 12:41:18.310261 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:18.525967 kubelet[2611]: E0414 12:41:18.520476 2611 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:41:18.525967 kubelet[2611]: E0414 12:41:18.520972 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:18.555196 kubelet[2611]: E0414 12:41:18.544521 2611 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 12:41:18.709442 kubelet[2611]: E0414 12:41:18.708236 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:19.347078 kubelet[2611]: E0414 12:41:19.346907 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:19.353155 kubelet[2611]: E0414 12:41:19.346908 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:21.146792 kubelet[2611]: E0414 12:41:21.145100 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:33.511263 kubelet[2611]: I0414 12:41:33.509632 2611 
kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 12:41:33.532891 containerd[1466]: time="2026-04-14T12:41:33.527561632Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 12:41:33.572577 kubelet[2611]: I0414 12:41:33.572161 2611 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 12:41:34.146276 sudo[1633]: pam_unix(sudo:session): session closed for user root Apr 14 12:41:34.213742 sshd[1622]: pam_unix(sshd:session): session closed for user core Apr 14 12:41:34.272837 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Apr 14 12:41:34.306500 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:36630.service: Deactivated successfully. Apr 14 12:41:34.436828 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 12:41:34.437833 systemd[1]: session-7.scope: Consumed 25.845s CPU time, 163.4M memory peak, 0B memory swap peak. Apr 14 12:41:34.557050 systemd-logind[1450]: Removed session 7. Apr 14 12:41:35.569881 kubelet[2611]: I0414 12:41:35.569662 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2fa7266-51c5-466b-a868-24c5ddaa1eb5-xtables-lock\") pod \"kube-proxy-nrpqh\" (UID: \"f2fa7266-51c5-466b-a868-24c5ddaa1eb5\") " pod="kube-system/kube-proxy-nrpqh" Apr 14 12:41:35.597446 kubelet[2611]: I0414 12:41:35.569892 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2fa7266-51c5-466b-a868-24c5ddaa1eb5-kube-proxy\") pod \"kube-proxy-nrpqh\" (UID: \"f2fa7266-51c5-466b-a868-24c5ddaa1eb5\") " pod="kube-system/kube-proxy-nrpqh" Apr 14 12:41:35.597446 kubelet[2611]: I0414 12:41:35.569907 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2fa7266-51c5-466b-a868-24c5ddaa1eb5-lib-modules\") pod \"kube-proxy-nrpqh\" (UID: \"f2fa7266-51c5-466b-a868-24c5ddaa1eb5\") " pod="kube-system/kube-proxy-nrpqh" Apr 14 12:41:35.597446 kubelet[2611]: I0414 12:41:35.569919 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9qs\" (UniqueName: \"kubernetes.io/projected/f2fa7266-51c5-466b-a868-24c5ddaa1eb5-kube-api-access-4g9qs\") pod \"kube-proxy-nrpqh\" (UID: \"f2fa7266-51c5-466b-a868-24c5ddaa1eb5\") " pod="kube-system/kube-proxy-nrpqh" Apr 14 12:41:35.609097 systemd[1]: Created slice kubepods-besteffort-podf2fa7266_51c5_466b_a868_24c5ddaa1eb5.slice - libcontainer container kubepods-besteffort-podf2fa7266_51c5_466b_a868_24c5ddaa1eb5.slice. 
Apr 14 12:41:35.948383 kubelet[2611]: I0414 12:41:35.947505 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a8726727-6ac0-481b-ba97-27723443187f-cni\") pod \"kube-flannel-ds-tkcpw\" (UID: \"a8726727-6ac0-481b-ba97-27723443187f\") " pod="kube-flannel/kube-flannel-ds-tkcpw" Apr 14 12:41:35.948383 kubelet[2611]: I0414 12:41:35.947644 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a8726727-6ac0-481b-ba97-27723443187f-flannel-cfg\") pod \"kube-flannel-ds-tkcpw\" (UID: \"a8726727-6ac0-481b-ba97-27723443187f\") " pod="kube-flannel/kube-flannel-ds-tkcpw" Apr 14 12:41:35.948383 kubelet[2611]: I0414 12:41:35.947720 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8726727-6ac0-481b-ba97-27723443187f-xtables-lock\") pod \"kube-flannel-ds-tkcpw\" (UID: \"a8726727-6ac0-481b-ba97-27723443187f\") " pod="kube-flannel/kube-flannel-ds-tkcpw" Apr 14 12:41:35.948383 kubelet[2611]: I0414 12:41:35.947869 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a8726727-6ac0-481b-ba97-27723443187f-cni-plugin\") pod \"kube-flannel-ds-tkcpw\" (UID: \"a8726727-6ac0-481b-ba97-27723443187f\") " pod="kube-flannel/kube-flannel-ds-tkcpw" Apr 14 12:41:35.948383 kubelet[2611]: I0414 12:41:35.947890 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a8726727-6ac0-481b-ba97-27723443187f-run\") pod \"kube-flannel-ds-tkcpw\" (UID: \"a8726727-6ac0-481b-ba97-27723443187f\") " pod="kube-flannel/kube-flannel-ds-tkcpw" Apr 14 12:41:35.949052 kubelet[2611]: I0414 12:41:35.947912 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrh4\" (UniqueName: \"kubernetes.io/projected/a8726727-6ac0-481b-ba97-27723443187f-kube-api-access-mdrh4\") pod \"kube-flannel-ds-tkcpw\" (UID: \"a8726727-6ac0-481b-ba97-27723443187f\") " pod="kube-flannel/kube-flannel-ds-tkcpw" Apr 14 12:41:36.046738 systemd[1]: Created slice kubepods-burstable-poda8726727_6ac0_481b_ba97_27723443187f.slice - libcontainer container kubepods-burstable-poda8726727_6ac0_481b_ba97_27723443187f.slice. Apr 14 12:41:36.107444 kubelet[2611]: E0414 12:41:36.106900 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:36.112630 containerd[1466]: time="2026-04-14T12:41:36.108862028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrpqh,Uid:f2fa7266-51c5-466b-a868-24c5ddaa1eb5,Namespace:kube-system,Attempt:0,}" Apr 14 12:41:36.301645 containerd[1466]: time="2026-04-14T12:41:36.300162112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:41:36.301645 containerd[1466]: time="2026-04-14T12:41:36.300417899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:41:36.301645 containerd[1466]: time="2026-04-14T12:41:36.300431502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:41:36.301645 containerd[1466]: time="2026-04-14T12:41:36.300558893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:41:36.427306 systemd[1]: Started cri-containerd-6cf4f29c176f01b1d548916febe8b839d31832937a04e875327aecdc18647200.scope - libcontainer container 6cf4f29c176f01b1d548916febe8b839d31832937a04e875327aecdc18647200. Apr 14 12:41:36.440887 kubelet[2611]: E0414 12:41:36.440844 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:36.458418 containerd[1466]: time="2026-04-14T12:41:36.458377302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tkcpw,Uid:a8726727-6ac0-481b-ba97-27723443187f,Namespace:kube-flannel,Attempt:0,}" Apr 14 12:41:36.877490 containerd[1466]: time="2026-04-14T12:41:36.877337314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrpqh,Uid:f2fa7266-51c5-466b-a868-24c5ddaa1eb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cf4f29c176f01b1d548916febe8b839d31832937a04e875327aecdc18647200\"" Apr 14 12:41:36.891442 kubelet[2611]: E0414 12:41:36.890790 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:37.458410 containerd[1466]: time="2026-04-14T12:41:37.457911662Z" level=info msg="CreateContainer within sandbox \"6cf4f29c176f01b1d548916febe8b839d31832937a04e875327aecdc18647200\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 12:41:37.593473 containerd[1466]: time="2026-04-14T12:41:37.592446126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:41:37.593473 containerd[1466]: time="2026-04-14T12:41:37.592916969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:41:37.593473 containerd[1466]: time="2026-04-14T12:41:37.592937048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:41:37.593473 containerd[1466]: time="2026-04-14T12:41:37.593199703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:41:37.631919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119487978.mount: Deactivated successfully. Apr 14 12:41:37.647649 containerd[1466]: time="2026-04-14T12:41:37.647494736Z" level=info msg="CreateContainer within sandbox \"6cf4f29c176f01b1d548916febe8b839d31832937a04e875327aecdc18647200\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb2e200dfc1622e1c120e9f95ac469ebdf7ee37340495d2b66ca35e26fceab7d\"" Apr 14 12:41:37.647706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543901641.mount: Deactivated successfully. 
Apr 14 12:41:37.650027 containerd[1466]: time="2026-04-14T12:41:37.649982150Z" level=info msg="StartContainer for \"fb2e200dfc1622e1c120e9f95ac469ebdf7ee37340495d2b66ca35e26fceab7d\"" Apr 14 12:41:37.711545 systemd[1]: Started cri-containerd-09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351.scope - libcontainer container 09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351. Apr 14 12:41:38.341293 systemd[1]: run-containerd-runc-k8s.io-fb2e200dfc1622e1c120e9f95ac469ebdf7ee37340495d2b66ca35e26fceab7d-runc.GT6JOk.mount: Deactivated successfully. Apr 14 12:41:38.367002 systemd[1]: Started cri-containerd-fb2e200dfc1622e1c120e9f95ac469ebdf7ee37340495d2b66ca35e26fceab7d.scope - libcontainer container fb2e200dfc1622e1c120e9f95ac469ebdf7ee37340495d2b66ca35e26fceab7d. Apr 14 12:41:38.633821 containerd[1466]: time="2026-04-14T12:41:38.566410856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tkcpw,Uid:a8726727-6ac0-481b-ba97-27723443187f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\"" Apr 14 12:41:38.644680 kubelet[2611]: E0414 12:41:38.644377 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:38.647364 containerd[1466]: time="2026-04-14T12:41:38.647266489Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 14 12:41:39.321936 containerd[1466]: time="2026-04-14T12:41:39.321665695Z" level=info msg="StartContainer for \"fb2e200dfc1622e1c120e9f95ac469ebdf7ee37340495d2b66ca35e26fceab7d\" returns successfully" Apr 14 12:41:39.899344 kubelet[2611]: E0414 12:41:39.896096 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:41.259087 kubelet[2611]: E0414 12:41:41.258847 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:44.267203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281576256.mount: Deactivated successfully. 
Apr 14 12:41:46.359406 containerd[1466]: time="2026-04-14T12:41:46.359000174Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Apr 14 12:41:46.475442 containerd[1466]: time="2026-04-14T12:41:46.474272676Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:41:46.888371 containerd[1466]: time="2026-04-14T12:41:46.859175168Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:41:47.031216 containerd[1466]: time="2026-04-14T12:41:47.026266909Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:41:47.123476 containerd[1466]: time="2026-04-14T12:41:47.073824286Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 8.426409101s" Apr 14 12:41:47.123476 containerd[1466]: time="2026-04-14T12:41:47.120567537Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 14 12:41:48.287977 containerd[1466]: time="2026-04-14T12:41:48.285704537Z" level=info msg="CreateContainer within sandbox \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 14 12:41:49.297852 containerd[1466]: time="2026-04-14T12:41:49.273085819Z" level=info msg="CreateContainer within sandbox \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b\"" Apr 14 12:41:49.417230 containerd[1466]: time="2026-04-14T12:41:49.415894131Z" level=info msg="StartContainer for \"795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b\"" Apr 14 12:41:50.874232 systemd[1]: Started cri-containerd-795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b.scope - libcontainer container 795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b. Apr 14 12:41:52.124341 containerd[1466]: time="2026-04-14T12:41:52.124155748Z" level=info msg="StartContainer for \"795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b\" returns successfully" Apr 14 12:41:52.172998 systemd[1]: cri-containerd-795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b.scope: Deactivated successfully. Apr 14 12:41:53.386870 kubelet[2611]: E0414 12:41:53.384505 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:53.419386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b-rootfs.mount: Deactivated successfully. 
Apr 14 12:41:53.945037 containerd[1466]: time="2026-04-14T12:41:53.919192563Z" level=info msg="shim disconnected" id=795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b namespace=k8s.io Apr 14 12:41:53.949537 containerd[1466]: time="2026-04-14T12:41:53.945117605Z" level=warning msg="cleaning up after shim disconnected" id=795cc00941e553abebb52e4b77431bdb3262c8d35a4756ee7e0b2add835ae39b namespace=k8s.io Apr 14 12:41:53.949537 containerd[1466]: time="2026-04-14T12:41:53.945449319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:41:54.639923 containerd[1466]: time="2026-04-14T12:41:54.637700463Z" level=warning msg="cleanup warnings time=\"2026-04-14T12:41:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 12:41:55.077859 kubelet[2611]: E0414 12:41:55.075973 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.023s" Apr 14 12:41:56.602225 kubelet[2611]: E0414 12:41:56.571389 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:41:56.933113 containerd[1466]: time="2026-04-14T12:41:56.927618999Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 14 12:41:59.443087 kubelet[2611]: E0414 12:41:59.442771 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.398s" Apr 14 12:42:01.311494 kubelet[2611]: E0414 12:42:01.309517 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.159s" Apr 14 12:42:04.501945 kubelet[2611]: E0414 12:42:04.481457 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.353s" Apr 14 12:42:06.652904 kubelet[2611]: E0414 12:42:06.649687 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 14 12:42:08.231830 kubelet[2611]: E0414 12:42:08.209906 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.982s" Apr 14 12:42:09.948030 systemd[1]: cri-containerd-e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29.scope: Deactivated successfully. Apr 14 12:42:09.957079 systemd[1]: cri-containerd-e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29.scope: Consumed 23.903s CPU time, 18.0M memory peak, 0B memory swap peak. 
Apr 14 12:42:10.265091 kubelet[2611]: E0414 12:42:10.241229 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.958s" Apr 14 12:42:10.908497 kubelet[2611]: E0414 12:42:10.908125 2611 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice/cri-containerd-e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29.scope\": RecentStats: unable to find data in memory cache]" Apr 14 12:42:11.439096 kubelet[2611]: E0414 12:42:11.428480 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.123s" Apr 14 12:42:17.100285 kubelet[2611]: E0414 12:42:17.058506 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.803s" Apr 14 12:42:17.341464 kubelet[2611]: E0414 12:42:17.111676 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 14 12:42:18.127691 systemd[1]: cri-containerd-dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86.scope: Deactivated successfully. Apr 14 12:42:18.140987 systemd[1]: cri-containerd-dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86.scope: Consumed 20.239s CPU time, 18.0M memory peak, 0B memory swap peak. Apr 14 12:42:19.320034 kubelet[2611]: I0414 12:42:19.299657 2611 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-nrpqh" podStartSLOduration=45.292133942 podStartE2EDuration="45.292133942s" podCreationTimestamp="2026-04-14 12:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 12:41:40.916105282 +0000 UTC m=+28.972224738" watchObservedRunningTime="2026-04-14 12:42:19.292133942 +0000 UTC m=+67.348253408" Apr 14 12:42:19.778010 kubelet[2611]: E0414 12:42:19.777540 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.647s" Apr 14 12:42:19.930357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29-rootfs.mount: Deactivated successfully. 
Apr 14 12:42:19.941302 kubelet[2611]: E0414 12:42:19.941225 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:20.441110 containerd[1466]: time="2026-04-14T12:42:20.365012953Z" level=info msg="shim disconnected" id=e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29 namespace=k8s.io Apr 14 12:42:20.617025 containerd[1466]: time="2026-04-14T12:42:20.589926484Z" level=warning msg="cleaning up after shim disconnected" id=e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29 namespace=k8s.io Apr 14 12:42:20.617025 containerd[1466]: time="2026-04-14T12:42:20.422934807Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29 Apr 14 12:42:20.617025 containerd[1466]: time="2026-04-14T12:42:20.613022607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:42:20.754904 containerd[1466]: time="2026-04-14T12:42:20.740089677Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29 delete" error="fork/exec /usr/bin/containerd-shim-runc-v2: no such file or directory" namespace=k8s.io Apr 14 12:42:20.754904 containerd[1466]: time="2026-04-14T12:42:20.740738725Z" level=warning msg="failed to clean up after shim disconnected" error=": fork/exec /usr/bin/containerd-shim-runc-v2: no such file or directory" id=e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29 namespace=k8s.io Apr 14 12:42:22.172786 kubelet[2611]: E0414 12:42:22.172653 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:23.528973 kubelet[2611]: I0414 12:42:23.511312 2611 scope.go:122] "RemoveContainer" containerID="e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29" Apr 14 12:42:23.528973 kubelet[2611]: E0414 12:42:23.511656 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:24.431359 containerd[1466]: time="2026-04-14T12:42:24.429055961Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 14 12:42:24.562252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86-rootfs.mount: Deactivated successfully. 
Apr 14 12:42:24.933966 containerd[1466]: time="2026-04-14T12:42:24.932572882Z" level=info msg="shim disconnected" id=dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86 namespace=k8s.io Apr 14 12:42:24.944678 containerd[1466]: time="2026-04-14T12:42:24.943879805Z" level=warning msg="cleaning up after shim disconnected" id=dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86 namespace=k8s.io Apr 14 12:42:24.944678 containerd[1466]: time="2026-04-14T12:42:24.944285319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:42:25.063078 containerd[1466]: time="2026-04-14T12:42:25.058436384Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7\"" Apr 14 12:42:25.142217 containerd[1466]: time="2026-04-14T12:42:25.138008055Z" level=info msg="StartContainer for \"474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7\"" Apr 14 12:42:25.871403 systemd[1]: run-containerd-runc-k8s.io-474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7-runc.ID4Aaa.mount: Deactivated successfully. Apr 14 12:42:26.172472 systemd[1]: Started cri-containerd-474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7.scope - libcontainer container 474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7. Apr 14 12:42:26.834937 containerd[1466]: time="2026-04-14T12:42:26.834474727Z" level=info msg="StartContainer for \"474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7\" returns successfully" Apr 14 12:42:28.350214 kubelet[2611]: I0414 12:42:28.343832 2611 scope.go:122] "RemoveContainer" containerID="dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86" Apr 14 12:42:28.350214 kubelet[2611]: E0414 12:42:28.343923 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:28.722065 kubelet[2611]: E0414 12:42:28.711296 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:29.112115 containerd[1466]: time="2026-04-14T12:42:29.109750655Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 14 12:42:29.657581 containerd[1466]: time="2026-04-14T12:42:29.566542088Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb\"" Apr 14 12:42:29.682194 containerd[1466]: time="2026-04-14T12:42:29.681286860Z" level=info msg="StartContainer for \"a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb\"" Apr 14 12:42:30.863628 systemd[1]: Started cri-containerd-a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb.scope - libcontainer container a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb. 
Apr 14 12:42:31.272289 kubelet[2611]: E0414 12:42:31.268907 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:31.944950 containerd[1466]: time="2026-04-14T12:42:31.944292422Z" level=info msg="StartContainer for \"a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb\" returns successfully" Apr 14 12:42:32.443306 kubelet[2611]: E0414 12:42:32.440538 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:32.453358 kubelet[2611]: E0414 12:42:32.450195 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:33.544968 kubelet[2611]: E0414 12:42:33.542076 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:34.635275 kubelet[2611]: E0414 12:42:34.635176 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:35.644085 kubelet[2611]: E0414 12:42:35.643874 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:36.646259 containerd[1466]: time="2026-04-14T12:42:36.645368471Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:42:36.650708 containerd[1466]: time="2026-04-14T12:42:36.650653479Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Apr 14 12:42:36.652482 containerd[1466]: time="2026-04-14T12:42:36.652453080Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:42:36.762359 containerd[1466]: time="2026-04-14T12:42:36.762061150Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 12:42:36.768579 containerd[1466]: time="2026-04-14T12:42:36.768333887Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 39.835829958s" Apr 14 12:42:36.768579 containerd[1466]: time="2026-04-14T12:42:36.768419232Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 14 12:42:36.801428 containerd[1466]: time="2026-04-14T12:42:36.800958706Z" level=info msg="CreateContainer within sandbox \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 12:42:36.870203 containerd[1466]: 
time="2026-04-14T12:42:36.869628333Z" level=info msg="CreateContainer within sandbox \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669\"" Apr 14 12:42:36.880629 containerd[1466]: time="2026-04-14T12:42:36.880496124Z" level=info msg="StartContainer for \"063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669\"" Apr 14 12:42:37.233015 systemd[1]: Started cri-containerd-063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669.scope - libcontainer container 063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669. Apr 14 12:42:37.360245 systemd[1]: cri-containerd-063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669.scope: Deactivated successfully. Apr 14 12:42:37.375138 containerd[1466]: time="2026-04-14T12:42:37.374470325Z" level=info msg="StartContainer for \"063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669\" returns successfully" Apr 14 12:42:37.402795 kubelet[2611]: I0414 12:42:37.402309 2611 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 14 12:42:37.476120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669-rootfs.mount: Deactivated successfully. Apr 14 12:42:37.651092 containerd[1466]: time="2026-04-14T12:42:37.650858586Z" level=info msg="shim disconnected" id=063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669 namespace=k8s.io Apr 14 12:42:37.670710 containerd[1466]: time="2026-04-14T12:42:37.651814020Z" level=warning msg="cleaning up after shim disconnected" id=063f7e4fd883bad94d10bf63133139ca63d25a5179f735354d08033e8ce24669 namespace=k8s.io Apr 14 12:42:37.670710 containerd[1466]: time="2026-04-14T12:42:37.651831048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:42:37.930438 kubelet[2611]: E0414 12:42:37.919439 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:38.958208 kubelet[2611]: E0414 12:42:38.954578 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:39.019061 containerd[1466]: time="2026-04-14T12:42:39.018909657Z" level=info msg="CreateContainer within sandbox \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 14 12:42:39.204331 containerd[1466]: time="2026-04-14T12:42:39.200378313Z" level=info msg="CreateContainer within sandbox \"09b34daa12055410920cd2c59ab2988c002d485c993940518b96cc86cb940351\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c94e8e5c2ab2ac9d39f600023d1cb8761d5338608b4b635985a74a169deb87dd\"" Apr 14 12:42:39.217032 containerd[1466]: time="2026-04-14T12:42:39.213230290Z" level=info msg="StartContainer for \"c94e8e5c2ab2ac9d39f600023d1cb8761d5338608b4b635985a74a169deb87dd\"" Apr 14 12:42:39.503883 systemd[1]: Started cri-containerd-c94e8e5c2ab2ac9d39f600023d1cb8761d5338608b4b635985a74a169deb87dd.scope - libcontainer container c94e8e5c2ab2ac9d39f600023d1cb8761d5338608b4b635985a74a169deb87dd. 
Apr 14 12:42:39.738683 containerd[1466]: time="2026-04-14T12:42:39.738337691Z" level=info msg="StartContainer for \"c94e8e5c2ab2ac9d39f600023d1cb8761d5338608b4b635985a74a169deb87dd\" returns successfully" Apr 14 12:42:39.991437 kubelet[2611]: E0414 12:42:39.991196 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:40.031507 kubelet[2611]: E0414 12:42:40.031116 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:41.176262 kubelet[2611]: E0414 12:42:41.172242 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:41.537956 kubelet[2611]: I0414 12:42:41.535869 2611 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tkcpw" podStartSLOduration=6.21297606 podStartE2EDuration="1m6.535687039s" podCreationTimestamp="2026-04-14 12:41:35 +0000 UTC" firstStartedPulling="2026-04-14 12:41:38.646450709 +0000 UTC m=+26.702570158" lastFinishedPulling="2026-04-14 12:42:38.969161686 +0000 UTC m=+87.025281137" observedRunningTime="2026-04-14 12:42:41.181574813 +0000 UTC m=+89.237694266" watchObservedRunningTime="2026-04-14 12:42:41.535687039 +0000 UTC m=+89.591806482" Apr 14 12:42:41.549337 systemd-networkd[1392]: flannel.1: Link UP Apr 14 12:42:41.549354 systemd-networkd[1392]: flannel.1: Gained carrier Apr 14 12:42:42.718433 systemd-networkd[1392]: flannel.1: Gained IPv6LL Apr 14 12:42:45.282891 kubelet[2611]: E0414 12:42:45.280210 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.235s" Apr 14 12:42:45.352268 kubelet[2611]: E0414 12:42:45.339559 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:46.274296 kubelet[2611]: E0414 12:42:46.248415 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:46.304610 kubelet[2611]: E0414 12:42:46.303561 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:42:54.104123 kubelet[2611]: E0414 12:42:54.094500 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:01.325231 systemd[1]: Created slice kubepods-burstable-podfb975314_b950_4dd9_9942_b30d52d99a2a.slice - libcontainer container kubepods-burstable-podfb975314_b950_4dd9_9942_b30d52d99a2a.slice. Apr 14 12:43:01.425450 systemd[1]: Created slice kubepods-burstable-podc0e24bdb_6150_4745_b66b_9386ee241a93.slice - libcontainer container kubepods-burstable-podc0e24bdb_6150_4745_b66b_9386ee241a93.slice. 
Apr 14 12:43:01.444808 kubelet[2611]: I0414 12:43:01.444301 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e24bdb-6150-4745-b66b-9386ee241a93-config-volume\") pod \"coredns-7d764666f9-f44gt\" (UID: \"c0e24bdb-6150-4745-b66b-9386ee241a93\") " pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:43:01.444808 kubelet[2611]: I0414 12:43:01.444462 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdv8h\" (UniqueName: \"kubernetes.io/projected/c0e24bdb-6150-4745-b66b-9386ee241a93-kube-api-access-xdv8h\") pod \"coredns-7d764666f9-f44gt\" (UID: \"c0e24bdb-6150-4745-b66b-9386ee241a93\") " pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:43:01.444808 kubelet[2611]: I0414 12:43:01.444509 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb975314-b950-4dd9-9942-b30d52d99a2a-config-volume\") pod \"coredns-7d764666f9-spttk\" (UID: \"fb975314-b950-4dd9-9942-b30d52d99a2a\") " pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:43:01.444808 kubelet[2611]: I0414 12:43:01.444557 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflkx\" (UniqueName: \"kubernetes.io/projected/fb975314-b950-4dd9-9942-b30d52d99a2a-kube-api-access-tflkx\") pod \"coredns-7d764666f9-spttk\" (UID: \"fb975314-b950-4dd9-9942-b30d52d99a2a\") " pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:43:01.657199 kubelet[2611]: E0414 12:43:01.656090 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:01.658942 containerd[1466]: time="2026-04-14T12:43:01.658842834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-spttk,Uid:fb975314-b950-4dd9-9942-b30d52d99a2a,Namespace:kube-system,Attempt:0,}" Apr 14 12:43:01.761184 kubelet[2611]: E0414 12:43:01.760676 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:01.763815 containerd[1466]: time="2026-04-14T12:43:01.761741754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f44gt,Uid:c0e24bdb-6150-4745-b66b-9386ee241a93,Namespace:kube-system,Attempt:0,}" Apr 14 12:43:01.799807 systemd-networkd[1392]: cni0: Link UP Apr 14 12:43:01.804570 systemd-networkd[1392]: cni0: Gained carrier Apr 14 12:43:01.805011 systemd-networkd[1392]: cni0: Lost carrier Apr 14 12:43:01.807039 systemd-networkd[1392]: vethd0f65be3: Link UP Apr 14 12:43:01.808891 kernel: cni0: port 1(vethd0f65be3) entered blocking state Apr 14 12:43:01.808958 kernel: cni0: port 1(vethd0f65be3) entered disabled state Apr 14 12:43:01.808977 kernel: vethd0f65be3: entered allmulticast mode Apr 14 12:43:01.816469 kernel: vethd0f65be3: entered promiscuous mode Apr 14 12:43:01.818660 kernel: cni0: port 1(vethd0f65be3) entered blocking state Apr 14 12:43:01.818764 kernel: cni0: port 1(vethd0f65be3) entered forwarding state Apr 14 12:43:01.818778 kernel: cni0: port 1(vethd0f65be3) entered disabled state Apr 14 12:43:01.837024 kernel: cni0: port 1(vethd0f65be3) entered blocking state Apr 14 12:43:01.837700 kernel: cni0: port 1(vethd0f65be3) entered forwarding state Apr 14 
12:43:01.839064 systemd-networkd[1392]: vethd0f65be3: Gained carrier Apr 14 12:43:01.839936 systemd-networkd[1392]: cni0: Gained carrier Apr 14 12:43:01.843493 containerd[1466]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Apr 14 12:43:01.843493 containerd[1466]: delegateAdd: netconf sent to delegate plugin: Apr 14 12:43:01.860227 systemd-networkd[1392]: veth712c369a: Link UP Apr 14 12:43:01.865080 kernel: cni0: port 2(veth712c369a) entered blocking state Apr 14 12:43:01.865939 kernel: cni0: port 2(veth712c369a) entered disabled state Apr 14 12:43:01.866011 kernel: veth712c369a: entered allmulticast mode Apr 14 12:43:01.867213 kernel: veth712c369a: entered promiscuous mode Apr 14 12:43:01.868611 kernel: cni0: port 2(veth712c369a) entered blocking state Apr 14 12:43:01.868643 kernel: cni0: port 2(veth712c369a) entered forwarding state Apr 14 12:43:01.890199 systemd-networkd[1392]: veth712c369a: Gained carrier Apr 14 12:43:01.893714 containerd[1466]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Apr 14 12:43:01.893714 containerd[1466]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Apr 14 12:43:01.893714 containerd[1466]: delegateAdd: netconf sent to delegate plugin: Apr 14 12:43:01.925323 containerd[1466]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-14T12:43:01.923876343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:43:01.925323 containerd[1466]: time="2026-04-14T12:43:01.924057170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:43:01.925323 containerd[1466]: time="2026-04-14T12:43:01.924074187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:43:01.925323 containerd[1466]: time="2026-04-14T12:43:01.924217862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:43:01.961241 containerd[1466]: time="2026-04-14T12:43:01.960785088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:43:01.961241 containerd[1466]: time="2026-04-14T12:43:01.960886637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:43:01.961241 containerd[1466]: time="2026-04-14T12:43:01.960909411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:43:01.961241 containerd[1466]: time="2026-04-14T12:43:01.960981939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:43:01.975724 systemd[1]: Started cri-containerd-113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a.scope - libcontainer container 113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a. Apr 14 12:43:02.023809 systemd[1]: Started cri-containerd-3d91cf6f3c853bb6a0265024aa1649584842b98c6b04d8516cc91e18a53a250b.scope - libcontainer container 3d91cf6f3c853bb6a0265024aa1649584842b98c6b04d8516cc91e18a53a250b. Apr 14 12:43:02.043756 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 12:43:02.091080 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 12:43:02.128848 containerd[1466]: time="2026-04-14T12:43:02.127932675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-spttk,Uid:fb975314-b950-4dd9-9942-b30d52d99a2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\"" Apr 14 12:43:02.131988 kubelet[2611]: E0414 12:43:02.131934 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:02.176041 containerd[1466]: time="2026-04-14T12:43:02.175182525Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 12:43:02.240416 containerd[1466]: time="2026-04-14T12:43:02.240209952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f44gt,Uid:c0e24bdb-6150-4745-b66b-9386ee241a93,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d91cf6f3c853bb6a0265024aa1649584842b98c6b04d8516cc91e18a53a250b\"" Apr 14 12:43:02.243930 kubelet[2611]: E0414 12:43:02.243896 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:02.255694 containerd[1466]: time="2026-04-14T12:43:02.255409435Z" level=info msg="CreateContainer within sandbox \"3d91cf6f3c853bb6a0265024aa1649584842b98c6b04d8516cc91e18a53a250b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 12:43:02.275600 containerd[1466]: time="2026-04-14T12:43:02.275447915Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\"" Apr 14 12:43:02.278340 containerd[1466]: time="2026-04-14T12:43:02.278306370Z" level=info msg="StartContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\"" Apr 14 12:43:02.312176 containerd[1466]: time="2026-04-14T12:43:02.311897978Z" level=info msg="CreateContainer within sandbox \"3d91cf6f3c853bb6a0265024aa1649584842b98c6b04d8516cc91e18a53a250b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\"" Apr 14 12:43:02.321319 containerd[1466]: time="2026-04-14T12:43:02.321143982Z" level=info msg="StartContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\"" Apr 14 12:43:02.414615 systemd[1]: Started cri-containerd-8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3.scope - libcontainer container 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3. Apr 14 12:43:02.628679 systemd[1]: Started cri-containerd-deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d.scope - libcontainer container deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d. Apr 14 12:43:02.823840 containerd[1466]: time="2026-04-14T12:43:02.820637228Z" level=info msg="StartContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" returns successfully" Apr 14 12:43:02.989828 containerd[1466]: time="2026-04-14T12:43:02.989219461Z" level=info msg="StartContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" returns successfully" Apr 14 12:43:03.067720 kubelet[2611]: E0414 12:43:03.059140 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:03.134279 systemd-networkd[1392]: cni0: Gained IPv6LL Apr 14 12:43:03.142657 systemd-networkd[1392]: vethd0f65be3: Gained IPv6LL Apr 14 12:43:03.143220 kubelet[2611]: E0414 12:43:03.143157 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:03.265370 kubelet[2611]: I0414 12:43:03.262729 2611 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-f44gt" podStartSLOduration=88.262574477 podStartE2EDuration="1m28.262574477s" podCreationTimestamp="2026-04-14 12:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 12:43:03.258394215 +0000 UTC m=+111.314513666" watchObservedRunningTime="2026-04-14 12:43:03.262574477 +0000 UTC m=+111.318693919" Apr 14 12:43:03.704160 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756). 
Apr 14 12:43:03.772365 systemd-networkd[1392]: veth712c369a: Gained IPv6LL Apr 14 12:43:03.941420 kubelet[2611]: I0414 12:43:03.932207 2611 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-spttk" podStartSLOduration=88.932123601 podStartE2EDuration="1m28.932123601s" podCreationTimestamp="2026-04-14 12:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 12:43:03.91139124 +0000 UTC m=+111.967510686" watchObservedRunningTime="2026-04-14 12:43:03.932123601 +0000 UTC m=+111.988243046" Apr 14 12:43:04.060215 sshd[3635]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:04.134096 sshd[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:04.194765 systemd-logind[1450]: New session 8 of user core. Apr 14 12:43:04.207295 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 14 12:43:04.217347 kubelet[2611]: E0414 12:43:04.217285 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:04.230327 kubelet[2611]: E0414 12:43:04.229818 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:05.225424 kubelet[2611]: E0414 12:43:05.221958 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:05.234764 kubelet[2611]: E0414 12:43:05.232945 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:05.349127 sshd[3635]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:05.376122 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:52756.service: Deactivated successfully. Apr 14 12:43:05.382023 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 12:43:05.392237 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Apr 14 12:43:05.399430 systemd-logind[1450]: Removed session 8. Apr 14 12:43:10.621065 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:58410.service - OpenSSH per-connection server daemon (10.0.0.1:58410). Apr 14 12:43:11.359797 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:11.373488 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:11.534414 systemd-logind[1450]: New session 9 of user core. Apr 14 12:43:11.558645 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 12:43:16.798512 sshd[3679]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:17.038235 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:58410.service: Deactivated successfully. Apr 14 12:43:17.194863 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 12:43:17.207519 systemd[1]: session-9.scope: Consumed 1.640s CPU time. Apr 14 12:43:17.259269 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Apr 14 12:43:17.270268 systemd-logind[1450]: Removed session 9. 
Apr 14 12:43:21.847035 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:48180.service - OpenSSH per-connection server daemon (10.0.0.1:48180). Apr 14 12:43:22.007785 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 48180 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:22.018691 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:22.039159 systemd-logind[1450]: New session 10 of user core. Apr 14 12:43:22.049946 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 12:43:23.061985 sshd[3744]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:23.103218 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:48180.service: Deactivated successfully. Apr 14 12:43:23.128440 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 12:43:23.175258 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Apr 14 12:43:23.241380 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:48188.service - OpenSSH per-connection server daemon (10.0.0.1:48188). Apr 14 12:43:23.276822 systemd-logind[1450]: Removed session 10. Apr 14 12:43:23.513088 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 48188 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:23.528473 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:23.557198 systemd-logind[1450]: New session 11 of user core. Apr 14 12:43:23.619281 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 12:43:25.302390 kubelet[2611]: E0414 12:43:25.289638 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.221s" Apr 14 12:43:30.745228 sshd[3760]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:30.779126 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:48188.service: Deactivated successfully. Apr 14 12:43:30.781540 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 12:43:30.781770 systemd[1]: session-11.scope: Consumed 3.001s CPU time. Apr 14 12:43:30.786574 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Apr 14 12:43:30.804387 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:55624.service - OpenSSH per-connection server daemon (10.0.0.1:55624). Apr 14 12:43:30.825861 systemd-logind[1450]: Removed session 11. Apr 14 12:43:31.027721 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 55624 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:31.071686 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:31.201872 systemd-logind[1450]: New session 12 of user core. Apr 14 12:43:31.211626 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 12:43:33.137857 sshd[3792]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:33.269377 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:55624.service: Deactivated successfully. Apr 14 12:43:33.340140 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 12:43:33.341429 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Apr 14 12:43:33.420325 systemd-logind[1450]: Removed session 12. Apr 14 12:43:38.173191 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:55634.service - OpenSSH per-connection server daemon (10.0.0.1:55634). 
Apr 14 12:43:38.442682 sshd[3829]: Accepted publickey for core from 10.0.0.1 port 55634 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:38.568065 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:38.669376 systemd-logind[1450]: New session 13 of user core. Apr 14 12:43:38.792425 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 14 12:43:39.452385 sshd[3829]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:39.648970 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:55634.service: Deactivated successfully. Apr 14 12:43:39.719762 systemd[1]: session-13.scope: Deactivated successfully. Apr 14 12:43:39.739530 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Apr 14 12:43:39.752129 systemd-logind[1450]: Removed session 13. Apr 14 12:43:44.558903 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:49222.service - OpenSSH per-connection server daemon (10.0.0.1:49222). Apr 14 12:43:44.640702 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 49222 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:44.645370 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:44.755870 systemd-logind[1450]: New session 14 of user core. Apr 14 12:43:44.772489 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 14 12:43:45.235260 sshd[3888]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:45.249962 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:49222.service: Deactivated successfully. Apr 14 12:43:45.252860 systemd[1]: session-14.scope: Deactivated successfully. Apr 14 12:43:45.253545 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Apr 14 12:43:45.268193 systemd-logind[1450]: Removed session 14. Apr 14 12:43:50.338185 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:35854.service - OpenSSH per-connection server daemon (10.0.0.1:35854). Apr 14 12:43:50.564088 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 35854 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:50.567868 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:50.669288 systemd-logind[1450]: New session 15 of user core. Apr 14 12:43:50.682509 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 14 12:43:51.458496 sshd[3923]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:51.472295 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:35854.service: Deactivated successfully. Apr 14 12:43:51.474076 systemd[1]: session-15.scope: Deactivated successfully. Apr 14 12:43:51.493809 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Apr 14 12:43:51.523402 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:35856.service - OpenSSH per-connection server daemon (10.0.0.1:35856). Apr 14 12:43:51.538559 systemd-logind[1450]: Removed session 15. Apr 14 12:43:51.644361 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:51.647203 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:51.693555 systemd-logind[1450]: New session 16 of user core. Apr 14 12:43:51.719761 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 14 12:43:52.141017 sshd[3938]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:52.150898 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:35856.service: Deactivated successfully. Apr 14 12:43:52.153029 systemd[1]: session-16.scope: Deactivated successfully. Apr 14 12:43:52.154480 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Apr 14 12:43:52.183306 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:35862.service - OpenSSH per-connection server daemon (10.0.0.1:35862). Apr 14 12:43:52.198879 systemd-logind[1450]: Removed session 16. Apr 14 12:43:52.295415 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 35862 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:52.298036 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:52.303401 systemd-logind[1450]: New session 17 of user core. Apr 14 12:43:52.315495 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 14 12:43:54.055401 kubelet[2611]: E0414 12:43:54.052882 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:43:56.680659 sshd[3951]: pam_unix(sshd:session): session closed for user core Apr 14 12:43:56.712956 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:35862.service: Deactivated successfully. Apr 14 12:43:56.820527 systemd[1]: session-17.scope: Deactivated successfully. Apr 14 12:43:56.821995 systemd[1]: session-17.scope: Consumed 1.824s CPU time. Apr 14 12:43:56.832263 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Apr 14 12:43:56.894120 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:35874.service - OpenSSH per-connection server daemon (10.0.0.1:35874). Apr 14 12:43:56.907787 systemd-logind[1450]: Removed session 17. Apr 14 12:43:57.358016 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 35874 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:43:57.367370 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:43:57.409942 systemd-logind[1450]: New session 18 of user core. Apr 14 12:43:57.431913 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 14 12:44:13.392187 kubelet[2611]: E0414 12:44:13.264405 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.217s" Apr 14 12:44:16.013453 kubelet[2611]: E0414 12:44:16.012845 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:16.521349 kubelet[2611]: E0414 12:44:16.516959 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 14 12:44:17.556384 kubelet[2611]: E0414 12:44:17.556180 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:17.614919 kubelet[2611]: E0414 12:44:17.609558 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.757s" Apr 14 12:44:17.762941 kubelet[2611]: E0414 12:44:17.762486 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:17.947745 kubelet[2611]: E0414 12:44:17.946422 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:18.731504 systemd[1]: cri-containerd-474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7.scope: Deactivated successfully. Apr 14 12:44:18.737544 systemd[1]: cri-containerd-474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7.scope: Consumed 27.221s CPU time. Apr 14 12:44:22.672312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7-rootfs.mount: Deactivated successfully. Apr 14 12:44:22.809460 containerd[1466]: time="2026-04-14T12:44:22.806189760Z" level=info msg="shim disconnected" id=474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7 namespace=k8s.io Apr 14 12:44:22.809460 containerd[1466]: time="2026-04-14T12:44:22.806427278Z" level=warning msg="cleaning up after shim disconnected" id=474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7 namespace=k8s.io Apr 14 12:44:22.844394 containerd[1466]: time="2026-04-14T12:44:22.806442368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:44:23.024012 kubelet[2611]: E0414 12:44:23.006969 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:23.039191 systemd[1]: cri-containerd-a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb.scope: Deactivated successfully. Apr 14 12:44:23.040748 systemd[1]: cri-containerd-a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb.scope: Consumed 15.880s CPU time. 
Apr 14 12:44:23.749562 kubelet[2611]: E0414 12:44:23.736422 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.527s" Apr 14 12:44:24.017277 kubelet[2611]: E0414 12:44:24.016384 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:27.732437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb-rootfs.mount: Deactivated successfully. Apr 14 12:44:27.794374 containerd[1466]: time="2026-04-14T12:44:27.791778376Z" level=info msg="shim disconnected" id=a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb namespace=k8s.io Apr 14 12:44:27.794374 containerd[1466]: time="2026-04-14T12:44:27.793197189Z" level=warning msg="cleaning up after shim disconnected" id=a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb namespace=k8s.io Apr 14 12:44:27.794374 containerd[1466]: time="2026-04-14T12:44:27.793214995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:44:28.007030 kubelet[2611]: I0414 12:44:28.006925 2611 scope.go:122] "RemoveContainer" containerID="e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29" Apr 14 12:44:28.007578 kubelet[2611]: I0414 12:44:28.007280 2611 scope.go:122] "RemoveContainer" containerID="474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7" Apr 14 12:44:28.007578 kubelet[2611]: E0414 12:44:28.007332 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:28.007578 kubelet[2611]: E0414 12:44:28.007474 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(bd70d524e6bc561f2082b467706799ed)\"" pod="kube-system/kube-controller-manager-localhost" podUID="bd70d524e6bc561f2082b467706799ed" Apr 14 12:44:28.173720 kubelet[2611]: E0414 12:44:28.173474 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:28.225334 containerd[1466]: time="2026-04-14T12:44:28.173987886Z" level=info msg="RemoveContainer for \"e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29\"" Apr 14 12:44:28.247561 sshd[3990]: pam_unix(sshd:session): session closed for user core Apr 14 12:44:28.335879 containerd[1466]: time="2026-04-14T12:44:28.335472456Z" level=info msg="RemoveContainer for \"e4e0d694920dace3b0160d0dc5f65705d37c4fd0c45f4b4dbfe6e1be020f4d29\" returns successfully" Apr 14 12:44:28.340937 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:35874.service: Deactivated successfully. Apr 14 12:44:28.345788 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 12:44:28.346029 systemd[1]: session-18.scope: Consumed 14.804s CPU time. Apr 14 12:44:28.364270 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Apr 14 12:44:28.387441 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:42382.service - OpenSSH per-connection server daemon (10.0.0.1:42382). Apr 14 12:44:28.399992 systemd-logind[1450]: Removed session 18. 
Apr 14 12:44:28.562920 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 42382 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:44:28.579354 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:44:28.629767 systemd-logind[1450]: New session 19 of user core. Apr 14 12:44:28.761192 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 12:44:29.141060 kubelet[2611]: I0414 12:44:29.136139 2611 scope.go:122] "RemoveContainer" containerID="dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86" Apr 14 12:44:29.216357 kubelet[2611]: I0414 12:44:29.210957 2611 scope.go:122] "RemoveContainer" containerID="a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb" Apr 14 12:44:29.216357 kubelet[2611]: E0414 12:44:29.211194 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:29.237187 kubelet[2611]: E0414 12:44:29.236538 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(3566c1d7ed03bb3c60facf009a5678dd)\"" pod="kube-system/kube-scheduler-localhost" podUID="3566c1d7ed03bb3c60facf009a5678dd" Apr 14 12:44:29.267148 containerd[1466]: time="2026-04-14T12:44:29.258264788Z" level=info msg="RemoveContainer for \"dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86\"" Apr 14 12:44:29.325512 containerd[1466]: time="2026-04-14T12:44:29.325280746Z" level=info msg="RemoveContainer for \"dbef3dfc5774f46dca15615bea398a334ad43fbd81fb85c2ef49b20ee7536c86\" returns successfully" Apr 14 12:44:30.136554 kubelet[2611]: I0414 12:44:30.127025 2611 scope.go:122] "RemoveContainer" containerID="474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7" Apr 14 12:44:30.136554 kubelet[2611]: E0414 12:44:30.129118 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:30.328569 containerd[1466]: time="2026-04-14T12:44:30.327136062Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 14 12:44:31.359358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736940554.mount: Deactivated successfully. 
Apr 14 12:44:31.395542 kubelet[2611]: I0414 12:44:31.363465 2611 scope.go:122] "RemoveContainer" containerID="a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb" Apr 14 12:44:31.581095 kubelet[2611]: E0414 12:44:31.395645 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:31.581095 kubelet[2611]: E0414 12:44:31.460348 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(3566c1d7ed03bb3c60facf009a5678dd)\"" pod="kube-system/kube-scheduler-localhost" podUID="3566c1d7ed03bb3c60facf009a5678dd" Apr 14 12:44:31.669545 containerd[1466]: time="2026-04-14T12:44:31.642527077Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\"" Apr 14 12:44:31.830829 containerd[1466]: time="2026-04-14T12:44:31.827133185Z" level=info msg="StartContainer for \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\"" Apr 14 12:44:35.537835 kubelet[2611]: E0414 12:44:35.537282 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.415s" Apr 14 12:44:35.734314 kubelet[2611]: I0414 12:44:35.717219 2611 scope.go:122] "RemoveContainer" containerID="a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb" Apr 14 12:44:35.734314 kubelet[2611]: E0414 12:44:35.722568 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:37.967953 containerd[1466]: time="2026-04-14T12:44:37.929561966Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 14 12:44:38.804881 kubelet[2611]: E0414 12:44:38.747381 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.478s" Apr 14 12:44:39.820708 sshd[4130]: pam_unix(sshd:session): session closed for user core Apr 14 12:44:40.157924 systemd[1]: Started cri-containerd-778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b.scope - libcontainer container 778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b. Apr 14 12:44:40.268892 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:42382.service: Deactivated successfully. Apr 14 12:44:40.418531 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 12:44:40.435459 systemd[1]: session-19.scope: Consumed 3.939s CPU time. Apr 14 12:44:40.515994 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Apr 14 12:44:40.688737 systemd-logind[1450]: Removed session 19. 
Apr 14 12:44:41.453072 containerd[1466]: time="2026-04-14T12:44:41.443398697Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8\"" Apr 14 12:44:41.596289 containerd[1466]: time="2026-04-14T12:44:41.595093039Z" level=info msg="StartContainer for \"c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8\"" Apr 14 12:44:42.767320 kubelet[2611]: E0414 12:44:42.766960 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.964s" Apr 14 12:44:44.837836 containerd[1466]: time="2026-04-14T12:44:44.827628740Z" level=error msg="get state for 778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" error="context deadline exceeded: unknown" Apr 14 12:44:44.837836 containerd[1466]: time="2026-04-14T12:44:44.827894969Z" level=warning msg="unknown status" status=0 Apr 14 12:44:44.963488 kubelet[2611]: E0414 12:44:44.949633 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.115s" Apr 14 12:44:45.467507 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:46084.service - OpenSSH per-connection server daemon (10.0.0.1:46084). Apr 14 12:44:46.465464 kubelet[2611]: E0414 12:44:46.425517 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.46s" Apr 14 12:44:47.693251 containerd[1466]: time="2026-04-14T12:44:47.656430954Z" level=error msg="get state for 778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" error="context deadline exceeded: unknown" Apr 14 12:44:47.721489 containerd[1466]: time="2026-04-14T12:44:47.710225882Z" level=warning msg="unknown status" status=0 Apr 14 12:44:48.454573 containerd[1466]: time="2026-04-14T12:44:48.446066780Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 12:44:48.454573 containerd[1466]: time="2026-04-14T12:44:48.446142397Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 14 12:44:48.598476 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 46084 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:44:48.934549 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:44:49.276424 kubelet[2611]: E0414 12:44:49.271154 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.145s" Apr 14 12:44:49.329389 systemd-logind[1450]: New session 20 of user core. Apr 14 12:44:49.452231 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 12:44:49.847874 systemd[1]: Started cri-containerd-c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8.scope - libcontainer container c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8. 
Apr 14 12:44:50.360688 containerd[1466]: time="2026-04-14T12:44:50.360387439Z" level=info msg="StartContainer for \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\" returns successfully" Apr 14 12:44:51.554450 kubelet[2611]: E0414 12:44:51.553833 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:52.261666 containerd[1466]: time="2026-04-14T12:44:52.261269720Z" level=error msg="get state for c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8" error="context deadline exceeded: unknown" Apr 14 12:44:52.261666 containerd[1466]: time="2026-04-14T12:44:52.261543656Z" level=warning msg="unknown status" status=0 Apr 14 12:44:52.617320 kubelet[2611]: E0414 12:44:52.611499 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:52.936678 containerd[1466]: time="2026-04-14T12:44:52.861578644Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 12:44:53.541048 containerd[1466]: time="2026-04-14T12:44:53.540825754Z" level=info msg="StartContainer for \"c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8\" returns successfully" Apr 14 12:44:54.266128 kubelet[2611]: E0414 12:44:54.265465 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:54.636690 sshd[4211]: pam_unix(sshd:session): session closed for user core Apr 14 12:44:54.850539 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:46084.service: Deactivated successfully. Apr 14 12:44:54.909885 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 12:44:54.914899 systemd[1]: session-20.scope: Consumed 2.448s CPU time. Apr 14 12:44:54.931413 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Apr 14 12:44:55.043050 systemd-logind[1450]: Removed session 20. Apr 14 12:44:55.442324 kubelet[2611]: E0414 12:44:55.439405 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:56.362420 kubelet[2611]: E0414 12:44:56.362069 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:44:59.911945 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:44830.service - OpenSSH per-connection server daemon (10.0.0.1:44830). Apr 14 12:44:59.954692 kubelet[2611]: E0414 12:44:59.953747 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:02.479412 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 44830 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:45:02.635730 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:45:03.269272 systemd-logind[1450]: New session 21 of user core. Apr 14 12:45:03.360365 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 14 12:45:05.052045 kubelet[2611]: E0414 12:45:05.049521 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:05.366975 kubelet[2611]: E0414 12:45:05.356134 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:06.325552 kubelet[2611]: E0414 12:45:06.325375 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:07.877721 sshd[4315]: pam_unix(sshd:session): session closed for user core Apr 14 12:45:07.882402 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:44830.service: Deactivated successfully. Apr 14 12:45:07.973656 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 12:45:07.974253 systemd[1]: session-21.scope: Consumed 2.592s CPU time. Apr 14 12:45:08.058169 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Apr 14 12:45:08.073920 systemd-logind[1450]: Removed session 21. Apr 14 12:45:10.030616 kubelet[2611]: E0414 12:45:10.030026 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:13.137252 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:54968.service - OpenSSH per-connection server daemon (10.0.0.1:54968). Apr 14 12:45:13.556333 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 54968 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:45:13.578357 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:45:13.613317 systemd-logind[1450]: New session 22 of user core. Apr 14 12:45:13.726424 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 14 12:45:17.474468 kubelet[2611]: E0414 12:45:17.467173 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.323s" Apr 14 12:45:18.545870 sshd[4371]: pam_unix(sshd:session): session closed for user core Apr 14 12:45:18.674472 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:54968.service: Deactivated successfully. Apr 14 12:45:18.807420 systemd[1]: session-22.scope: Deactivated successfully. Apr 14 12:45:18.816750 systemd[1]: session-22.scope: Consumed 3.067s CPU time. Apr 14 12:45:18.860356 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Apr 14 12:45:19.008969 systemd-logind[1450]: Removed session 22. Apr 14 12:45:23.986022 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:49224.service - OpenSSH per-connection server daemon (10.0.0.1:49224). Apr 14 12:45:24.912535 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 49224 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:45:25.036778 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:45:25.754149 systemd-logind[1450]: New session 23 of user core. Apr 14 12:45:25.832808 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 14 12:45:27.613782 kubelet[2611]: E0414 12:45:27.453791 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.273s" Apr 14 12:45:31.186981 kubelet[2611]: E0414 12:45:31.185360 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.142s" Apr 14 12:45:32.172037 kubelet[2611]: E0414 12:45:32.168130 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:35.961525 kubelet[2611]: E0414 12:45:35.960287 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.687s" Apr 14 12:45:37.061636 kubelet[2611]: E0414 12:45:37.061207 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:39.210838 systemd[1]: cri-containerd-778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b.scope: Deactivated successfully. Apr 14 12:45:39.318691 systemd[1]: cri-containerd-778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b.scope: Consumed 10.536s CPU time. Apr 14 12:45:40.548167 kubelet[2611]: E0414 12:45:40.539582 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.491s" Apr 14 12:45:40.952683 kubelet[2611]: E0414 12:45:40.948731 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:42.451519 containerd[1466]: time="2026-04-14T12:45:42.444485685Z" level=info msg="shim disconnected" id=778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b namespace=k8s.io Apr 14 12:45:42.451519 containerd[1466]: time="2026-04-14T12:45:42.444764291Z" level=warning msg="cleaning up after shim disconnected" id=778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b namespace=k8s.io Apr 14 12:45:42.451519 containerd[1466]: time="2026-04-14T12:45:42.444773257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:45:42.541898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b-rootfs.mount: Deactivated successfully. Apr 14 12:45:44.500951 sshd[4429]: pam_unix(sshd:session): session closed for user core Apr 14 12:45:44.821373 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:49224.service: Deactivated successfully. Apr 14 12:45:44.920451 systemd[1]: session-23.scope: Deactivated successfully. Apr 14 12:45:44.921271 systemd[1]: session-23.scope: Consumed 6.597s CPU time. Apr 14 12:45:44.955737 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Apr 14 12:45:45.030888 systemd-logind[1450]: Removed session 23. Apr 14 12:45:45.293068 systemd[1]: cri-containerd-c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8.scope: Deactivated successfully. Apr 14 12:45:45.316260 systemd[1]: cri-containerd-c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8.scope: Consumed 11.646s CPU time. 
Apr 14 12:45:47.527573 kubelet[2611]: I0414 12:45:47.525736 2611 scope.go:122] "RemoveContainer" containerID="474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7" Apr 14 12:45:47.716895 kubelet[2611]: E0414 12:45:47.713138 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:47.735851 kubelet[2611]: I0414 12:45:47.729089 2611 scope.go:122] "RemoveContainer" containerID="778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" Apr 14 12:45:47.735851 kubelet[2611]: E0414 12:45:47.729349 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:47.749439 kubelet[2611]: E0414 12:45:47.749252 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(bd70d524e6bc561f2082b467706799ed)\"" pod="kube-system/kube-controller-manager-localhost" podUID="bd70d524e6bc561f2082b467706799ed" Apr 14 12:45:47.964206 containerd[1466]: time="2026-04-14T12:45:47.961559364Z" level=info msg="RemoveContainer for \"474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7\"" Apr 14 12:45:48.117487 containerd[1466]: time="2026-04-14T12:45:48.117374205Z" level=info msg="RemoveContainer for \"474e6866abfc55ca7a04b76aac7c35565018d999796bf452868de1b09224bad7\" returns successfully" Apr 14 12:45:48.629793 containerd[1466]: time="2026-04-14T12:45:48.619202843Z" level=info msg="shim disconnected" id=c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8 namespace=k8s.io Apr 14 12:45:48.635829 containerd[1466]: time="2026-04-14T12:45:48.635232015Z" level=warning msg="cleaning up after shim disconnected" id=c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8 namespace=k8s.io Apr 14 12:45:48.635829 containerd[1466]: time="2026-04-14T12:45:48.635492560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:45:48.635790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8-rootfs.mount: Deactivated successfully. Apr 14 12:45:48.890974 update_engine[1457]: I20260414 12:45:48.889484 1457 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 14 12:45:48.925742 update_engine[1457]: I20260414 12:45:48.895748 1457 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 14 12:45:48.925939 update_engine[1457]: I20260414 12:45:48.925850 1457 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 14 12:45:48.950497 update_engine[1457]: I20260414 12:45:48.948932 1457 omaha_request_params.cc:62] Current group set to lts Apr 14 12:45:48.954533 update_engine[1457]: I20260414 12:45:48.954485 1457 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 14 12:45:48.993996 update_engine[1457]: I20260414 12:45:48.960974 1457 update_attempter.cc:643] Scheduling an action processor start. 
Apr 14 12:45:48.993996 update_engine[1457]: I20260414 12:45:48.961381 1457 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 12:45:48.993996 update_engine[1457]: I20260414 12:45:48.967460 1457 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 14 12:45:49.009402 update_engine[1457]: I20260414 12:45:48.995397 1457 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 12:45:49.009402 update_engine[1457]: I20260414 12:45:48.995503 1457 omaha_request_action.cc:272] Request: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: Apr 14 12:45:49.009402 update_engine[1457]: I20260414 12:45:48.995513 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 12:45:49.034297 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 14 12:45:49.043060 update_engine[1457]: I20260414 12:45:49.041148 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 12:45:49.043060 update_engine[1457]: I20260414 12:45:49.041764 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 12:45:49.052416 update_engine[1457]: E20260414 12:45:49.052320 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 12:45:49.099183 update_engine[1457]: I20260414 12:45:49.095453 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 14 12:45:49.948527 containerd[1466]: time="2026-04-14T12:45:49.938100974Z" level=warning msg="cleanup warnings time=\"2026-04-14T12:45:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 12:45:50.012095 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:40670.service - OpenSSH per-connection server daemon (10.0.0.1:40670). 
Apr 14 12:45:50.143039 kubelet[2611]: E0414 12:45:50.140453 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:50.857093 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 40670 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:45:50.894421 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:45:51.152074 kubelet[2611]: I0414 12:45:51.151435 2611 scope.go:122] "RemoveContainer" containerID="a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb" Apr 14 12:45:51.241576 kubelet[2611]: I0414 12:45:51.152197 2611 scope.go:122] "RemoveContainer" containerID="c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8" Apr 14 12:45:51.241576 kubelet[2611]: E0414 12:45:51.152258 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:51.241576 kubelet[2611]: E0414 12:45:51.152459 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(3566c1d7ed03bb3c60facf009a5678dd)\"" pod="kube-system/kube-scheduler-localhost" podUID="3566c1d7ed03bb3c60facf009a5678dd" Apr 14 12:45:51.248205 systemd-logind[1450]: New session 24 of user core. Apr 14 12:45:51.456558 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 14 12:45:51.497011 containerd[1466]: time="2026-04-14T12:45:51.495966296Z" level=info msg="RemoveContainer for \"a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb\"" Apr 14 12:45:51.572077 containerd[1466]: time="2026-04-14T12:45:51.571785309Z" level=info msg="RemoveContainer for \"a4ba45763c1b8e90095512ed73a3ecf8ddc9e7aa1d6a7198f9506d3b4bbebadb\" returns successfully" Apr 14 12:45:54.654561 kubelet[2611]: I0414 12:45:54.654088 2611 scope.go:122] "RemoveContainer" containerID="778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" Apr 14 12:45:54.935331 kubelet[2611]: E0414 12:45:54.700281 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:45:54.935331 kubelet[2611]: E0414 12:45:54.894682 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(bd70d524e6bc561f2082b467706799ed)\"" pod="kube-system/kube-controller-manager-localhost" podUID="bd70d524e6bc561f2082b467706799ed" Apr 14 12:45:58.903695 update_engine[1457]: I20260414 12:45:58.896939 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 12:45:59.003232 update_engine[1457]: I20260414 12:45:58.960880 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 12:45:59.069479 update_engine[1457]: I20260414 12:45:59.069434 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 12:45:59.111804 update_engine[1457]: E20260414 12:45:59.108385 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 12:45:59.118050 update_engine[1457]: I20260414 12:45:59.115038 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 14 12:45:59.294877 kubelet[2611]: E0414 12:45:59.266722 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s" Apr 14 12:46:00.379420 kubelet[2611]: I0414 12:46:00.362442 2611 scope.go:122] "RemoveContainer" containerID="c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8" Apr 14 12:46:00.724956 kubelet[2611]: E0414 12:46:00.701392 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:01.019222 kubelet[2611]: E0414 12:46:01.015943 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(3566c1d7ed03bb3c60facf009a5678dd)\"" pod="kube-system/kube-scheduler-localhost" podUID="3566c1d7ed03bb3c60facf009a5678dd" Apr 14 12:46:01.573794 kubelet[2611]: E0414 12:46:01.573234 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.373s" Apr 14 12:46:02.156561 kubelet[2611]: I0414 12:46:02.156215 2611 scope.go:122] "RemoveContainer" containerID="778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" Apr 14 12:46:02.156561 kubelet[2611]: E0414 12:46:02.156652 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:02.650170 sshd[4552]: pam_unix(sshd:session): session closed for user core Apr 14 12:46:02.904564 containerd[1466]: time="2026-04-14T12:46:02.849546590Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Apr 14 12:46:03.393229 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:40670.service: Deactivated successfully. Apr 14 12:46:03.588782 systemd[1]: session-24.scope: Deactivated successfully. Apr 14 12:46:03.591469 systemd[1]: session-24.scope: Consumed 6.981s CPU time. Apr 14 12:46:04.758064 containerd[1466]: time="2026-04-14T12:46:04.749266943Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\"" Apr 14 12:46:04.864771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810514233.mount: Deactivated successfully. Apr 14 12:46:05.053173 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. Apr 14 12:46:05.133487 systemd-logind[1450]: Removed session 24. 
Apr 14 12:46:05.141090 containerd[1466]: time="2026-04-14T12:46:05.135062338Z" level=info msg="StartContainer for \"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\"" Apr 14 12:46:05.739356 kubelet[2611]: E0414 12:46:05.738845 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.009s" Apr 14 12:46:07.162113 kubelet[2611]: I0414 12:46:07.160271 2611 scope.go:122] "RemoveContainer" containerID="c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8" Apr 14 12:46:07.162113 kubelet[2611]: E0414 12:46:07.160421 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:07.332957 kubelet[2611]: E0414 12:46:07.328379 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:07.848487 containerd[1466]: time="2026-04-14T12:46:07.843771448Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}" Apr 14 12:46:08.432810 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:56498.service - OpenSSH per-connection server daemon (10.0.0.1:56498). Apr 14 12:46:08.690769 containerd[1466]: time="2026-04-14T12:46:08.689090945Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\"" Apr 14 12:46:08.706634 containerd[1466]: time="2026-04-14T12:46:08.706516663Z" level=info msg="StartContainer for \"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\"" Apr 14 12:46:08.882024 update_engine[1457]: I20260414 12:46:08.880924 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 12:46:08.882024 update_engine[1457]: I20260414 12:46:08.881647 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 12:46:08.882024 update_engine[1457]: I20260414 12:46:08.881971 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 12:46:08.904770 update_engine[1457]: E20260414 12:46:08.898397 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 12:46:08.904770 update_engine[1457]: I20260414 12:46:08.904492 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 14 12:46:11.440181 sshd[4626]: Accepted publickey for core from 10.0.0.1 port 56498 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:46:11.471162 systemd[1]: Started cri-containerd-45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05.scope - libcontainer container 45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05. Apr 14 12:46:11.475545 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:46:11.496720 systemd[1]: Started cri-containerd-463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe.scope - libcontainer container 463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe. Apr 14 12:46:11.657490 systemd-logind[1450]: New session 25 of user core. Apr 14 12:46:11.678294 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 14 12:46:14.119370 containerd[1466]: time="2026-04-14T12:46:14.116437196Z" level=info msg="StartContainer for \"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" returns successfully" Apr 14 12:46:14.911548 containerd[1466]: time="2026-04-14T12:46:14.911093084Z" level=info msg="StartContainer for \"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\" returns successfully" Apr 14 12:46:16.859785 kubelet[2611]: E0414 12:46:16.847480 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.703s" Apr 14 12:46:18.892096 update_engine[1457]: I20260414 12:46:18.891642 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 12:46:18.999411 update_engine[1457]: I20260414 12:46:18.954483 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 12:46:18.999411 update_engine[1457]: I20260414 12:46:18.955080 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 12:46:19.000022 kubelet[2611]: E0414 12:46:18.956806 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.102s" Apr 14 12:46:19.012520 update_engine[1457]: E20260414 12:46:19.010968 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 12:46:19.012520 update_engine[1457]: I20260414 12:46:19.011351 1457 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 12:46:19.012520 update_engine[1457]: I20260414 12:46:19.011363 1457 omaha_request_action.cc:617] Omaha request response: Apr 14 12:46:19.031803 update_engine[1457]: E20260414 12:46:19.020099 1457 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 14 12:46:19.031803 update_engine[1457]: I20260414 12:46:19.020519 1457 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 14 12:46:19.031803 update_engine[1457]: I20260414 12:46:19.020534 1457 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 12:46:19.031803 update_engine[1457]: I20260414 12:46:19.020541 1457 update_attempter.cc:306] Processing Done. Apr 14 12:46:19.031803 update_engine[1457]: E20260414 12:46:19.028187 1457 update_attempter.cc:619] Update failed. Apr 14 12:46:19.031803 update_engine[1457]: I20260414 12:46:19.028281 1457 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 14 12:46:19.031803 update_engine[1457]: I20260414 12:46:19.028286 1457 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 14 12:46:19.031803 update_engine[1457]: I20260414 12:46:19.028448 1457 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 14 12:46:19.032493 update_engine[1457]: I20260414 12:46:19.032465 1457 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 12:46:19.038397 update_engine[1457]: I20260414 12:46:19.033197 1457 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 12:46:19.038397 update_engine[1457]: I20260414 12:46:19.033212 1457 omaha_request_action.cc:272] Request: Apr 14 12:46:19.038397 update_engine[1457]: Apr 14 12:46:19.038397 update_engine[1457]: Apr 14 12:46:19.038397 update_engine[1457]: Apr 14 12:46:19.038397 update_engine[1457]: Apr 14 12:46:19.038397 update_engine[1457]: Apr 14 12:46:19.038397 update_engine[1457]: Apr 14 12:46:19.038397 update_engine[1457]: I20260414 12:46:19.033218 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 12:46:19.047242 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 14 12:46:19.070177 update_engine[1457]: I20260414 12:46:19.060688 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 12:46:19.070177 update_engine[1457]: I20260414 12:46:19.064221 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 12:46:19.206117 update_engine[1457]: E20260414 12:46:19.194979 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.197252 1457 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.206784 1457 omaha_request_action.cc:617] Omaha request response: Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.206818 1457 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.206825 1457 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.206833 1457 update_attempter.cc:306] Processing Done. Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.206916 1457 update_attempter.cc:310] Error event sent. 
Apr 14 12:46:19.212284 update_engine[1457]: I20260414 12:46:19.210750 1457 update_check_scheduler.cc:74] Next update check in 44m4s Apr 14 12:46:19.242833 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 14 12:46:20.898810 kubelet[2611]: E0414 12:46:20.897651 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.941s" Apr 14 12:46:21.092752 kubelet[2611]: E0414 12:46:21.085197 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:21.262497 kubelet[2611]: E0414 12:46:21.261611 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:22.328128 kubelet[2611]: E0414 12:46:22.324874 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:23.153248 sshd[4626]: pam_unix(sshd:session): session closed for user core Apr 14 12:46:23.359203 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:56498.service: Deactivated successfully. Apr 14 12:46:23.431929 systemd[1]: session-25.scope: Deactivated successfully. Apr 14 12:46:23.432620 systemd[1]: session-25.scope: Consumed 2.741s CPU time. Apr 14 12:46:23.442086 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. Apr 14 12:46:23.454771 systemd-logind[1450]: Removed session 25. Apr 14 12:46:24.686885 kubelet[2611]: E0414 12:46:24.685529 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:27.034447 kubelet[2611]: E0414 12:46:27.031702 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:28.422800 kubelet[2611]: E0414 12:46:28.422191 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:28.557843 systemd[1]: Started sshd@25-10.0.0.43:22-10.0.0.1:45014.service - OpenSSH per-connection server daemon (10.0.0.1:45014). Apr 14 12:46:29.921487 kubelet[2611]: E0414 12:46:29.920535 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:30.311287 kubelet[2611]: E0414 12:46:30.307210 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:30.608742 sshd[4768]: Accepted publickey for core from 10.0.0.1 port 45014 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:46:30.613079 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:46:30.931442 systemd-logind[1450]: New session 26 of user core. Apr 14 12:46:30.983544 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 14 12:46:34.861510 sshd[4768]: pam_unix(sshd:session): session closed for user core Apr 14 12:46:35.030516 systemd[1]: sshd@25-10.0.0.43:22-10.0.0.1:45014.service: Deactivated successfully. Apr 14 12:46:35.184522 systemd[1]: session-26.scope: Deactivated successfully. Apr 14 12:46:35.189380 systemd[1]: session-26.scope: Consumed 2.394s CPU time. Apr 14 12:46:35.332088 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. Apr 14 12:46:35.393907 systemd-logind[1450]: Removed session 26. Apr 14 12:46:38.055530 kubelet[2611]: E0414 12:46:38.055070 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:40.253485 systemd[1]: Started sshd@26-10.0.0.43:22-10.0.0.1:49672.service - OpenSSH per-connection server daemon (10.0.0.1:49672). Apr 14 12:46:41.634661 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 49672 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:46:41.893668 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:46:42.356177 systemd-logind[1450]: New session 27 of user core. Apr 14 12:46:42.367466 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 14 12:46:43.111497 kubelet[2611]: E0414 12:46:43.111267 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:46:53.598423 kubelet[2611]: E0414 12:46:53.595472 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.534s" Apr 14 12:46:57.055429 kubelet[2611]: E0414 12:46:57.047536 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.995s" Apr 14 12:46:59.530108 systemd[1]: cri-containerd-463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe.scope: Deactivated successfully. Apr 14 12:46:59.655887 systemd[1]: cri-containerd-463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe.scope: Consumed 9.410s CPU time. Apr 14 12:47:09.009305 kubelet[2611]: E0414 12:47:06.653552 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:47:10.421494 containerd[1466]: time="2026-04-14T12:47:10.420331511Z" level=error msg="failed to handle container TaskExit event container_id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" pid:4663 exit_status:1 exited_at:{seconds:1776170819 nanos:661134422}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:47:11.450996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe-rootfs.mount: Deactivated successfully. Apr 14 12:47:11.759048 systemd[1]: cri-containerd-45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05.scope: Deactivated successfully. Apr 14 12:47:11.831411 systemd[1]: cri-containerd-45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05.scope: Consumed 13.032s CPU time. 
Apr 14 12:47:12.458425 containerd[1466]: time="2026-04-14T12:47:12.450202480Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 14 12:47:12.542126 containerd[1466]: time="2026-04-14T12:47:12.489819936Z" level=info msg="TaskExit event container_id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" pid:4663 exit_status:1 exited_at:{seconds:1776170819 nanos:661134422}" Apr 14 12:47:15.718290 kubelet[2611]: E0414 12:47:15.718104 2611 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3566c1d7ed03bb3c60facf009a5678dd.slice/cri-containerd-45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05.scope\": RecentStats: unable to find data in memory cache]" Apr 14 12:47:18.002045 kubelet[2611]: E0414 12:47:17.959489 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.849s" Apr 14 12:47:19.450896 kubelet[2611]: E0414 12:47:19.354990 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 14 12:47:21.171940 kubelet[2611]: E0414 12:47:21.171345 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:22.708809 containerd[1466]: time="2026-04-14T12:47:22.672573098Z" level=error msg="Failed to handle backOff event container_id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" pid:4663 exit_status:1 exited_at:{seconds:1776170819 nanos:661134422} for 463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:47:23.273501 containerd[1466]: time="2026-04-14T12:47:23.242197965Z" level=error msg="failed to handle container TaskExit event container_id:\"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\" id:\"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\" pid:4662 exit_status:1 exited_at:{seconds:1776170833 nanos:51460285}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:47:23.907518 containerd[1466]: time="2026-04-14T12:47:23.905364698Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 14 12:47:24.802117 containerd[1466]: time="2026-04-14T12:47:24.756036518Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 14 12:47:25.128018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05-rootfs.mount: Deactivated successfully. 
Apr 14 12:47:25.270903 containerd[1466]: time="2026-04-14T12:47:25.252485847Z" level=info msg="TaskExit event container_id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" pid:4663 exit_status:1 exited_at:{seconds:1776170819 nanos:661134422}" Apr 14 12:47:29.788428 kubelet[2611]: E0414 12:47:29.781023 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 14 12:47:31.389950 kubelet[2611]: I0414 12:47:31.387030 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.351343 kubelet[2611]: I0414 12:47:31.406399 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.351343 kubelet[2611]: I0414 12:47:31.449752 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.358159 kubelet[2611]: I0414 12:47:31.458044 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.369428 kubelet[2611]: I0414 12:47:31.920423 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.369428 kubelet[2611]: I0414 12:47:31.412524 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.369428 kubelet[2611]: I0414 12:47:31.905644 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.518629 kubelet[2611]: I0414 12:47:32.517951 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.518629 kubelet[2611]: I0414 12:47:32.518037 2611 reflector.go:578] "Warning: watch ended with error" 
reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:32.518629 kubelet[2611]: I0414 12:47:32.518097 2611 reflector.go:578] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:47:33.924396 kubelet[2611]: E0414 12:47:31.788905 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": http2: client connection lost" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:47:35.354467 containerd[1466]: time="2026-04-14T12:47:35.340541412Z" level=error msg="Failed to handle backOff event container_id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" pid:4663 exit_status:1 exited_at:{seconds:1776170819 nanos:661134422} for 463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:47:35.675296 containerd[1466]: time="2026-04-14T12:47:35.662424012Z" level=info msg="TaskExit event container_id:\"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\" id:\"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\" pid:4662 exit_status:1 exited_at:{seconds:1776170833 nanos:51460285}" Apr 14 12:47:35.877460 kubelet[2611]: E0414 12:47:35.869121 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.26s" Apr 14 12:47:36.327366 containerd[1466]: time="2026-04-14T12:47:36.324151647Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Apr 14 12:47:36.520021 kubelet[2611]: E0414 12:47:36.514209 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:36.937179 kubelet[2611]: E0414 12:47:36.935186 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:37.133219 kubelet[2611]: E0414 12:47:37.085502 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:37.595991 kubelet[2611]: E0414 12:47:37.595127 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.726s" Apr 14 12:47:37.825876 kubelet[2611]: E0414 12:47:37.825479 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:39.281878 containerd[1466]: time="2026-04-14T12:47:39.256413727Z" level=info msg="shim disconnected" id=45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05 namespace=k8s.io Apr 14 12:47:39.281878 containerd[1466]: time="2026-04-14T12:47:39.279976543Z" level=warning msg="cleaning up after shim disconnected" id=45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05 namespace=k8s.io Apr 14 12:47:39.281878 containerd[1466]: time="2026-04-14T12:47:39.280296328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:47:39.619950 kubelet[2611]: E0414 12:47:39.614533 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.077s" Apr 14 12:47:39.966514 kubelet[2611]: E0414 12:47:39.900798 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:40.194880 kubelet[2611]: E0414 12:47:40.189790 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:40.817152 kubelet[2611]: E0414 12:47:40.775330 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:47:43.035557 kubelet[2611]: E0414 12:47:43.023436 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.879s" Apr 14 12:47:43.453252 containerd[1466]: time="2026-04-14T12:47:43.149557159Z" level=info msg="TaskExit event container_id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" id:\"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" pid:4663 exit_status:1 exited_at:{seconds:1776170819 nanos:661134422}" Apr 14 12:47:43.934345 kubelet[2611]: E0414 12:47:43.933790 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:44.047447 kubelet[2611]: E0414 12:47:44.046186 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:46.539406 kubelet[2611]: E0414 12:47:46.534469 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.913s" Apr 14 12:47:47.073051 kubelet[2611]: E0414 12:47:46.800360 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:47:49.044238 
sshd[4813]: pam_unix(sshd:session): session closed for user core Apr 14 12:47:49.540186 systemd[1]: sshd@26-10.0.0.43:22-10.0.0.1:49672.service: Deactivated successfully. Apr 14 12:47:49.674958 systemd[1]: session-27.scope: Deactivated successfully. Apr 14 12:47:49.711143 systemd[1]: session-27.scope: Consumed 5.594s CPU time. Apr 14 12:47:49.801527 kubelet[2611]: E0414 12:47:49.651524 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:47:49.949794 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. Apr 14 12:47:50.070760 systemd-logind[1450]: Removed session 27. Apr 14 12:47:50.538192 containerd[1466]: time="2026-04-14T12:47:50.537734475Z" level=info msg="shim disconnected" id=463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe namespace=k8s.io Apr 14 12:47:50.538702 containerd[1466]: time="2026-04-14T12:47:50.538682040Z" level=warning msg="cleaning up after shim disconnected" id=463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe namespace=k8s.io Apr 14 12:47:50.538785 containerd[1466]: time="2026-04-14T12:47:50.538775553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:47:50.786433 kubelet[2611]: E0414 12:47:50.776371 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.16s" Apr 14 12:47:51.349047 kubelet[2611]: E0414 12:47:51.348682 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:47:51.365054 kubelet[2611]: I0414 12:47:51.362098 2611 controller.go:171] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 14 12:47:51.818187 kubelet[2611]: I0414 12:47:51.754429 2611 scope.go:122] "RemoveContainer" containerID="c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8" Apr 14 12:47:51.851702 kubelet[2611]: E0414 12:47:51.839559 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:51.944469 kubelet[2611]: I0414 12:47:51.944241 2611 scope.go:122] "RemoveContainer" containerID="45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05" Apr 14 12:47:51.944469 kubelet[2611]: E0414 12:47:51.944488 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:47:51.945234 kubelet[2611]: E0414 12:47:51.944778 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(3566c1d7ed03bb3c60facf009a5678dd)\"" pod="kube-system/kube-scheduler-localhost" podUID="3566c1d7ed03bb3c60facf009a5678dd" Apr 14 12:47:51.958741 containerd[1466]: time="2026-04-14T12:47:51.958408350Z" level=info msg="RemoveContainer for \"c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8\"" Apr 14 12:47:52.537019 containerd[1466]: time="2026-04-14T12:47:52.536719073Z" level=info msg="RemoveContainer for \"c5ac755f0da796fbf6500d0318179f2bc1f8a968974534ac1af3050ef38cb3c8\" returns successfully" Apr 14 12:47:53.426139 containerd[1466]: time="2026-04-14T12:47:53.410190345Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe Apr 14 12:47:55.051061 systemd[1]: Started sshd@27-10.0.0.43:22-10.0.0.1:54424.service - OpenSSH per-connection server daemon (10.0.0.1:54424). Apr 14 12:47:56.072487 kubelet[2611]: I0414 12:47:56.042313 2611 scope.go:122] "RemoveContainer" containerID="45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05" Apr 14 12:47:56.072487 kubelet[2611]: E0414 12:47:56.043052 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:48:01.520744 kubelet[2611]: E0414 12:48:00.185183 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:48:01.716043 kubelet[2611]: E0414 12:48:01.714646 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="200ms" Apr 14 12:48:02.738772 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 54424 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:48:03.317765 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:48:04.954285 systemd-logind[1450]: New session 28 of user core. Apr 14 12:48:05.744188 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 14 12:48:06.444439 containerd[1466]: time="2026-04-14T12:48:06.442170054Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Apr 14 12:48:14.463965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810459659.mount: Deactivated successfully. Apr 14 12:48:15.336462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389211364.mount: Deactivated successfully. 
Apr 14 12:48:15.868300 containerd[1466]: time="2026-04-14T12:48:15.860134991Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\"" Apr 14 12:48:16.960980 kubelet[2611]: E0414 12:48:16.874068 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 14 12:48:18.248926 kubelet[2611]: E0414 12:48:16.958531 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:48:24.639718 kubelet[2611]: I0414 12:48:19.208369 2611 request.go:752] "Waited before sending request" delay="1.521422813s" reason="retries: 3, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dlocalhost&resourceVersion=980&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=9m58s&timeoutSeconds=598&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dlocalhost&resourceVersion=980&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=9m58s&timeoutSeconds=598&watch=true" Apr 14 12:48:25.547433 kubelet[2611]: E0414 12:48:23.716368 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:48:26.839268 containerd[1466]: time="2026-04-14T12:48:26.818362425Z" level=info msg="StartContainer for \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\"" Apr 14 12:48:31.106754 kubelet[2611]: E0414 12:48:31.104917 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 14 12:48:35.034412 kubelet[2611]: E0414 12:48:34.323499 2611 status_manager.go:1045] "Failed to get status for pod" 
err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:48:43.834052 kubelet[2611]: E0414 12:48:43.825469 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="1.6s" Apr 14 12:48:44.371856 kubelet[2611]: I0414 12:48:43.923324 2611 scope.go:122] "RemoveContainer" containerID="778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" Apr 14 12:48:51.290986 kubelet[2611]: E0414 12:48:51.248432 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:48:55.620305 kubelet[2611]: E0414 12:48:50.300519 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:48:59.732541 kubelet[2611]: E0414 12:48:59.704430 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="3.2s" Apr 14 12:49:06.939226 kubelet[2611]: I0414 12:49:06.632474 2611 request.go:752] "Waited before sending request" delay="2.863562524s" reason="retries: 5, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=41m2s&timeoutSeconds=2462&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=41m2s&timeoutSeconds=2462&watch=true" Apr 14 12:49:08.046439 kubelet[2611]: E0414 12:49:08.030893 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m11.988s" Apr 14 12:49:18.027489 sshd[5060]: pam_unix(sshd:session): session closed for user core Apr 14 12:49:18.789546 systemd[1]: 
sshd@27-10.0.0.43:22-10.0.0.1:54424.service: Deactivated successfully. Apr 14 12:49:18.857415 systemd[1]: sshd@27-10.0.0.43:22-10.0.0.1:54424.service: Consumed 2.051s CPU time. Apr 14 12:49:19.446399 systemd[1]: session-28.scope: Deactivated successfully. Apr 14 12:49:19.554208 systemd[1]: session-28.scope: Consumed 35.814s CPU time. Apr 14 12:49:19.997369 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit. Apr 14 12:49:20.204971 systemd-logind[1450]: Removed session 28. Apr 14 12:49:20.554678 kubelet[2611]: E0414 12:49:20.219471 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 14 12:49:20.720310 containerd[1466]: time="2026-04-14T12:49:20.718617156Z" level=info msg="RemoveContainer for \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\"" Apr 14 12:49:22.455278 containerd[1466]: time="2026-04-14T12:49:22.442417719Z" level=info msg="RemoveContainer for \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\" returns successfully" Apr 14 12:49:23.675840 kubelet[2611]: E0414 12:49:23.668217 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:49:24.540956 kubelet[2611]: E0414 12:49:22.932570 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:49:25.968872 systemd[1]: Started sshd@28-10.0.0.43:22-10.0.0.1:35078.service - OpenSSH per-connection server daemon (10.0.0.1:35078). 
Apr 14 12:49:28.747310 kubelet[2611]: E0414 12:49:28.647380 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:49:34.455988 kubelet[2611]: I0414 12:49:34.438937 2611 scope.go:122] "RemoveContainer" containerID="45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05" Apr 14 12:49:43.541150 kubelet[2611]: E0414 12:49:43.512969 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:49:45.847337 containerd[1466]: time="2026-04-14T12:49:45.737977930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:49:46.465475 containerd[1466]: time="2026-04-14T12:49:45.873208502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:49:46.739535 containerd[1466]: time="2026-04-14T12:49:46.466628024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:49:47.143155 containerd[1466]: time="2026-04-14T12:49:47.035541570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:49:48.821030 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 35078 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:49:50.470710 kubelet[2611]: E0414 12:49:50.412983 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:49:50.772295 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:49:55.653546 systemd-logind[1450]: New session 29 of user core. Apr 14 12:49:57.040867 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 14 12:50:07.652059 kubelet[2611]: E0414 12:50:07.648836 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:50:15.230548 kubelet[2611]: E0414 12:50:15.226091 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:50:16.390677 systemd[1]: Started cri-containerd-433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d.scope - libcontainer container 433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d. Apr 14 12:50:23.945165 kubelet[2611]: E0414 12:50:19.643064 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:50:28.521576 kubelet[2611]: E0414 12:50:27.570572 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 12:50:29.442002 kubelet[2611]: E0414 12:50:29.039174 2611 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d" Apr 14 12:50:30.457473 kubelet[2611]: E0414 12:50:30.442305 2611 kuberuntime_manager.go:1664] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-localhost_kube-system(3566c1d7ed03bb3c60facf009a5678dd): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 14 12:50:32.231671 containerd[1466]: time="2026-04-14T12:50:31.782580172Z" level=info msg="shim disconnected" id=433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d namespace=k8s.io Apr 14 12:50:32.638898 containerd[1466]: time="2026-04-14T12:50:31.827558172Z" level=error msg="Failed to pipe stderr of container \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\"" error="reading from a closed fifo" Apr 14 12:50:32.638898 containerd[1466]: time="2026-04-14T12:50:32.571192705Z" level=warning msg="cleaning up after shim disconnected" id=433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d 
namespace=k8s.io Apr 14 12:50:32.638898 containerd[1466]: time="2026-04-14T12:50:32.571564423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:50:32.998876 containerd[1466]: time="2026-04-14T12:50:32.130454512Z" level=error msg="Failed to pipe stdout of container \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\"" error="reading from a closed fifo" Apr 14 12:50:33.419705 kubelet[2611]: E0414 12:50:32.401131 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="3566c1d7ed03bb3c60facf009a5678dd" Apr 14 12:50:34.501264 containerd[1466]: time="2026-04-14T12:50:34.375565468Z" level=error msg="StartContainer for \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\" failed" error="failed to create containerd task: failed to create shim task: context deadline exceeded: unknown" Apr 14 12:50:38.385848 containerd[1466]: time="2026-04-14T12:50:38.122439104Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d delete" error="signal: killed" namespace=k8s.io Apr 14 12:50:38.385848 containerd[1466]: time="2026-04-14T12:50:38.226084126Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d namespace=k8s.io Apr 14 12:50:41.672695 kubelet[2611]: E0414 12:50:40.630826 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:50:44.728504 containerd[1466]: time="2026-04-14T12:50:44.727395740Z" level=info msg="RemoveContainer for \"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\"" Apr 14 12:50:44.948907 kubelet[2611]: E0414 12:50:44.824016 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:50:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:50:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:50:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:50:24Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 14 12:50:45.535907 kubelet[2611]: I0414 12:50:43.915960 2611 request.go:752] "Waited before sending request" delay="1.082308903s" reason="retries: 10, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1004&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeoutSeconds=506&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1004&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeoutSeconds=506&watch=true" Apr 14 12:50:46.967954 systemd[1]: cri-containerd-433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d.scope: Deactivated successfully. Apr 14 12:50:47.013251 systemd[1]: cri-containerd-433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d.scope: Consumed 4.331s CPU time. Apr 14 12:50:47.124083 containerd[1466]: time="2026-04-14T12:50:47.122817549Z" level=info msg="RemoveContainer for \"45c377c23bba339391d621103735ee33ebbfd684c88edc16dfac008900bc7b05\" returns successfully" Apr 14 12:50:48.715493 kubelet[2611]: E0414 12:50:48.675450 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:50:49.756292 kubelet[2611]: E0414 12:50:47.440162 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:51:01.706554 kubelet[2611]: E0414 12:51:00.696238 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:51:05.694415 kubelet[2611]: E0414 12:51:05.592083 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:51:07.832192 kubelet[2611]: E0414 12:51:05.998030 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m45.753s" Apr 14 12:51:09.459006 kubelet[2611]: I0414 12:51:08.915459 2611 scope.go:122] "RemoveContainer" containerID="778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" Apr 14 
12:51:10.840241 kubelet[2611]: E0414 12:51:08.934151 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 12:51:15.150518 kubelet[2611]: E0414 12:51:15.076079 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:51:15.922485 kubelet[2611]: E0414 12:51:14.954577 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:51:16.549199 kubelet[2611]: E0414 12:51:15.910264 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:51:21.432458 kubelet[2611]: I0414 12:51:21.332317 2611 scope.go:122] "RemoveContainer" containerID="463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe" Apr 14 12:51:22.567516 containerd[1466]: time="2026-04-14T12:51:22.558996500Z" level=info msg="StopContainer for \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" with timeout 30 (s)" Apr 14 12:51:23.507521 containerd[1466]: time="2026-04-14T12:51:22.559312931Z" level=error msg="ContainerStatus for \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\": not found" Apr 14 12:51:23.507521 containerd[1466]: time="2026-04-14T12:51:23.129459040Z" level=info msg="Stop container \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" with signal terminated" Apr 14 12:51:25.002700 kubelet[2611]: E0414 12:51:24.940305 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:51:26.174108 kubelet[2611]: E0414 12:51:24.717268 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:51:29.149905 kubelet[2611]: E0414 12:51:29.148202 
2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\": not found" containerID="778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b" Apr 14 12:51:35.260255 kubelet[2611]: I0414 12:51:29.150215 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b"} err="failed to get container status \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\": rpc error: code = NotFound desc = an error occurred when try to find container \"778a61b4dc0aec05b016d9a099f85c3b8b3531417291a1fe6809c5d0a256c99b\": not found" Apr 14 12:51:36.373363 containerd[1466]: time="2026-04-14T12:51:36.314002141Z" level=info msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" with timeout 30 (s)" Apr 14 12:51:36.965769 containerd[1466]: time="2026-04-14T12:51:36.964133890Z" level=info msg="Stop container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" with signal terminated" Apr 14 12:51:38.991537 kubelet[2611]: E0414 12:51:36.518155 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:51:42.628214 kubelet[2611]: E0414 12:51:42.626195 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:51:44.328934 kubelet[2611]: I0414 12:51:42.810268 2611 request.go:752] "Waited before sending request" delay="1.058663508s" reason="retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-flannel/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=54m31s&timeoutSeconds=3271&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/namespaces/kube-flannel/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=54m31s&timeoutSeconds=3271&watch=true" Apr 14 12:51:44.975359 sshd[5165]: pam_unix(sshd:session): session closed for user core Apr 14 12:51:46.014987 systemd[1]: sshd@28-10.0.0.43:22-10.0.0.1:35078.service: Deactivated successfully. Apr 14 12:51:46.123062 systemd[1]: sshd@28-10.0.0.43:22-10.0.0.1:35078.service: Consumed 6.095s CPU time. Apr 14 12:51:46.957285 systemd[1]: session-29.scope: Deactivated successfully. Apr 14 12:51:47.058770 systemd[1]: session-29.scope: Consumed 53.770s CPU time. Apr 14 12:51:47.273166 systemd-logind[1450]: Session 29 logged out. Waiting for processes to exit. Apr 14 12:51:48.143367 systemd-logind[1450]: Removed session 29. 
Apr 14 12:51:49.423660 containerd[1466]: time="2026-04-14T12:51:49.351401020Z" level=info msg="StopContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" with timeout 30 (s)" Apr 14 12:51:49.928534 containerd[1466]: time="2026-04-14T12:51:49.637163908Z" level=info msg="Stop container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" with signal terminated" Apr 14 12:51:51.350001 kubelet[2611]: E0414 12:51:51.341729 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:51:53.160038 systemd[1]: Started sshd@29-10.0.0.43:22-10.0.0.1:37634.service - OpenSSH per-connection server daemon (10.0.0.1:37634). Apr 14 12:51:57.195344 systemd[1]: cri-containerd-deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d.scope: Deactivated successfully. Apr 14 12:51:57.257010 systemd[1]: cri-containerd-deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d.scope: Consumed 1min 19.462s CPU time. Apr 14 12:51:59.634230 kubelet[2611]: E0414 12:51:59.542328 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:52:01.231333 kubelet[2611]: E0414 12:52:00.422380 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:52:02.775148 kubelet[2611]: E0414 12:52:02.762342 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:52:04.370322 containerd[1466]: time="2026-04-14T12:52:04.353380367Z" level=info msg="Kill container \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\"" Apr 14 12:52:04.937055 kubelet[2611]: E0414 12:52:02.216180 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:52:05.509533 systemd[1]: cri-containerd-8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3.scope: Deactivated successfully. 
Apr 14 12:52:05.644001 systemd[1]: cri-containerd-8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3.scope: Consumed 1min 23.242s CPU time. Apr 14 12:52:06.345198 kubelet[2611]: E0414 12:52:03.543148 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 12:52:09.606570 containerd[1466]: time="2026-04-14T12:52:09.605475595Z" level=error msg="failed to handle container TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" error="failed to stop container: context deadline exceeded: unknown" Apr 14 12:52:11.828280 containerd[1466]: time="2026-04-14T12:52:11.775462549Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:52:12.827460 containerd[1466]: time="2026-04-14T12:52:12.212713014Z" level=error msg="ttrpc: received message on inactive stream" stream=59 Apr 14 12:52:12.827460 containerd[1466]: time="2026-04-14T12:52:12.291838702Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 14 12:52:20.877440 containerd[1466]: time="2026-04-14T12:52:20.875796870Z" level=info msg="Kill container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\"" Apr 14 12:52:21.843005 containerd[1466]: time="2026-04-14T12:52:21.476966419Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:52:21.943569 kubelet[2611]: E0414 12:52:21.937271 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:52:22.832388 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
Apr 14 12:52:23.941866 containerd[1466]: time="2026-04-14T12:52:23.212413813Z" level=error msg="ttrpc: received message on inactive stream" stream=69 Apr 14 12:52:23.941866 containerd[1466]: time="2026-04-14T12:52:23.593390998Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:52:24.359454 containerd[1466]: time="2026-04-14T12:52:23.749451852Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 14 12:52:25.807188 containerd[1466]: time="2026-04-14T12:52:25.672787073Z" level=error msg="failed to handle container TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" error="failed to stop container: context deadline exceeded: unknown" Apr 14 12:52:26.997143 containerd[1466]: time="2026-04-14T12:52:26.994267418Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:52:28.031445 containerd[1466]: time="2026-04-14T12:52:27.093523955Z" level=warning msg="unknown status" status=0 Apr 14 12:52:28.564576 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 37634 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:52:28.920198 containerd[1466]: time="2026-04-14T12:52:28.132204747Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 14 12:52:28.920198 containerd[1466]: time="2026-04-14T12:52:28.132447227Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 14 12:52:29.511805 containerd[1466]: time="2026-04-14T12:52:29.507362176Z" level=info msg="Kill container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\"" Apr 14 12:52:30.264852 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:52:30.730326 containerd[1466]: time="2026-04-14T12:52:30.729979317Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:52:30.967252 containerd[1466]: time="2026-04-14T12:52:30.825500060Z" level=warning msg="unknown status" status=0 Apr 14 12:52:33.636775 containerd[1466]: time="2026-04-14T12:52:33.620340188Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:52:34.358863 containerd[1466]: time="2026-04-14T12:52:33.674092693Z" level=info msg="TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" Apr 14 12:52:34.358863 containerd[1466]: time="2026-04-14T12:52:33.731440482Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:4,}" Apr 14 12:52:34.101943 systemd-logind[1450]: New session 30 of user core. Apr 14 12:52:35.102377 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 14 12:52:36.746775 containerd[1466]: time="2026-04-14T12:52:36.714205333Z" level=error msg="get state for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="context deadline exceeded: unknown" Apr 14 12:52:36.935507 containerd[1466]: time="2026-04-14T12:52:36.928633461Z" level=warning msg="unknown status" status=0 Apr 14 12:52:40.079354 kubelet[2611]: E0414 12:52:38.341303 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:52:40.725368 containerd[1466]: time="2026-04-14T12:52:40.068396553Z" level=error msg="get state for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="context deadline exceeded: unknown" Apr 14 12:52:40.949130 containerd[1466]: time="2026-04-14T12:52:40.541396465Z" level=warning msg="unknown status" status=0 Apr 14 12:52:44.159207 containerd[1466]: time="2026-04-14T12:52:44.048396814Z" level=error msg="Failed to handle backOff event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785} for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:52:44.507087 kubelet[2611]: I0414 12:52:44.204114 2611 request.go:752] "Waited before sending request" delay="1.768836527s" reason="retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=52m39s&timeoutSeconds=3159&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=52m39s&timeoutSeconds=3159&watch=true" Apr 14 12:52:45.196811 containerd[1466]: time="2026-04-14T12:52:44.699451240Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:52:46.958357 containerd[1466]: time="2026-04-14T12:52:46.838995715Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:52:47.544388 containerd[1466]: time="2026-04-14T12:52:47.355466773Z" level=warning msg="unknown status" status=0 Apr 14 12:52:50.308045 containerd[1466]: time="2026-04-14T12:52:50.299294997Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:52:51.319444 containerd[1466]: time="2026-04-14T12:52:50.848438970Z" level=warning 
msg="unknown status" status=0 Apr 14 12:52:54.544888 systemd-tmpfiles[5382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 12:52:55.829485 containerd[1466]: time="2026-04-14T12:52:55.507316190Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:52:55.829485 containerd[1466]: time="2026-04-14T12:52:55.529002422Z" level=info msg="TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" Apr 14 12:52:55.829485 containerd[1466]: time="2026-04-14T12:52:55.849159515Z" level=error msg="ttrpc: received message on inactive stream" stream=69 Apr 14 12:52:57.151859 containerd[1466]: time="2026-04-14T12:52:56.361389534Z" level=error msg="ttrpc: received message on inactive stream" stream=67 Apr 14 12:52:56.872020 systemd[1]: cri-containerd-a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695.scope: Deactivated successfully. Apr 14 12:52:58.185261 containerd[1466]: time="2026-04-14T12:52:58.021860394Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\"" Apr 14 12:52:58.615255 kubelet[2611]: E0414 12:52:58.027927 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 12:52:57.013389 systemd[1]: cri-containerd-a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695.scope: Consumed 9min 16.646s CPU time, 189.2M memory peak, 0B memory swap peak. Apr 14 12:52:57.714375 systemd-tmpfiles[5382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 12:52:59.702396 systemd-tmpfiles[5382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 12:53:00.461953 containerd[1466]: time="2026-04-14T12:52:59.437512625Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 14 12:52:59.913581 systemd-tmpfiles[5382]: ACLs are not supported, ignoring. Apr 14 12:53:00.018475 systemd-tmpfiles[5382]: ACLs are not supported, ignoring. Apr 14 12:53:00.106455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3-rootfs.mount: Deactivated successfully. Apr 14 12:53:02.008519 systemd-tmpfiles[5382]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 14 12:53:02.078143 systemd-tmpfiles[5382]: Skipping /boot Apr 14 12:53:02.957783 kubelet[2611]: E0414 12:52:42.808726 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:53:05.677086 containerd[1466]: time="2026-04-14T12:53:05.573949764Z" level=error msg="Failed to handle backOff event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785} for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:53:05.677086 containerd[1466]: time="2026-04-14T12:53:05.574320047Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:53:06.937011 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 14 12:53:07.050476 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 14 12:53:07.277461 systemd[1]: systemd-tmpfiles-clean.service: Consumed 7.944s CPU time. 
Apr 14 12:53:08.086284 containerd[1466]: time="2026-04-14T12:53:08.082067079Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:53:08.086284 containerd[1466]: time="2026-04-14T12:53:08.082435241Z" level=warning msg="unknown status" status=0 Apr 14 12:53:08.461507 containerd[1466]: time="2026-04-14T12:53:08.422578681Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 14 12:53:10.044432 containerd[1466]: time="2026-04-14T12:53:10.038686204Z" level=error msg="failed to handle container TaskExit event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480}" error="failed to stop container: context deadline exceeded: unknown" Apr 14 12:53:10.210532 kubelet[2611]: E0414 12:53:10.059856 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:53:10.377379 containerd[1466]: time="2026-04-14T12:53:10.294369360Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:53:10.377379 containerd[1466]: time="2026-04-14T12:53:10.361351694Z" level=warning msg="unknown status" status=0 Apr 14 12:53:11.196003 containerd[1466]: time="2026-04-14T12:53:11.134264516Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 14 12:53:11.196003 containerd[1466]: time="2026-04-14T12:53:11.197797792Z" level=error msg="ttrpc: received message on inactive stream" stream=93 Apr 14 12:53:13.012149 containerd[1466]: time="2026-04-14T12:53:13.009312673Z" level=info msg="StartContainer for \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\"" Apr 14 12:53:15.397943 containerd[1466]: time="2026-04-14T12:53:15.355755822Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 14 12:53:15.397943 containerd[1466]: time="2026-04-14T12:53:15.356050211Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 14 12:53:16.142356 containerd[1466]: time="2026-04-14T12:53:16.136098147Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:53:16.371445 containerd[1466]: time="2026-04-14T12:53:16.277429495Z" level=info msg="TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" Apr 14 12:53:16.898304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d-rootfs.mount: Deactivated successfully. 
Apr 14 12:53:17.922299 kubelet[2611]: E0414 12:53:17.922018 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2m8.637s" Apr 14 12:53:17.923549 containerd[1466]: time="2026-04-14T12:53:17.922988423Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 14 12:53:17.923549 containerd[1466]: time="2026-04-14T12:53:17.923013838Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 14 12:53:18.590392 kubelet[2611]: I0414 12:53:18.573877 2611 scope.go:122] "RemoveContainer" containerID="433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d" Apr 14 12:53:19.472193 containerd[1466]: time="2026-04-14T12:53:19.428511788Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 14 12:53:19.869765 containerd[1466]: time="2026-04-14T12:53:19.826390420Z" level=error msg="ttrpc: received message on inactive stream" stream=85 Apr 14 12:53:19.942677 kubelet[2611]: E0414 12:53:15.054519 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused - error from a previous attempt: write tcp 10.0.0.43:37414->10.0.0.43:6443: write: connection reset by peer" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:53:20.018353 containerd[1466]: time="2026-04-14T12:53:19.959502198Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 14 12:53:20.018353 containerd[1466]: time="2026-04-14T12:53:20.016384838Z" level=error msg="ttrpc: received message on inactive stream" stream=83 Apr 14 12:53:20.018353 containerd[1466]: time="2026-04-14T12:53:20.018066953Z" level=error msg="ttrpc: received message on inactive stream" stream=91 Apr 14 12:53:21.033000 kubelet[2611]: E0414 12:53:20.135636 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:53:24.022380 kubelet[2611]: E0414 12:53:23.690356 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:53:26.056612 kubelet[2611]: E0414 12:53:25.436348 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" 
interval="7s" Apr 14 12:53:26.564151 containerd[1466]: time="2026-04-14T12:53:26.442081871Z" level=error msg="Failed to handle backOff event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785} for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:53:26.916400 containerd[1466]: time="2026-04-14T12:53:26.560531354Z" level=info msg="TaskExit event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480}" Apr 14 12:53:27.427244 containerd[1466]: time="2026-04-14T12:53:27.346542418Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 14 12:53:27.599210 containerd[1466]: time="2026-04-14T12:53:27.565430941Z" level=error msg="ttrpc: received message on inactive stream" stream=85 Apr 14 12:53:28.972962 kubelet[2611]: E0414 12:53:28.967369 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:53:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:53:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:53:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:53:16Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:53:30.218579 kubelet[2611]: E0414 12:53:30.217320 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:53:32.854560 kubelet[2611]: E0414 12:53:32.841293 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:53:34.431950 kubelet[2611]: E0414 12:53:34.413157 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:53:36.295326 kubelet[2611]: E0414 12:53:36.294685 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:53:36.954387 containerd[1466]: time="2026-04-14T12:53:36.950485062Z" level=error msg="Failed to handle 
backOff event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480} for a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:53:36.954387 containerd[1466]: time="2026-04-14T12:53:36.983410932Z" level=info msg="TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" Apr 14 12:53:37.384471 containerd[1466]: time="2026-04-14T12:53:37.134414909Z" level=error msg="ttrpc: received message on inactive stream" stream=101 Apr 14 12:53:37.384471 containerd[1466]: time="2026-04-14T12:53:37.160248946Z" level=error msg="ttrpc: received message on inactive stream" stream=97 Apr 14 12:53:39.367061 kubelet[2611]: E0414 12:53:39.314432 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:53:40.097405 kubelet[2611]: E0414 12:53:39.418369 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:53:40.276208 kubelet[2611]: E0414 12:53:39.604939 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.683s" Apr 14 12:53:42.638280 kubelet[2611]: E0414 12:53:42.636068 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:53:43.510349 kubelet[2611]: E0414 12:53:42.635435 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-spttk.18a639caec13d420\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639caec13d420 kube-system 875 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:06 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:53:43.760343 kubelet[2611]: E0414 12:53:43.759292 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 
10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:53:45.541512 containerd[1466]: time="2026-04-14T12:53:45.475264158Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}" Apr 14 12:53:46.166212 kubelet[2611]: E0414 12:53:43.530474 2611 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{coredns-7d764666f9-spttk.18a639f23a21dfbc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-spttk,UID:fb975314-b950-4dd9-9942-b30d52d99a2a,APIVersion:v1,ResourceVersion:629,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,LastTimestamp:2026-04-14 12:46:55.564406716 +0000 UTC m=+343.620526160,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:53:47.020113 containerd[1466]: time="2026-04-14T12:53:47.000887513Z" level=error msg="Failed to handle backOff event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785} for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:53:47.020113 containerd[1466]: time="2026-04-14T12:53:47.005481804Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:53:47.662139 kubelet[2611]: E0414 12:53:47.177795 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:53:48.249573 containerd[1466]: time="2026-04-14T12:53:48.247185690Z" level=error msg="ttrpc: received message on inactive stream" stream=95 Apr 14 12:53:48.986374 kubelet[2611]: E0414 12:53:48.934379 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 12:53:52.010130 kubelet[2611]: E0414 12:53:51.998422 2611 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" Apr 14 12:53:52.535823 kubelet[2611]: E0414 12:53:52.515432 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:53:52.647005 kubelet[2611]: E0414 12:53:52.534083 2611 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:53:52.769432 sshd[5317]: pam_unix(sshd:session): session closed for user core Apr 14 12:53:53.130074 containerd[1466]: time="2026-04-14T12:53:52.938407933Z" level=error msg="StopContainer for \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" to be killed: wait container \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\": context deadline exceeded" Apr 14 12:53:53.470392 containerd[1466]: time="2026-04-14T12:53:53.435408033Z" level=error msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" to be killed: wait container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\": context deadline exceeded" Apr 14 12:53:53.955223 kubelet[2611]: E0414 12:53:53.420007 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 876 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.642258431 +0000 UTC m=+343.698377875,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:53:54.258022 kubelet[2611]: E0414 12:53:53.429343 2611 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" containerName="coredns" containerID="containerd://deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" gracePeriod=30 Apr 14 12:53:54.650343 kubelet[2611]: E0414 12:53:54.046811 2611 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695" Apr 14 12:53:54.710829 kubelet[2611]: E0414 12:53:54.676134 2611 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-apiserver-localhost" podUID="ed5e991544c38f12435d82988fd12fee" containerName="kube-apiserver" 
containerID="containerd://a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695" gracePeriod=30 Apr 14 12:53:54.726491 kubelet[2611]: E0414 12:53:54.725562 2611 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-apiserver" containerID={"Type":"containerd","ID":"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695"} pod="kube-system/kube-apiserver-localhost" Apr 14 12:53:54.749575 kubelet[2611]: E0414 12:53:54.738682 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="ed5e991544c38f12435d82988fd12fee" Apr 14 12:53:54.854039 containerd[1466]: time="2026-04-14T12:53:54.847905956Z" level=info msg="CreateContainer within sandbox \"e5886807b102c9b52b314793a738bd13f5f513f2b95c62167b7e7611e720cfca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"136c90ee92d4f4f6f33ede4d8cc14a79e8a4a3e4a3a0c3c4e1334b03d00f7f93\"" Apr 14 12:53:55.068441 kubelet[2611]: E0414 12:53:54.247437 2611 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="coredns" containerID={"Type":"containerd","ID":"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d"} pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:53:55.264483 kubelet[2611]: E0414 12:53:55.260498 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" Apr 14 12:53:55.552372 containerd[1466]: time="2026-04-14T12:53:55.476154328Z" level=info msg="StartContainer for \"136c90ee92d4f4f6f33ede4d8cc14a79e8a4a3e4a3a0c3c4e1334b03d00f7f93\"" Apr 14 12:53:55.555317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640562480.mount: Deactivated successfully. Apr 14 12:53:55.944357 kubelet[2611]: E0414 12:53:55.775427 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:53:56.329086 systemd[1]: sshd@29-10.0.0.43:22-10.0.0.1:37634.service: Deactivated successfully. Apr 14 12:53:56.361558 systemd[1]: sshd@29-10.0.0.43:22-10.0.0.1:37634.service: Consumed 8.347s CPU time. Apr 14 12:53:56.971619 kubelet[2611]: E0414 12:53:56.647562 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:53:56.981575 systemd[1]: session-30.scope: Deactivated successfully. Apr 14 12:53:57.007347 systemd[1]: session-30.scope: Consumed 41.084s CPU time. 
Apr 14 12:53:57.132138 containerd[1466]: time="2026-04-14T12:53:57.050452276Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:53:57.202968 containerd[1466]: time="2026-04-14T12:53:57.185317914Z" level=info msg="TaskExit event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480}" Apr 14 12:53:57.308218 containerd[1466]: time="2026-04-14T12:53:57.251686128Z" level=error msg="ttrpc: received message on inactive stream" stream=99 Apr 14 12:53:57.272915 systemd-logind[1450]: Session 30 logged out. Waiting for processes to exit. Apr 14 12:53:57.355424 kubelet[2611]: E0414 12:53:57.228553 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:53:57.776685 systemd-logind[1450]: Removed session 30. Apr 14 12:53:58.219576 kubelet[2611]: E0414 12:53:58.203297 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 876 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.642258431 +0000 UTC m=+343.698377875,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:53:58.219576 kubelet[2611]: E0414 12:53:57.809481 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:53:58.819389 kubelet[2611]: E0414 12:53:58.758474 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.249s" Apr 14 12:54:00.454570 kubelet[2611]: E0414 12:53:58.998498 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" 
pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:00.556420 kubelet[2611]: E0414 12:54:00.544264 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:00.834356 kubelet[2611]: E0414 12:54:00.814562 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:54:01.124709 systemd[1]: Started sshd@30-10.0.0.43:22-10.0.0.1:39012.service - OpenSSH per-connection server daemon (10.0.0.1:39012). Apr 14 12:54:03.493378 kubelet[2611]: E0414 12:54:02.951125 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:05.001035 kubelet[2611]: E0414 12:54:04.620407 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:05.570431 kubelet[2611]: E0414 12:54:05.566280 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:05.820341 systemd[1]: run-containerd-runc-k8s.io-dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142-runc.y0kStY.mount: Deactivated successfully. 
Apr 14 12:54:05.869476 kubelet[2611]: E0414 12:54:05.859159 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:06.053381 kubelet[2611]: E0414 12:54:06.019265 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:06.693611 kubelet[2611]: E0414 12:54:06.690812 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:06.693611 kubelet[2611]: E0414 12:54:06.691322 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:06.693611 kubelet[2611]: E0414 12:54:06.691475 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:06.695392 kubelet[2611]: E0414 12:54:06.693194 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:06.695392 kubelet[2611]: E0414 12:54:06.694888 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:05Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:06.695392 kubelet[2611]: E0414 12:54:06.695362 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:06.871387 kubelet[2611]: E0414 12:54:06.871151 2611 kubelet_node_status.go:474] "Error 
updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:06.964632 kubelet[2611]: E0414 12:54:06.940765 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:06.964632 kubelet[2611]: E0414 12:54:06.949487 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:07.010977 kubelet[2611]: E0414 12:54:07.010761 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:07.010977 kubelet[2611]: E0414 12:54:07.011021 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 12:54:07.298581 containerd[1466]: time="2026-04-14T12:54:07.251176658Z" level=error msg="Failed to handle backOff event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480} for a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:54:07.298581 containerd[1466]: time="2026-04-14T12:54:07.261197554Z" level=info msg="TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" Apr 14 12:54:08.031558 kubelet[2611]: E0414 12:54:08.031415 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.126s" Apr 14 12:54:08.205333 kubelet[2611]: E0414 12:54:08.138372 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:54:08.844944 systemd[1]: Started cri-containerd-dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142.scope - libcontainer container dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142. 
Apr 14 12:54:09.382091 kubelet[2611]: E0414 12:54:08.964463 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 876 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.642258431 +0000 UTC m=+343.698377875,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:54:09.529524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695-rootfs.mount: Deactivated successfully. Apr 14 12:54:09.605470 containerd[1466]: time="2026-04-14T12:54:09.532090189Z" level=error msg="ttrpc: received message on inactive stream" stream=115 Apr 14 12:54:10.168513 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 39012 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:54:11.206168 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:54:11.551146 containerd[1466]: time="2026-04-14T12:54:11.431404653Z" level=info msg="StopContainer for \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" with timeout 30 (s)" Apr 14 12:54:12.404529 containerd[1466]: time="2026-04-14T12:54:12.378425357Z" level=info msg="Skipping the sending of signal terminated to container \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" because a prior stop with timeout>0 request already sent the signal" Apr 14 12:54:13.861392 containerd[1466]: time="2026-04-14T12:54:13.836833157Z" level=error msg="get state for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="context deadline exceeded: unknown" Apr 14 12:54:13.861392 containerd[1466]: time="2026-04-14T12:54:13.837444723Z" level=warning msg="unknown status" status=0 Apr 14 12:54:15.090274 systemd-logind[1450]: New session 31 of user core. Apr 14 12:54:15.754435 systemd[1]: Started session-31.scope - Session 31 of User core. 
Apr 14 12:54:16.231018 kubelet[2611]: E0414 12:54:16.230458 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:16.937768 kubelet[2611]: E0414 12:54:16.930088 2611 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" Apr 14 12:54:17.410176 containerd[1466]: time="2026-04-14T12:54:17.336351908Z" level=error msg="Failed to handle backOff event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785} for 8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:54:17.813357 kubelet[2611]: E0414 12:54:17.197764 2611 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7d764666f9-spttk" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" containerName="coredns" containerID="containerd://8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" gracePeriod=30 Apr 14 12:54:18.052341 kubelet[2611]: E0414 12:54:18.048545 2611 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="coredns" containerID={"Type":"containerd","ID":"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3"} pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:18.058564 kubelet[2611]: E0414 12:54:18.058334 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7d764666f9-spttk" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" Apr 14 12:54:18.136606 containerd[1466]: time="2026-04-14T12:54:17.965246030Z" level=error msg="StopContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" to be killed: wait container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\": context canceled" Apr 14 12:54:18.225889 containerd[1466]: time="2026-04-14T12:54:18.225435448Z" level=error msg="ttrpc: received message on inactive stream" stream=105 Apr 14 12:54:18.232790 containerd[1466]: time="2026-04-14T12:54:18.230344576Z" level=error msg="ttrpc: received message on inactive stream" stream=101 Apr 14 12:54:18.232790 containerd[1466]: time="2026-04-14T12:54:18.057452987Z" level=info msg="TaskExit event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480}" Apr 14 12:54:18.676224 kubelet[2611]: E0414 12:54:18.626670 2611 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:19.729555 containerd[1466]: time="2026-04-14T12:54:19.655303268Z" level=error msg="get state for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="context deadline exceeded: unknown" Apr 14 12:54:20.125251 containerd[1466]: time="2026-04-14T12:54:19.756316511Z" level=warning msg="unknown status" status=0 Apr 14 12:54:20.536171 kubelet[2611]: E0414 12:54:19.483483 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:21.761547 kubelet[2611]: I0414 12:54:21.746534 2611 scope.go:122] "RemoveContainer" containerID="463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe" Apr 14 12:54:23.976844 kubelet[2611]: E0414 12:54:23.976564 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.941s" Apr 14 12:54:25.250431 kubelet[2611]: E0414 12:54:21.254929 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 876 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.642258431 +0000 UTC m=+343.698377875,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:54:27.504435 kubelet[2611]: E0414 12:54:27.503760 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:54:28.156518 containerd[1466]: time="2026-04-14T12:54:28.138488358Z" level=error msg="Failed to handle backOff event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480} for a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:54:28.531099 kubelet[2611]: E0414 12:54:28.419076 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection 
refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:28.945048 containerd[1466]: time="2026-04-14T12:54:28.943935111Z" level=error msg="ttrpc: received message on inactive stream" stream=125 Apr 14 12:54:29.031559 containerd[1466]: time="2026-04-14T12:54:29.013530154Z" level=error msg="ttrpc: received message on inactive stream" stream=121 Apr 14 12:54:29.376445 containerd[1466]: time="2026-04-14T12:54:29.086368887Z" level=error msg="get state for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="context deadline exceeded: unknown" Apr 14 12:54:29.376445 containerd[1466]: time="2026-04-14T12:54:29.330407810Z" level=warning msg="unknown status" status=0 Apr 14 12:54:29.875771 containerd[1466]: time="2026-04-14T12:54:29.553282508Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:54:29.876177 kubelet[2611]: E0414 12:54:29.856528 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:32.367517 containerd[1466]: time="2026-04-14T12:54:32.301552317Z" level=info msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" with timeout 30 (s)" Apr 14 12:54:35.265216 containerd[1466]: time="2026-04-14T12:54:35.258838867Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 12:54:35.265216 containerd[1466]: time="2026-04-14T12:54:35.259195088Z" level=warning msg="unknown status" status=0 Apr 14 12:54:35.265216 containerd[1466]: time="2026-04-14T12:54:35.259256192Z" level=info msg="Skipping the sending of signal terminated to container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" because a prior stop with timeout>0 request already sent the signal" Apr 14 12:54:36.531524 containerd[1466]: time="2026-04-14T12:54:35.466577484Z" level=error msg="get state for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="context deadline exceeded: unknown" Apr 14 12:54:36.531524 containerd[1466]: time="2026-04-14T12:54:35.595538415Z" level=warning msg="unknown status" status=0 Apr 14 12:54:38.902565 kubelet[2611]: E0414 12:54:38.896155 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 876 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.642258431 +0000 UTC 
m=+343.698377875,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:54:39.519837 containerd[1466]: time="2026-04-14T12:54:39.515417059Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:54:39.762624 containerd[1466]: time="2026-04-14T12:54:39.756571699Z" level=info msg="TaskExit event container_id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" id:\"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" pid:2449 exit_status:137 exited_at:{seconds:1776171177 nanos:520202480}" Apr 14 12:54:39.992410 containerd[1466]: time="2026-04-14T12:54:39.989168637Z" level=info msg="RemoveContainer for \"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\"" Apr 14 12:54:40.435145 kubelet[2611]: E0414 12:54:40.127268 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:40.546332 containerd[1466]: time="2026-04-14T12:54:40.545666690Z" level=error msg="ttrpc: received message on inactive stream" stream=109 Apr 14 12:54:40.643387 containerd[1466]: time="2026-04-14T12:54:40.558115043Z" level=error msg="ttrpc: received message on inactive stream" stream=111 Apr 14 12:54:40.819767 kubelet[2611]: E0414 12:54:40.718797 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:54:41.256519 kubelet[2611]: E0414 12:54:41.246241 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T12:54:37Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:41.277484 kubelet[2611]: E0414 12:54:41.276143 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:41.374295 containerd[1466]: 
time="2026-04-14T12:54:41.367050533Z" level=info msg="StopContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" with timeout 30 (s)" Apr 14 12:54:41.661523 kubelet[2611]: E0414 12:54:41.659776 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:41.661523 kubelet[2611]: E0414 12:54:41.660257 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:41.661523 kubelet[2611]: E0414 12:54:41.660361 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:41.672651 kubelet[2611]: E0414 12:54:41.671030 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:41.726842 kubelet[2611]: E0414 12:54:41.726287 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:41.753491 kubelet[2611]: E0414 12:54:41.753142 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:41.762013 kubelet[2611]: E0414 12:54:41.753192 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:41.763012 kubelet[2611]: E0414 12:54:41.762963 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" Apr 14 12:54:41.763012 kubelet[2611]: E0414 12:54:41.762989 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 12:54:41.763283 kubelet[2611]: E0414 12:54:41.763103 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:41.777316 kubelet[2611]: E0414 12:54:41.775457 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.413s" Apr 14 12:54:41.844499 kubelet[2611]: E0414 12:54:41.834905 2611 
status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:41.844499 kubelet[2611]: I0414 12:54:41.835168 2611 scope.go:122] "RemoveContainer" containerID="433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d" Apr 14 12:54:41.846731 containerd[1466]: time="2026-04-14T12:54:41.829231255Z" level=info msg="Skipping the sending of signal terminated to container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" because a prior stop with timeout>0 request already sent the signal" Apr 14 12:54:41.846731 containerd[1466]: time="2026-04-14T12:54:41.832740089Z" level=info msg="RemoveContainer for \"463e2a5025e270c9bc3e218a5e7078cace8ee16134ce90c9b90f00f93e7ffcbe\" returns successfully" Apr 14 12:54:41.847150 kubelet[2611]: E0414 12:54:41.845276 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:41.847150 kubelet[2611]: E0414 12:54:41.846649 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:41.873263 containerd[1466]: time="2026-04-14T12:54:41.871729218Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 14 12:54:41.873263 containerd[1466]: time="2026-04-14T12:54:41.872113548Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 14 12:54:41.873263 containerd[1466]: time="2026-04-14T12:54:41.872133763Z" level=error msg="ttrpc: received message on inactive stream" stream=9 Apr 14 12:54:41.873263 containerd[1466]: time="2026-04-14T12:54:41.872174741Z" level=error msg="ttrpc: received message on inactive stream" stream=11 Apr 14 12:54:42.035111 kubelet[2611]: E0414 12:54:42.029712 2611 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice/cri-containerd-dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142.scope\": RecentStats: unable to find data in memory cache]" Apr 14 12:54:42.143357 containerd[1466]: time="2026-04-14T12:54:42.141460732Z" level=info msg="RemoveContainer for \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\"" Apr 14 12:54:42.343351 containerd[1466]: time="2026-04-14T12:54:42.343211543Z" level=info msg="RemoveContainer for \"433f46bcd44951ef1c6557ae32f8f3d62062ad479df52baf9dc40601b5c7810d\" returns successfully" Apr 14 12:54:42.849906 containerd[1466]: time="2026-04-14T12:54:42.849518766Z" level=info msg="Kill container \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\"" Apr 14 12:54:43.037530 containerd[1466]: time="2026-04-14T12:54:43.034166525Z" level=info msg="StartContainer for \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" returns successfully" Apr 14 12:54:43.236462 containerd[1466]: 
time="2026-04-14T12:54:43.053540465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:54:43.270075 containerd[1466]: time="2026-04-14T12:54:43.210410895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:54:43.270075 containerd[1466]: time="2026-04-14T12:54:43.267719873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:54:43.479570 containerd[1466]: time="2026-04-14T12:54:43.477417325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:54:43.615325 containerd[1466]: time="2026-04-14T12:54:43.571475649Z" level=info msg="shim disconnected" id=a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695 namespace=k8s.io Apr 14 12:54:43.615325 containerd[1466]: time="2026-04-14T12:54:43.578454907Z" level=warning msg="cleaning up after shim disconnected" id=a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695 namespace=k8s.io Apr 14 12:54:43.615325 containerd[1466]: time="2026-04-14T12:54:43.669571104Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:54:44.035013 kubelet[2611]: E0414 12:54:43.977864 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:44.035013 kubelet[2611]: E0414 12:54:44.032514 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:44.454795 kubelet[2611]: E0414 12:54:44.448506 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:44.713204 kubelet[2611]: E0414 12:54:44.707418 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:44.897096 kubelet[2611]: E0414 12:54:44.883179 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:44.948454 kubelet[2611]: E0414 12:54:44.947396 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:45.051931 kubelet[2611]: E0414 12:54:45.049242 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:45.053250 kubelet[2611]: E0414 12:54:45.050245 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:45.057298 kubelet[2611]: E0414 12:54:45.053920 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:45.057298 kubelet[2611]: E0414 12:54:45.054137 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:45.057298 kubelet[2611]: E0414 12:54:45.054234 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:45.057298 kubelet[2611]: E0414 12:54:45.054313 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:45.096782 sshd[5626]: pam_unix(sshd:session): session closed for user core Apr 14 12:54:45.192396 systemd[1]: sshd@30-10.0.0.43:22-10.0.0.1:39012.service: Deactivated successfully. Apr 14 12:54:45.196512 systemd[1]: sshd@30-10.0.0.43:22-10.0.0.1:39012.service: Consumed 3.243s CPU time. Apr 14 12:54:45.256820 systemd[1]: session-31.scope: Deactivated successfully. Apr 14 12:54:45.267471 systemd[1]: session-31.scope: Consumed 16.460s CPU time. Apr 14 12:54:45.395831 systemd-logind[1450]: Session 31 logged out. Waiting for processes to exit. Apr 14 12:54:45.425564 systemd-logind[1450]: Removed session 31. Apr 14 12:54:45.657195 containerd[1466]: time="2026-04-14T12:54:45.654929634Z" level=info msg="StopContainer for \"a0f6b3d0dd3f8ad4278cf5baa8834b82987120c6a172552ad2c18c38a7830695\" returns successfully" Apr 14 12:54:45.747950 kubelet[2611]: E0414 12:54:45.660153 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:45.821290 systemd[1]: Started cri-containerd-136c90ee92d4f4f6f33ede4d8cc14a79e8a4a3e4a3a0c3c4e1334b03d00f7f93.scope - libcontainer container 136c90ee92d4f4f6f33ede4d8cc14a79e8a4a3e4a3a0c3c4e1334b03d00f7f93. 
Apr 14 12:54:45.837214 containerd[1466]: time="2026-04-14T12:54:45.821683883Z" level=info msg="CreateContainer within sandbox \"44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Apr 14 12:54:46.171127 containerd[1466]: time="2026-04-14T12:54:46.157098617Z" level=info msg="CreateContainer within sandbox \"44ed18cd941dc188279a1ea348d137198d2efa296555a540e1b1b64cce2420e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"b86275e2231fa75858d1e9c8ce55145faead9a140326f2000e4c1335bb8bebc4\"" Apr 14 12:54:46.202802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967092253.mount: Deactivated successfully. Apr 14 12:54:46.220731 containerd[1466]: time="2026-04-14T12:54:46.220610660Z" level=info msg="StartContainer for \"b86275e2231fa75858d1e9c8ce55145faead9a140326f2000e4c1335bb8bebc4\"" Apr 14 12:54:47.915492 containerd[1466]: time="2026-04-14T12:54:47.871812745Z" level=error msg="get state for 136c90ee92d4f4f6f33ede4d8cc14a79e8a4a3e4a3a0c3c4e1334b03d00f7f93" error="context deadline exceeded: unknown" Apr 14 12:54:47.925128 containerd[1466]: time="2026-04-14T12:54:47.922556849Z" level=warning msg="unknown status" status=0 Apr 14 12:54:48.036109 containerd[1466]: time="2026-04-14T12:54:48.034204852Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 12:54:48.220946 kubelet[2611]: E0414 12:54:48.215922 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="7s" Apr 14 12:54:48.684318 containerd[1466]: time="2026-04-14T12:54:48.683999309Z" level=info msg="StartContainer for \"136c90ee92d4f4f6f33ede4d8cc14a79e8a4a3e4a3a0c3c4e1334b03d00f7f93\" returns successfully" Apr 14 12:54:48.706475 systemd[1]: Started cri-containerd-b86275e2231fa75858d1e9c8ce55145faead9a140326f2000e4c1335bb8bebc4.scope - libcontainer container b86275e2231fa75858d1e9c8ce55145faead9a140326f2000e4c1335bb8bebc4. 
Apr 14 12:54:49.037026 kubelet[2611]: E0414 12:54:48.968754 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 876 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:46:55.642258431 +0000 UTC m=+343.698377875,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:54:49.163147 kubelet[2611]: E0414 12:54:49.162141 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:49.163147 kubelet[2611]: E0414 12:54:49.162854 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:49.166566 kubelet[2611]: E0414 12:54:49.164935 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:49.167233 kubelet[2611]: E0414 12:54:49.166859 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:49.167233 kubelet[2611]: E0414 12:54:49.167104 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:49.167233 kubelet[2611]: E0414 12:54:49.167203 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:49.936498 kubelet[2611]: E0414 12:54:49.936404 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:50.203109 kubelet[2611]: E0414 12:54:50.199255 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:50.344126 systemd[1]: Started sshd@31-10.0.0.43:22-10.0.0.1:41140.service - OpenSSH per-connection server daemon (10.0.0.1:41140). Apr 14 12:54:50.364482 containerd[1466]: time="2026-04-14T12:54:50.278006793Z" level=info msg="TaskExit event container_id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" id:\"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" pid:3564 exited_at:{seconds:1776171134 nanos:419422785}" Apr 14 12:54:50.779167 containerd[1466]: time="2026-04-14T12:54:50.778753340Z" level=info msg="StartContainer for \"b86275e2231fa75858d1e9c8ce55145faead9a140326f2000e4c1335bb8bebc4\" returns successfully" Apr 14 12:54:51.259643 containerd[1466]: time="2026-04-14T12:54:51.259532967Z" level=info msg="shim disconnected" id=8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3 namespace=k8s.io Apr 14 12:54:51.275962 containerd[1466]: time="2026-04-14T12:54:51.263361019Z" level=warning msg="cleaning up after shim disconnected" id=8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3 namespace=k8s.io Apr 14 12:54:51.275962 containerd[1466]: time="2026-04-14T12:54:51.263679989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:54:51.411372 kubelet[2611]: E0414 12:54:51.401366 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:51.440285 kubelet[2611]: E0414 12:54:51.438955 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:51.440285 kubelet[2611]: E0414 12:54:51.439120 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:51.462073 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 41140 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:54:51.511537 kubelet[2611]: E0414 12:54:51.478067 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:51.511537 kubelet[2611]: E0414 12:54:51.506392 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:51.511537 kubelet[2611]: E0414 12:54:51.506365 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:51.522335 kubelet[2611]: E0414 12:54:51.519331 2611 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.43:6443: connect: connection refused" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:51.543853 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:54:51.660866 systemd-logind[1450]: New session 32 of user core. Apr 14 12:54:51.731821 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 14 12:54:51.763041 containerd[1466]: time="2026-04-14T12:54:51.762159988Z" level=info msg="StopContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" returns successfully" Apr 14 12:54:51.822506 kubelet[2611]: E0414 12:54:51.822109 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:51.910935 containerd[1466]: time="2026-04-14T12:54:51.910617345Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for container &ContainerMetadata{Name:coredns,Attempt:1,}" Apr 14 12:54:53.427450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806275815.mount: Deactivated successfully. Apr 14 12:54:53.432456 containerd[1466]: time="2026-04-14T12:54:53.432381046Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\"" Apr 14 12:54:53.450445 containerd[1466]: time="2026-04-14T12:54:53.450231216Z" level=info msg="StartContainer for \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\"" Apr 14 12:54:53.509017 kubelet[2611]: E0414 12:54:53.508937 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.463s" Apr 14 12:54:53.521629 kubelet[2611]: E0414 12:54:53.520115 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:53.807978 systemd[1]: Started cri-containerd-95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085.scope - libcontainer container 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085. 
Apr 14 12:54:54.014834 containerd[1466]: time="2026-04-14T12:54:54.011370448Z" level=info msg="StartContainer for \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" returns successfully" Apr 14 12:54:54.505685 kubelet[2611]: E0414 12:54:54.505368 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:54.677431 kubelet[2611]: E0414 12:54:54.677092 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:55.517535 kubelet[2611]: E0414 12:54:55.517113 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:56.442610 kubelet[2611]: E0414 12:54:56.441722 2611 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-localhost\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:56.478656 kubelet[2611]: E0414 12:54:56.478426 2611 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-spttk\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:56.528793 kubelet[2611]: E0414 12:54:56.528577 2611 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-f44gt\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:56.537398 sshd[5897]: pam_unix(sshd:session): session closed for user core Apr 14 12:54:56.543552 systemd[1]: sshd@31-10.0.0.43:22-10.0.0.1:41140.service: Deactivated successfully. 
Apr 14 12:54:56.544332 kubelet[2611]: E0414 12:54:56.544301 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.544332 kubelet[2611]: pods "kube-controller-manager-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.544332 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.544332 kubelet[2611]: > podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:56.555161 kubelet[2611]: E0414 12:54:56.555036 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.555161 kubelet[2611]: pods "kube-apiserver-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.555161 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.555161 kubelet[2611]: > podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:56.560133 kubelet[2611]: E0414 12:54:56.560015 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.560133 kubelet[2611]: pods "kube-scheduler-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.560133 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.560133 kubelet[2611]: > podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:54:56.560688 systemd[1]: session-32.scope: Deactivated successfully. Apr 14 12:54:56.561189 systemd[1]: session-32.scope: Consumed 1.220s CPU time. Apr 14 12:54:56.563865 systemd-logind[1450]: Session 32 logged out. Waiting for processes to exit. Apr 14 12:54:56.564551 kubelet[2611]: E0414 12:54:56.564083 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.564551 kubelet[2611]: pods "coredns-7d764666f9-spttk" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.564551 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.564551 kubelet[2611]: > podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:54:56.565308 systemd-logind[1450]: Removed session 32. 
Apr 14 12:54:56.568688 kubelet[2611]: E0414 12:54:56.568653 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.568688 kubelet[2611]: pods "coredns-7d764666f9-f44gt" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.568688 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found, role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found] Apr 14 12:54:56.568688 kubelet[2611]: > podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:54:56.569670 kubelet[2611]: E0414 12:54:56.569616 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.569670 kubelet[2611]: pods "kube-controller-manager-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.569670 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.569670 kubelet[2611]: > podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:56.571150 kubelet[2611]: E0414 12:54:56.571092 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:54:56.571358 kubelet[2611]: E0414 12:54:56.571313 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.571358 kubelet[2611]: pods "kube-apiserver-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.571358 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.571358 kubelet[2611]: > podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:54:56.579022 kubelet[2611]: E0414 12:54:56.578850 2611 status_manager.go:1045] "Failed to get status for pod" err=< Apr 14 12:54:56.579022 kubelet[2611]: pods "kube-controller-manager-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 14 12:54:56.579022 kubelet[2611]: RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found] Apr 14 12:54:56.579022 kubelet[2611]: > podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:54:56.659041 kubelet[2611]: E0414 12:54:56.658666 2611 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-localhost\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:55:01.336495 kubelet[2611]: E0414 12:55:01.328380 2611 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:55:02.768578 systemd[1]: Started sshd@32-10.0.0.43:22-10.0.0.1:39286.service - OpenSSH per-connection server daemon (10.0.0.1:39286). Apr 14 12:55:05.457659 containerd[1466]: time="2026-04-14T12:55:05.323759531Z" level=info msg="Kill container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\"" Apr 14 12:55:21.057007 kubelet[2611]: E0414 12:55:21.046955 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:55:24.152779 kubelet[2611]: E0414 12:55:24.152424 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="22.1s" Apr 14 12:55:24.362213 kubelet[2611]: E0414 12:55:24.361937 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:55:25.523714 kubelet[2611]: E0414 12:55:25.523219 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:55:25.774718 kubelet[2611]: E0414 12:55:25.769995 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:55:26.764756 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 39286 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:55:27.267044 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:55:27.902508 systemd-logind[1450]: New session 33 of user core. Apr 14 12:55:27.962256 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 14 12:55:30.364007 kubelet[2611]: E0414 12:55:30.349532 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:55:32.294760 kubelet[2611]: E0414 12:55:32.261506 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.197s" Apr 14 12:55:32.857391 kubelet[2611]: E0414 12:55:32.853086 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:55:35.462608 kubelet[2611]: E0414 12:55:35.445571 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.378s" Apr 14 12:55:39.237775 systemd[1]: cri-containerd-dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142.scope: Deactivated successfully. Apr 14 12:55:39.339530 systemd[1]: cri-containerd-dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142.scope: Consumed 17.968s CPU time. 
Apr 14 12:55:44.260042 containerd[1466]: time="2026-04-14T12:55:44.239394674Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:55:50.272881 containerd[1466]: time="2026-04-14T12:55:49.970380276Z" level=error msg="failed to handle container TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" error="failed to stop container: context deadline exceeded: unknown" Apr 14 12:55:51.739471 containerd[1466]: time="2026-04-14T12:55:51.734096210Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 14 12:55:51.739471 containerd[1466]: time="2026-04-14T12:55:51.734363057Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 14 12:55:53.402324 kubelet[2611]: I0414 12:55:53.377169 2611 reflector.go:1159] "Warning: event bookmark expired" err="pkg/kubelet/config/apiserver.go:66: awaiting required bookmark event for initial events stream, no events received for 20.211824894s" Apr 14 12:55:54.356496 containerd[1466]: time="2026-04-14T12:55:54.273450443Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:55:54.919271 containerd[1466]: time="2026-04-14T12:55:54.560499972Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:55:56.020213 containerd[1466]: time="2026-04-14T12:55:55.956066936Z" level=error msg="ttrpc: received message on inactive stream" stream=123 Apr 14 12:55:56.373482 containerd[1466]: time="2026-04-14T12:55:56.364234217Z" level=error msg="ttrpc: received message on inactive stream" stream=125 Apr 14 12:56:02.806253 kubelet[2611]: E0414 12:56:02.672236 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:56:04.428455 containerd[1466]: time="2026-04-14T12:56:04.350288151Z" level=error msg="Failed to handle backOff event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145} for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:56:05.119739 containerd[1466]: time="2026-04-14T12:56:05.114155672Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 14 12:56:05.119739 containerd[1466]: time="2026-04-14T12:56:05.117333317Z" level=error msg="ttrpc: received message on inactive stream" 
stream=43 Apr 14 12:56:05.557820 kubelet[2611]: I0414 12:56:05.504396 2611 reflector.go:1159] "Warning: event bookmark expired" err="pkg/kubelet/config/apiserver.go:66: awaiting required bookmark event for initial events stream, no events received for 32.876136176s" Apr 14 12:56:07.240385 containerd[1466]: time="2026-04-14T12:56:07.239897274Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:56:12.424338 sshd[6043]: pam_unix(sshd:session): session closed for user core Apr 14 12:56:13.552245 systemd[1]: sshd@32-10.0.0.43:22-10.0.0.1:39286.service: Deactivated successfully. Apr 14 12:56:13.664334 systemd[1]: sshd@32-10.0.0.43:22-10.0.0.1:39286.service: Consumed 5.193s CPU time. Apr 14 12:56:14.549893 systemd[1]: session-33.scope: Deactivated successfully. Apr 14 12:56:14.621463 systemd[1]: session-33.scope: Consumed 19.950s CPU time. Apr 14 12:56:14.842284 systemd-logind[1450]: Session 33 logged out. Waiting for processes to exit. Apr 14 12:56:14.964958 systemd-logind[1450]: Removed session 33. Apr 14 12:56:15.248399 kubelet[2611]: I0414 12:56:15.012217 2611 reflector.go:1159] "Warning: event bookmark expired" err="pkg/kubelet/config/apiserver.go:66: awaiting required bookmark event for initial events stream, no events received for 40.183811169s" Apr 14 12:56:17.358708 containerd[1466]: time="2026-04-14T12:56:17.334378616Z" level=error msg="Failed to handle backOff event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145} for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:56:18.755555 containerd[1466]: time="2026-04-14T12:56:18.736308997Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 14 12:56:19.357483 containerd[1466]: time="2026-04-14T12:56:19.352898107Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 14 12:56:20.084738 systemd[1]: Started sshd@33-10.0.0.43:22-10.0.0.1:42780.service - OpenSSH per-connection server daemon (10.0.0.1:42780). 
Apr 14 12:56:21.521039 kubelet[2611]: E0414 12:56:20.639578 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 14 12:56:22.372511 kubelet[2611]: I0414 12:56:22.032528 2611 reflector.go:1159] "Warning: event bookmark expired" err="pkg/kubelet/config/apiserver.go:66: awaiting required bookmark event for initial events stream, no events received for 50.191673805s" Apr 14 12:56:22.971546 containerd[1466]: time="2026-04-14T12:56:22.355535571Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:56:24.849460 containerd[1466]: time="2026-04-14T12:56:24.837285018Z" level=error msg="get state for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="context deadline exceeded: unknown" Apr 14 12:56:25.428334 containerd[1466]: time="2026-04-14T12:56:25.040543691Z" level=warning msg="unknown status" status=0 Apr 14 12:56:26.369386 containerd[1466]: time="2026-04-14T12:56:26.317427684Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 14 12:56:27.757918 kubelet[2611]: I0414 12:56:27.750804 2611 reflector.go:1159] "Warning: event bookmark expired" err="k8s.io/client-go/informers/factory.go:161: awaiting required bookmark event for initial events stream, no events received for 59.286008369s" Apr 14 12:56:31.807366 kubelet[2611]: I0414 12:56:31.759414 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:56:32.923147 containerd[1466]: time="2026-04-14T12:56:32.899827203Z" level=error msg="Failed to handle backOff event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145} for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:56:34.601909 kubelet[2611]: I0414 12:56:33.264431 2611 reflector.go:1159] "Warning: event bookmark expired" err="pkg/kubelet/config/apiserver.go:66: awaiting required bookmark event for initial events stream, no events received for 1m1.376237169s" Apr 14 12:56:36.142574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142-rootfs.mount: Deactivated successfully. 
Apr 14 12:56:37.385249 kubelet[2611]: I0414 12:56:37.347276 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:56:38.427469 containerd[1466]: time="2026-04-14T12:56:38.420463559Z" level=error msg="ttrpc: received message on inactive stream" stream=59 Apr 14 12:56:39.827962 kubelet[2611]: E0414 12:56:39.816861 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": http2: client connection lost (Client.Timeout exceeded while awaiting headers)" Apr 14 12:56:40.238754 kubelet[2611]: I0414 12:56:39.106284 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:56:41.125871 kubelet[2611]: I0414 12:56:41.071953 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:56:41.596975 containerd[1466]: time="2026-04-14T12:56:41.261671815Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:56:41.648938 kubelet[2611]: I0414 12:56:41.643726 2611 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:56:42.428262 kubelet[2611]: I0414 12:56:41.070367 2611 reflector.go:578] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 14 12:56:45.631789 sshd[6169]: Accepted publickey for core from 10.0.0.1 port 42780 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:56:46.631438 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:56:48.957319 systemd-logind[1450]: New session 34 of user core. Apr 14 12:56:50.006803 systemd[1]: Started session-34.scope - Session 34 of User core. 
Apr 14 12:56:50.827887 kubelet[2611]: E0414 12:56:43.009310 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": http2: client connection lost" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:56:51.435972 containerd[1466]: time="2026-04-14T12:56:51.405013703Z" level=error msg="Failed to handle backOff event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145} for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:56:52.333279 kubelet[2611]: E0414 12:56:50.130478 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 14 12:56:54.276474 containerd[1466]: time="2026-04-14T12:56:54.241568037Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 14 12:56:57.251638 kubelet[2611]: E0414 12:56:57.215483 2611 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=948\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver" Apr 14 12:56:58.337685 kubelet[2611]: E0414 12:56:56.900532 2611 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.43:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1004\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 14 12:56:59.798034 kubelet[2611]: E0414 12:56:59.797341 2611 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=996\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 14 12:57:00.387453 kubelet[2611]: E0414 12:56:57.207548 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost" 
podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:57:01.231823 containerd[1466]: time="2026-04-14T12:57:01.204093641Z" level=error msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" to be killed: wait container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\": context canceled" Apr 14 12:57:01.807458 kubelet[2611]: E0414 12:57:01.803248 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m25.738s" Apr 14 12:57:02.357156 kubelet[2611]: E0414 12:57:01.905024 2611 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" Apr 14 12:57:04.245955 kubelet[2611]: E0414 12:57:01.911709 2611 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" containerName="coredns" containerID="containerd://deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" gracePeriod=30 Apr 14 12:57:05.667704 kubelet[2611]: I0414 12:57:05.008893 2611 request.go:752] "Waited before sending request" delay="1.976540972s" reason="retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/services?allowWatchBookmarks=true&fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1047&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=9m13s&timeoutSeconds=553&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/services?allowWatchBookmarks=true&fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1047&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=9m13s&timeoutSeconds=553&watch=true" Apr 14 12:57:06.241976 kubelet[2611]: E0414 12:57:05.014514 2611 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="coredns" containerID={"Type":"containerd","ID":"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d"} pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:57:07.468969 containerd[1466]: time="2026-04-14T12:57:07.468415291Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:57:08.166481 kubelet[2611]: E0414 12:57:05.992304 2611 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 14 12:57:09.817546 kubelet[2611]: E0414 12:57:07.358530 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" Apr 14 12:57:11.544271 kubelet[2611]: I0414 12:57:09.828521 2611 
controller.go:171] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 14 12:57:14.743470 kubelet[2611]: E0414 12:57:07.277108 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:57:16.547990 kubelet[2611]: E0414 12:57:15.755120 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:57:17.698083 containerd[1466]: time="2026-04-14T12:57:17.636416047Z" level=error msg="Failed to handle backOff event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145} for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:57:18.111026 containerd[1466]: time="2026-04-14T12:57:18.067047386Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 14 12:57:18.351324 containerd[1466]: time="2026-04-14T12:57:18.331424449Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 14 12:57:18.579089 kubelet[2611]: E0414 12:57:18.491443 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:57:23.030867 kubelet[2611]: E0414 12:57:23.030073 2611 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 14 12:57:23.513017 containerd[1466]: time="2026-04-14T12:57:23.230524560Z" level=info msg="StopContainer for \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" with timeout 30 (s)" Apr 14 12:57:23.865917 containerd[1466]: time="2026-04-14T12:57:23.768328013Z" level=info msg="Stop container \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" with signal terminated" Apr 14 12:57:24.818403 kubelet[2611]: E0414 12:57:24.813821 2611 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:57:29.149032 kubelet[2611]: E0414 12:57:26.518151 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="200ms" Apr 14 12:57:36.118036 kubelet[2611]: E0414 12:57:36.055334 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:57:36.458045 kubelet[2611]: E0414 12:57:34.360442 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:57:38.222460 kubelet[2611]: E0414 12:57:36.730383 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:57:38.754798 kubelet[2611]: E0414 12:57:37.930964 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:57:41.095309 kubelet[2611]: E0414 12:57:41.092882 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 14 12:57:41.452070 containerd[1466]: time="2026-04-14T12:57:41.380122214Z" level=info msg="StopContainer for \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" with timeout 30 (s)" Apr 14 12:57:41.652076 containerd[1466]: time="2026-04-14T12:57:41.574002986Z" level=info msg="Stop container \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" with signal terminated" Apr 14 12:57:43.856362 sshd[6169]: pam_unix(sshd:session): session closed for user core Apr 14 12:57:44.646120 systemd[1]: sshd@33-10.0.0.43:22-10.0.0.1:42780.service: Deactivated successfully. Apr 14 12:57:44.659080 systemd[1]: sshd@33-10.0.0.43:22-10.0.0.1:42780.service: Consumed 5.895s CPU time. Apr 14 12:57:45.121136 systemd[1]: session-34.scope: Deactivated successfully. 
Apr 14 12:57:45.138293 systemd[1]: session-34.scope: Consumed 25.655s CPU time. Apr 14 12:57:45.287286 systemd-logind[1450]: Session 34 logged out. Waiting for processes to exit. Apr 14 12:57:45.857537 systemd-logind[1450]: Removed session 34. Apr 14 12:57:50.257357 containerd[1466]: time="2026-04-14T12:57:50.248338860Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:57:50.322396 systemd[1]: Started sshd@34-10.0.0.43:22-10.0.0.1:57752.service - OpenSSH per-connection server daemon (10.0.0.1:57752). Apr 14 12:57:53.097187 kubelet[2611]: E0414 12:57:53.092540 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:57:54.106387 kubelet[2611]: E0414 12:57:52.315414 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="800ms" Apr 14 12:57:55.577361 containerd[1466]: time="2026-04-14T12:57:55.564970836Z" level=info msg="Kill container \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\"" Apr 14 12:57:58.212028 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 57752 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:57:59.259773 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:58:00.537979 containerd[1466]: time="2026-04-14T12:58:00.362239704Z" level=error msg="Failed to handle backOff event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145} for dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:58:01.169427 systemd[1]: cri-containerd-95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085.scope: Deactivated successfully. Apr 14 12:58:01.261189 systemd[1]: cri-containerd-95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085.scope: Consumed 38.978s CPU time. Apr 14 12:58:02.764567 systemd-logind[1450]: New session 35 of user core. Apr 14 12:58:03.539674 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 14 12:58:03.610082 containerd[1466]: time="2026-04-14T12:58:03.608510553Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 12:58:04.413017 containerd[1466]: time="2026-04-14T12:58:04.412494068Z" level=error msg="ttrpc: received message on inactive stream" stream=95 Apr 14 12:58:04.704544 kubelet[2611]: E0414 12:58:02.055795 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:58:09.121089 kubelet[2611]: E0414 12:58:09.100196 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Apr 14 12:58:09.522125 kubelet[2611]: E0414 12:58:09.322581 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m6.93s" Apr 14 12:58:10.191567 kubelet[2611]: I0414 12:58:10.187370 2611 request.go:752] "Waited before sending request" delay="1.102615119s" reason="retries: 2, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=51m26s&timeoutSeconds=3086&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=51m26s&timeoutSeconds=3086&watch=true" Apr 14 12:58:11.928116 containerd[1466]: time="2026-04-14T12:58:11.926840968Z" level=error msg="failed to handle container TaskExit event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641}" error="failed to stop container: context deadline exceeded: unknown" Apr 14 12:58:12.971159 kubelet[2611]: E0414 12:58:12.933885 2611 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:58:13.715565 containerd[1466]: time="2026-04-14T12:58:13.705342010Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 14 12:58:14.360418 containerd[1466]: time="2026-04-14T12:58:13.754000368Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 14 12:58:14.704863 containerd[1466]: time="2026-04-14T12:58:14.418257437Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:58:15.050404 containerd[1466]: time="2026-04-14T12:58:14.536663409Z" level=error msg="ttrpc: received message on inactive stream" stream=131 Apr 14 12:58:15.526564 containerd[1466]: time="2026-04-14T12:58:15.057312005Z" level=info msg="TaskExit event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641}" Apr 14 12:58:15.731484 containerd[1466]: time="2026-04-14T12:58:15.729937362Z" level=error msg="ttrpc: received message on inactive stream" stream=135 Apr 14 12:58:18.768035 containerd[1466]: time="2026-04-14T12:58:18.737536142Z" level=info msg="Kill container \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\"" Apr 14 12:58:24.934535 containerd[1466]: time="2026-04-14T12:58:24.930202482Z" level=error msg="Failed to handle backOff event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641} for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown" Apr 14 12:58:25.300388 kubelet[2611]: I0414 12:58:23.514231 2611 request.go:752] "Waited before sending request" delay="1.835072355s" reason="retries: 5, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1025&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=9m58s&timeoutSeconds=598&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1025&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=9m58s&timeoutSeconds=598&watch=true" Apr 14 12:58:25.567980 kubelet[2611]: E0414 12:58:25.279202 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Apr 14 12:58:25.775283 containerd[1466]: time="2026-04-14T12:58:25.775026627Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 14 12:58:26.062104 
containerd[1466]: time="2026-04-14T12:58:26.055123699Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 14 12:58:27.244251 containerd[1466]: time="2026-04-14T12:58:27.242550248Z" level=info msg="TaskExit event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641}" Apr 14 12:58:29.526424 containerd[1466]: time="2026-04-14T12:58:29.514382365Z" level=error msg="get state for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="context deadline exceeded: unknown" Apr 14 12:58:29.676978 containerd[1466]: time="2026-04-14T12:58:29.676048157Z" level=warning msg="unknown status" status=0 Apr 14 12:58:31.751975 kubelet[2611]: E0414 12:58:30.819465 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:58:32.591011 containerd[1466]: time="2026-04-14T12:58:32.507110427Z" level=error msg="get state for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="context deadline exceeded: unknown" Apr 14 12:58:32.639315 containerd[1466]: time="2026-04-14T12:58:32.618558967Z" level=warning msg="unknown status" status=0 Apr 14 12:58:34.101134 kubelet[2611]: E0414 12:58:33.471806 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:58:36.949391 kubelet[2611]: I0414 12:58:36.871162 2611 request.go:752] "Waited before sending request" delay="1.332076243s" reason="retries: 3, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=51m26s&timeoutSeconds=3086&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=996&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=51m26s&timeoutSeconds=3086&watch=true" Apr 14 12:58:37.992634 containerd[1466]: time="2026-04-14T12:58:37.371863304Z" level=error msg="Failed to handle backOff event 
container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641} for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:58:42.253434 containerd[1466]: time="2026-04-14T12:58:42.248474302Z" level=info msg="TaskExit event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641}" Apr 14 12:58:44.366403 containerd[1466]: time="2026-04-14T12:58:44.361853856Z" level=error msg="get state for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="context deadline exceeded: unknown" Apr 14 12:58:44.939568 containerd[1466]: time="2026-04-14T12:58:44.468416060Z" level=warning msg="unknown status" status=0 Apr 14 12:58:45.321382 kubelet[2611]: E0414 12:58:45.261170 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Apr 14 12:58:46.197025 containerd[1466]: time="2026-04-14T12:58:46.192463767Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 14 12:58:46.613514 containerd[1466]: time="2026-04-14T12:58:46.194566916Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 14 12:58:47.519303 containerd[1466]: time="2026-04-14T12:58:47.509050741Z" level=error msg="get state for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="context deadline exceeded: unknown" Apr 14 12:58:47.833165 containerd[1466]: time="2026-04-14T12:58:47.511484552Z" level=warning msg="unknown status" status=0 Apr 14 12:58:48.374473 containerd[1466]: time="2026-04-14T12:58:48.204174869Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 14 12:58:48.374473 containerd[1466]: time="2026-04-14T12:58:48.262139724Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 14 12:58:48.374473 containerd[1466]: time="2026-04-14T12:58:48.262267555Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 14 12:58:48.510942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085-rootfs.mount: Deactivated successfully. 
Apr 14 12:58:52.353428 containerd[1466]: time="2026-04-14T12:58:52.350567940Z" level=error msg="Failed to handle backOff event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641} for 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 12:58:53.867309 containerd[1466]: time="2026-04-14T12:58:53.866236279Z" level=error msg="ttrpc: received message on inactive stream" stream=51 Apr 14 12:58:56.930911 kubelet[2611]: E0414 12:58:56.930565 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 12:59:01.110362 kubelet[2611]: E0414 12:59:00.334073 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:59:01.259029 containerd[1466]: time="2026-04-14T12:59:01.258239309Z" level=info msg="TaskExit event container_id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" id:\"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" pid:5993 exited_at:{seconds:1776171481 nanos:791418641}" Apr 14 12:59:01.840233 kubelet[2611]: E0414 12:59:01.766247 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="52.079s" Apr 14 12:59:05.178785 kubelet[2611]: E0414 12:59:05.169722 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.304s" Apr 14 12:59:05.746303 kubelet[2611]: E0414 12:59:05.744647 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:59:05.867996 kubelet[2611]: E0414 12:59:05.859188 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:59:06.118114 containerd[1466]: time="2026-04-14T12:59:06.117195451Z" level=info msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" with timeout 30 
(s)" Apr 14 12:59:06.149542 kubelet[2611]: E0414 12:59:06.127200 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:59:06.611237 kubelet[2611]: E0414 12:59:06.552021 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:59:06.903901 containerd[1466]: time="2026-04-14T12:59:06.896249187Z" level=info msg="Skipping the sending of signal terminated to container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" because a prior stop with timeout>0 request already sent the signal" Apr 14 12:59:07.094394 kubelet[2611]: E0414 12:59:07.077500 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.81s" Apr 14 12:59:09.868257 kubelet[2611]: E0414 12:59:09.779391 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 12:59:10.593417 kubelet[2611]: E0414 12:59:10.545408 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.468s" Apr 14 12:59:10.603164 containerd[1466]: time="2026-04-14T12:59:10.597412573Z" level=info msg="shim disconnected" id=95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085 namespace=k8s.io Apr 14 12:59:10.603164 containerd[1466]: time="2026-04-14T12:59:10.597543859Z" level=warning msg="cleaning up after shim disconnected" id=95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085 namespace=k8s.io Apr 14 12:59:10.626371 containerd[1466]: time="2026-04-14T12:59:10.597577460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:59:10.776234 kubelet[2611]: E0414 12:59:10.769227 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:59:11.349136 containerd[1466]: time="2026-04-14T12:59:11.347480914Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085 Apr 14 12:59:11.552361 kubelet[2611]: E0414 12:59:11.537780 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:59:11.712869 containerd[1466]: time="2026-04-14T12:59:11.711077940Z" level=info msg="TaskExit event container_id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" id:\"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" pid:5660 exit_status:1 exited_at:{seconds:1776171339 nanos:622382145}" Apr 14 12:59:11.756806 containerd[1466]: time="2026-04-14T12:59:11.756155090Z" level=info msg="StopContainer for \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" returns successfully" Apr 14 12:59:11.941296 kubelet[2611]: E0414 12:59:11.934025 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 
12:59:12.119229 kubelet[2611]: E0414 12:59:12.118936 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.527s" Apr 14 12:59:12.628733 containerd[1466]: time="2026-04-14T12:59:12.620003056Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085 delete" error="exit status 1" namespace=k8s.io Apr 14 12:59:12.631181 containerd[1466]: time="2026-04-14T12:59:12.628171927Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085 namespace=k8s.io Apr 14 12:59:13.508409 containerd[1466]: time="2026-04-14T12:59:13.507922296Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for container &ContainerMetadata{Name:coredns,Attempt:2,}" Apr 14 12:59:14.212984 kubelet[2611]: E0414 12:59:14.211353 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.969s" Apr 14 12:59:14.224963 sshd[6298]: pam_unix(sshd:session): session closed for user core Apr 14 12:59:14.422731 systemd[1]: sshd@34-10.0.0.43:22-10.0.0.1:57752.service: Deactivated successfully. Apr 14 12:59:14.472144 systemd[1]: sshd@34-10.0.0.43:22-10.0.0.1:57752.service: Consumed 2.479s CPU time. Apr 14 12:59:14.603764 containerd[1466]: time="2026-04-14T12:59:14.603655403Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\"" Apr 14 12:59:14.777418 systemd[1]: session-35.scope: Deactivated successfully. Apr 14 12:59:14.836887 systemd[1]: session-35.scope: Consumed 35.208s CPU time. Apr 14 12:59:14.878378 systemd-logind[1450]: Session 35 logged out. Waiting for processes to exit. Apr 14 12:59:15.027987 systemd-logind[1450]: Removed session 35. 
Apr 14 12:59:15.052865 containerd[1466]: time="2026-04-14T12:59:15.049631638Z" level=info msg="StartContainer for \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\"" Apr 14 12:59:16.091846 containerd[1466]: time="2026-04-14T12:59:16.091309314Z" level=info msg="shim disconnected" id=dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142 namespace=k8s.io Apr 14 12:59:16.091846 containerd[1466]: time="2026-04-14T12:59:16.091394990Z" level=warning msg="cleaning up after shim disconnected" id=dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142 namespace=k8s.io Apr 14 12:59:16.091846 containerd[1466]: time="2026-04-14T12:59:16.091407446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 12:59:16.233387 kubelet[2611]: E0414 12:59:16.122148 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.879s" Apr 14 12:59:16.681797 kubelet[2611]: I0414 12:59:16.669635 2611 scope.go:122] "RemoveContainer" containerID="8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" Apr 14 12:59:18.466973 kubelet[2611]: E0414 12:59:18.461352 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.302s" Apr 14 12:59:18.714250 containerd[1466]: time="2026-04-14T12:59:18.714032488Z" level=info msg="RemoveContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\"" Apr 14 12:59:20.062408 systemd[1]: Started sshd@35-10.0.0.43:22-10.0.0.1:57104.service - OpenSSH per-connection server daemon (10.0.0.1:57104). Apr 14 12:59:21.316260 kubelet[2611]: E0414 12:59:21.313749 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 12:59:21.520499 containerd[1466]: time="2026-04-14T12:59:21.519372283Z" level=info msg="RemoveContainer for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" returns successfully" Apr 14 12:59:21.741756 containerd[1466]: time="2026-04-14T12:59:21.714439479Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142 delete" error="signal: killed" namespace=k8s.io Apr 14 12:59:21.956865 containerd[1466]: time="2026-04-14T12:59:21.762441010Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142 namespace=k8s.io Apr 14 12:59:22.867264 containerd[1466]: time="2026-04-14T12:59:22.865199525Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142 Apr 14 12:59:23.201957 kubelet[2611]: E0414 12:59:23.039397 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 
12:59:23.699926 containerd[1466]: time="2026-04-14T12:59:23.699392455Z" level=info msg="StopContainer for \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" returns successfully" Apr 14 12:59:24.958360 kubelet[2611]: E0414 12:59:23.311041 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:59:25.424401 kubelet[2611]: E0414 12:59:25.413899 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 12:59:25.768905 containerd[1466]: time="2026-04-14T12:59:25.700213255Z" level=error msg="ContainerStatus for \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\": not found" Apr 14 12:59:28.417968 kubelet[2611]: E0414 12:59:28.410161 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3\": not found" containerID="8d3c1054a6cc9347dba2d780a318571b021087092c2d44730b1a6d81eb6af2a3" Apr 14 12:59:29.148436 sshd[6523]: Accepted publickey for core from 10.0.0.1 port 57104 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 12:59:30.674323 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 12:59:31.342420 kubelet[2611]: E0414 12:59:31.223265 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.757s" Apr 14 12:59:32.531852 systemd-logind[1450]: New session 36 of user core. Apr 14 12:59:32.712381 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 14 12:59:33.655355 containerd[1466]: time="2026-04-14T12:59:33.561136483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 12:59:33.655355 containerd[1466]: time="2026-04-14T12:59:33.561398825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 12:59:33.655355 containerd[1466]: time="2026-04-14T12:59:33.561416309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:59:34.220195 containerd[1466]: time="2026-04-14T12:59:33.877118722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 12:59:34.365534 containerd[1466]: time="2026-04-14T12:59:34.358321460Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}" Apr 14 12:59:35.219442 kubelet[2611]: E0414 12:59:35.032157 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 12:59:37.051680 containerd[1466]: time="2026-04-14T12:59:36.968434661Z" level=info msg="Kill container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\"" Apr 14 12:59:41.150132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080569315.mount: Deactivated successfully. Apr 14 12:59:41.679130 kubelet[2611]: E0414 12:59:41.226798 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 12:59:43.328223 containerd[1466]: time="2026-04-14T12:59:43.321272490Z" level=info msg="CreateContainer within sandbox \"e6e85fec2c8d99964501468473b50536e90439fb2a70be7746d8683556d02abe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289\"" Apr 14 12:59:43.932080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270510122.mount: Deactivated successfully. 
Apr 14 12:59:45.453297 kubelet[2611]: E0414 12:59:45.431767 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 12:59:45.778060 kubelet[2611]: E0414 12:59:45.763984 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 12:59:47.622567 kubelet[2611]: E0414 12:59:47.612511 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.335s" Apr 14 12:59:48.756043 containerd[1466]: time="2026-04-14T12:59:48.531341348Z" level=info msg="StartContainer for \"6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289\"" Apr 14 13:00:08.237634 kubelet[2611]: E0414 13:00:07.124176 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:00:09.232126 systemd[1]: Started cri-containerd-5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4.scope - libcontainer container 5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4. 
Apr 14 13:00:15.357458 kubelet[2611]: E0414 13:00:13.060506 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:00:18.940436 kubelet[2611]: I0414 13:00:18.938339 2611 scope.go:122] "RemoveContainer" containerID="dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142" Apr 14 13:00:19.337070 kubelet[2611]: E0414 13:00:14.489746 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:00:23.731358 containerd[1466]: time="2026-04-14T13:00:23.727740962Z" level=info msg="RemoveContainer for \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\"" Apr 14 13:00:24.026361 kubelet[2611]: E0414 13:00:23.701446 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.975s" Apr 14 13:00:25.222780 systemd[1]: run-containerd-runc-k8s.io-6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289-runc.0XcEk2.mount: Deactivated successfully. Apr 14 13:00:26.057672 containerd[1466]: time="2026-04-14T13:00:26.055676418Z" level=info msg="RemoveContainer for \"dfea1f2839f09d79baa829bb270334b0c6732ff51677645b59f48ef8e7a6d142\" returns successfully" Apr 14 13:00:26.764246 kubelet[2611]: E0414 13:00:26.758795 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:00:26.803196 kubelet[2611]: I0414 13:00:26.776410 2611 scope.go:122] "RemoveContainer" containerID="95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085" Apr 14 13:00:26.951476 systemd[1]: Started cri-containerd-6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289.scope - libcontainer container 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289. Apr 14 13:00:28.327534 sshd[6523]: pam_unix(sshd:session): session closed for user core Apr 14 13:00:28.647874 systemd-logind[1450]: Session 36 logged out. Waiting for processes to exit. Apr 14 13:00:28.783541 systemd[1]: sshd@35-10.0.0.43:22-10.0.0.1:57104.service: Deactivated successfully. Apr 14 13:00:28.856307 systemd[1]: sshd@35-10.0.0.43:22-10.0.0.1:57104.service: Consumed 2.659s CPU time. Apr 14 13:00:28.978206 systemd[1]: session-36.scope: Deactivated successfully. 
Apr 14 13:00:28.978751 systemd[1]: session-36.scope: Consumed 24.734s CPU time. Apr 14 13:00:29.147371 systemd-logind[1450]: Removed session 36. Apr 14 13:00:31.649103 kubelet[2611]: E0414 13:00:30.375516 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 13:00:32.931488 containerd[1466]: time="2026-04-14T13:00:32.930189468Z" level=info msg="RemoveContainer for \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\"" Apr 14 13:00:35.222041 systemd[1]: Started sshd@36-10.0.0.43:22-10.0.0.1:33614.service - OpenSSH per-connection server daemon (10.0.0.1:33614). Apr 14 13:00:35.375427 containerd[1466]: time="2026-04-14T13:00:35.369032586Z" level=error msg="get state for 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289" error="context deadline exceeded: unknown" Apr 14 13:00:35.396189 containerd[1466]: time="2026-04-14T13:00:35.394428768Z" level=info msg="RemoveContainer for \"95160dc3a49ef8c8b494c4dd4066713357c86176e18f3fa11c83be3c2d5db085\" returns successfully" Apr 14 13:00:35.580105 containerd[1466]: time="2026-04-14T13:00:35.469773148Z" level=warning msg="unknown status" status=0 Apr 14 13:00:41.659343 kubelet[2611]: E0414 13:00:41.655245 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:00:42.048481 kubelet[2611]: E0414 13:00:40.926415 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:00:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:00:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:00:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:00:21Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 14 13:00:43.502409 containerd[1466]: time="2026-04-14T13:00:43.453461605Z" level=error msg="get state for 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289" 
error="context deadline exceeded: unknown" Apr 14 13:00:43.641033 containerd[1466]: time="2026-04-14T13:00:43.541193453Z" level=warning msg="unknown status" status=0 Apr 14 13:00:44.789901 kubelet[2611]: E0414 13:00:44.159429 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 13:00:46.962481 containerd[1466]: time="2026-04-14T13:00:46.958568885Z" level=info msg="StartContainer for \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" returns successfully" Apr 14 13:00:48.404042 kubelet[2611]: E0414 13:00:48.400406 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:00:48.722200 sshd[6692]: Accepted publickey for core from 10.0.0.1 port 33614 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 13:00:49.446514 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:00:50.999064 containerd[1466]: time="2026-04-14T13:00:50.992148856Z" level=error msg="get state for 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289" error="context deadline exceeded: unknown" Apr 14 13:00:50.999064 containerd[1466]: time="2026-04-14T13:00:50.992270434Z" level=warning msg="unknown status" status=0 Apr 14 13:00:51.319455 kubelet[2611]: E0414 13:00:51.130858 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.357s" Apr 14 13:00:51.365944 systemd-logind[1450]: New session 37 of user core. Apr 14 13:00:51.867808 systemd[1]: Started session-37.scope - Session 37 of User core. 
Apr 14 13:00:52.524326 kubelet[2611]: E0414 13:00:52.482139 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:00:56.432233 containerd[1466]: time="2026-04-14T13:00:56.430369829Z" level=error msg="get state for 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289" error="context deadline exceeded: unknown" Apr 14 13:00:56.648301 containerd[1466]: time="2026-04-14T13:00:56.459788015Z" level=warning msg="unknown status" status=0 Apr 14 13:00:57.638998 kubelet[2611]: E0414 13:00:57.638042 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.507s" Apr 14 13:00:57.833311 kubelet[2611]: E0414 13:00:57.825404 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:00:58.499348 kubelet[2611]: E0414 13:00:58.459489 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:00:58.963323 kubelet[2611]: E0414 13:00:58.956319 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:00:59.143128 kubelet[2611]: E0414 13:00:59.140452 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:00:59.436435 kubelet[2611]: E0414 13:00:59.434938 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:00:59.524233 kubelet[2611]: E0414 13:00:59.515371 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 13:00:59.954437 containerd[1466]: time="2026-04-14T13:00:59.950348725Z" level=error msg="get state for 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289" error="context deadline exceeded: unknown" Apr 14 13:00:59.954437 containerd[1466]: time="2026-04-14T13:00:59.951680536Z" level=warning msg="unknown status" status=0 Apr 14 13:01:00.351444 kubelet[2611]: E0414 13:01:00.350137 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.678s" Apr 14 13:01:02.565048 kubelet[2611]: E0414 13:01:02.560529 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:01:03.378365 kubelet[2611]: E0414 13:01:03.361920 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 13:01:03.874371 kubelet[2611]: E0414 13:01:03.781236 2611 kubelet.go:2691] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="3.242s" Apr 14 13:01:04.382025 containerd[1466]: time="2026-04-14T13:01:04.039249512Z" level=error msg="get state for 6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289" error="context deadline exceeded: unknown" Apr 14 13:01:04.382025 containerd[1466]: time="2026-04-14T13:01:04.276113311Z" level=warning msg="unknown status" status=0 Apr 14 13:01:05.038146 containerd[1466]: time="2026-04-14T13:01:05.037028548Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 13:01:05.534510 containerd[1466]: time="2026-04-14T13:01:05.352465966Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 14 13:01:05.708524 containerd[1466]: time="2026-04-14T13:01:05.680423052Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 14 13:01:05.743213 containerd[1466]: time="2026-04-14T13:01:05.728753466Z" level=error msg="ttrpc: received message on inactive stream" stream=9 Apr 14 13:01:05.743213 containerd[1466]: time="2026-04-14T13:01:05.729075807Z" level=error msg="ttrpc: received message on inactive stream" stream=11 Apr 14 13:01:05.743213 containerd[1466]: time="2026-04-14T13:01:05.729174321Z" level=error msg="ttrpc: received message on inactive stream" stream=13 Apr 14 13:01:05.851562 kubelet[2611]: W0414 13:01:05.803917 2611 manager.go:1172] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd70d524e6bc561f2082b467706799ed.slice/cri-containerd-6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289.scope WatchSource:0}: containerd task is in unknown state Apr 14 13:01:07.843512 kubelet[2611]: E0414 13:01:07.840459 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:01:08.174311 kubelet[2611]: E0414 13:01:07.551555 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:01:10.693053 kubelet[2611]: E0414 13:01:10.580538 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 13:01:14.252564 containerd[1466]: time="2026-04-14T13:01:14.251189377Z" level=info msg="StartContainer for 
\"6a7e2fae1655e41cef2dfdb5c50ad04067c295136c5f16436150ec1be6d0d289\" returns successfully" Apr 14 13:01:14.422179 kubelet[2611]: E0414 13:01:14.421081 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:01:24.553456 kubelet[2611]: E0414 13:01:24.542150 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:01:25.022034 kubelet[2611]: E0414 13:01:24.960403 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 13:01:26.199374 kubelet[2611]: E0414 13:01:26.179058 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:01:28.959518 kubelet[2611]: E0414 13:01:28.942330 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:01:30.368276 kubelet[2611]: E0414 13:01:30.357437 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="26.57s" Apr 14 13:01:31.096355 kubelet[2611]: E0414 13:01:31.081075 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/coredns-7d764666f9-f44gt.18a639cb2be0dea0\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639cb2be0dea0 kube-system 1042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:44:07 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:01:31.096355 kubelet[2611]: E0414 13:01:31.082743 2611 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{coredns-7d764666f9-f44gt.18a639f6dc56673a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7d764666f9-f44gt,UID:c0e24bdb-6150-4745-b66b-9386ee241a93,APIVersion:v1,ResourceVersion:631,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://192.168.0.3:8181/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,LastTimestamp:2026-04-14 12:47:15.46562745 +0000 UTC m=+363.521746894,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:01:32.246374 kubelet[2611]: E0414 13:01:32.234312 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:01:32.733232 kubelet[2611]: E0414 13:01:32.731447 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.291s" Apr 14 13:01:33.957269 kubelet[2611]: E0414 13:01:33.941742 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:01:35.833888 kubelet[2611]: E0414 13:01:35.820306 2611 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" Apr 14 13:01:35.915367 kubelet[2611]: E0414 13:01:35.868076 2611 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" containerName="coredns" containerID="containerd://deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" gracePeriod=30 Apr 14 13:01:35.915367 kubelet[2611]: E0414 13:01:35.872400 2611 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="coredns" containerID={"Type":"containerd","ID":"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d"} pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 13:01:35.915367 kubelet[2611]: E0414 13:01:35.872833 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" Apr 14 13:01:36.156097 containerd[1466]: time="2026-04-14T13:01:35.830569727Z" level=error msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" to be killed: wait container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\": context canceled" Apr 14 13:01:37.269404 kubelet[2611]: E0414 13:01:37.264269 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.147s" Apr 14 13:01:39.964144 kubelet[2611]: E0414 13:01:39.957000 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:01:40.251255 kubelet[2611]: E0414 13:01:40.225484 2611 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 13:01:41.326035 kubelet[2611]: E0414 13:01:41.319473 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:01:42.775205 sshd[6692]: pam_unix(sshd:session): session closed for user core Apr 14 13:01:43.385913 systemd[1]: sshd@36-10.0.0.43:22-10.0.0.1:33614.service: Deactivated successfully. Apr 14 13:01:43.505581 systemd[1]: sshd@36-10.0.0.43:22-10.0.0.1:33614.service: Consumed 3.508s CPU time. Apr 14 13:01:43.837466 systemd[1]: session-37.scope: Deactivated successfully. Apr 14 13:01:43.914237 systemd[1]: session-37.scope: Consumed 22.406s CPU time. Apr 14 13:01:44.044674 kubelet[2611]: E0414 13:01:44.042432 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:01:44.069094 systemd-logind[1450]: Session 37 logged out. Waiting for processes to exit. Apr 14 13:01:44.256889 systemd-logind[1450]: Removed session 37. 
Apr 14 13:01:44.725036 kubelet[2611]: E0414 13:01:44.696371 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.649s" Apr 14 13:01:45.505790 kubelet[2611]: E0414 13:01:45.484193 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:01:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:01:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:01:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:01:35Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:01:49.174444 systemd[1]: Started sshd@37-10.0.0.43:22-10.0.0.1:42892.service - OpenSSH per-connection server daemon (10.0.0.1:42892). Apr 14 13:01:52.718109 kubelet[2611]: E0414 13:01:52.704048 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 13:01:56.654833 kubelet[2611]: E0414 13:01:56.651380 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 14 13:01:57.305322 sshd[6804]: Accepted publickey for core from 10.0.0.1 port 42892 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 13:01:57.486243 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:01:58.030160 systemd-logind[1450]: New session 38 of user core. Apr 14 13:01:58.108456 systemd[1]: Started session-38.scope - Session 38 of User core. 
Apr 14 13:02:00.227572 kubelet[2611]: E0414 13:02:00.226310 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.514s" Apr 14 13:02:01.152075 kubelet[2611]: E0414 13:02:01.151506 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:02:01.195084 containerd[1466]: time="2026-04-14T13:02:01.190333787Z" level=info msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" with timeout 30 (s)" Apr 14 13:02:01.376278 containerd[1466]: time="2026-04-14T13:02:01.362784934Z" level=info msg="Skipping the sending of signal terminated to container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" because a prior stop with timeout>0 request already sent the signal" Apr 14 13:02:02.268942 kubelet[2611]: E0414 13:02:02.266214 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:02:05.111441 kubelet[2611]: E0414 13:02:05.100531 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 13:02:05.456535 kubelet[2611]: E0414 13:02:05.432121 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:02:08.511451 kubelet[2611]: E0414 13:02:08.509345 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 13:02:10.937231 kubelet[2611]: E0414 13:02:10.937062 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.695s" Apr 14 13:02:13.645910 kubelet[2611]: E0414 13:02:13.640282 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.564s" Apr 14 13:02:15.226827 kubelet[2611]: E0414 13:02:15.218460 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" 
podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 13:02:20.206899 kubelet[2611]: E0414 13:02:20.170100 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:02:20.564282 kubelet[2611]: E0414 13:02:20.546215 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:02:21.160358 kubelet[2611]: E0414 13:02:21.158353 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:02:23.435404 kubelet[2611]: E0414 13:02:23.435135 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.343s" Apr 14 13:02:26.463400 sshd[6804]: pam_unix(sshd:session): session closed for user core Apr 14 13:02:27.237462 systemd[1]: sshd@37-10.0.0.43:22-10.0.0.1:42892.service: Deactivated successfully. Apr 14 13:02:27.372030 systemd[1]: sshd@37-10.0.0.43:22-10.0.0.1:42892.service: Consumed 2.987s CPU time. Apr 14 13:02:27.854187 systemd[1]: session-38.scope: Deactivated successfully. Apr 14 13:02:27.927259 systemd[1]: session-38.scope: Consumed 11.227s CPU time. Apr 14 13:02:28.262013 systemd-logind[1450]: Session 38 logged out. Waiting for processes to exit. Apr 14 13:02:28.860430 systemd-logind[1450]: Removed session 38. Apr 14 13:02:29.375447 kubelet[2611]: E0414 13:02:29.321234 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.776s" Apr 14 13:02:31.279563 containerd[1466]: time="2026-04-14T13:02:31.277252569Z" level=info msg="TaskExit event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540}" Apr 14 13:02:31.425480 containerd[1466]: time="2026-04-14T13:02:31.410578340Z" level=info msg="Kill container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\"" Apr 14 13:02:33.000015 systemd[1]: Started sshd@38-10.0.0.43:22-10.0.0.1:39582.service - OpenSSH per-connection server daemon (10.0.0.1:39582). 
Apr 14 13:02:34.133530 containerd[1466]: time="2026-04-14T13:02:34.132080706Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 13:02:34.361337 containerd[1466]: time="2026-04-14T13:02:34.159808002Z" level=warning msg="unknown status" status=0 Apr 14 13:02:34.983037 kubelet[2611]: E0414 13:02:33.104545 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 13:02:36.358690 kubelet[2611]: E0414 13:02:36.330883 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 13:02:36.358690 kubelet[2611]: E0414 13:02:36.333058 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 13:02:36.564460 kubelet[2611]: E0414 13:02:36.425207 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:02:36.738396 kubelet[2611]: E0414 13:02:36.630005 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:02:36.858711 containerd[1466]: time="2026-04-14T13:02:36.682541407Z" level=error msg="get state for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="context deadline exceeded: unknown" Apr 14 13:02:37.233503 kubelet[2611]: E0414 13:02:37.221370 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:02:37.233503 kubelet[2611]: I0414 13:02:37.231371 2611 request.go:752] "Waited before sending request" delay="1.963144618s" reason="retries: 10, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns&resourceVersion=1034&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=33m19s&timeoutSeconds=1999&watch=true\": net/http: TLS handshake timeout" verb="GET" 
URL="https://10.0.0.43:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns&resourceVersion=1034&resourceVersionMatch=NotOlderThan&sendInitialEvents=true&timeout=33m19s&timeoutSeconds=1999&watch=true" Apr 14 13:02:37.459088 containerd[1466]: time="2026-04-14T13:02:37.246573111Z" level=warning msg="unknown status" status=0 Apr 14 13:02:39.429156 kubelet[2611]: E0414 13:02:39.421180 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:02:40.522434 kubelet[2611]: E0414 13:02:40.466348 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:02:41.451772 containerd[1466]: time="2026-04-14T13:02:41.448214859Z" level=error msg="Failed to handle backOff event container_id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" id:\"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" pid:3582 exited_at:{seconds:1776171119 nanos:107692540} for deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 13:02:42.420519 sshd[6876]: Accepted publickey for core from 10.0.0.1 port 39582 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 13:02:42.884836 containerd[1466]: time="2026-04-14T13:02:42.883418091Z" level=error msg="ttrpc: received message on inactive stream" stream=151 Apr 14 13:02:43.256754 sshd[6876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:02:44.262274 containerd[1466]: time="2026-04-14T13:02:44.237542866Z" level=error msg="ttrpc: received message on inactive stream" stream=155 Apr 14 13:02:44.446371 containerd[1466]: time="2026-04-14T13:02:44.360850541Z" level=error msg="ttrpc: received message on inactive stream" stream=153 Apr 14 13:02:44.473554 systemd-logind[1450]: New session 39 of user core. Apr 14 13:02:44.653466 systemd[1]: Started session-39.scope - Session 39 of User core. 
Apr 14 13:02:45.708890 containerd[1466]: time="2026-04-14T13:02:45.707692231Z" level=info msg="StopContainer for \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" with timeout 30 (s)" Apr 14 13:02:46.157924 containerd[1466]: time="2026-04-14T13:02:45.983391465Z" level=info msg="Stop container \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" with signal terminated" Apr 14 13:02:46.647833 kubelet[2611]: E0414 13:02:46.628454 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.425s" Apr 14 13:02:48.414303 kubelet[2611]: E0414 13:02:48.409141 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 13:02:50.363766 kubelet[2611]: E0414 13:02:50.363208 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.697s" Apr 14 13:02:54.945194 kubelet[2611]: E0414 13:02:54.944118 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.421s" Apr 14 13:02:55.870029 systemd[1]: cri-containerd-5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4.scope: Deactivated successfully. Apr 14 13:02:55.935778 systemd[1]: cri-containerd-5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4.scope: Consumed 26.725s CPU time. Apr 14 13:02:57.390712 kubelet[2611]: E0414 13:02:57.388850 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:02:57.868798 kubelet[2611]: E0414 13:02:57.851459 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:02:59.341382 kubelet[2611]: E0414 13:02:59.335041 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 13:02:59.564306 kubelet[2611]: E0414 13:02:59.558082 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.293s" 
Apr 14 13:03:01.955672 kubelet[2611]: E0414 13:03:01.933484 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:02:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:02:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:02:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:02:50Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 14 13:03:07.430863 containerd[1466]: time="2026-04-14T13:03:07.422841102Z" level=error msg="failed to handle container TaskExit event container_id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" pid:6623 exited_at:{seconds:1776171777 nanos:45534128}" error="failed to stop container: context deadline exceeded: unknown" Apr 14 13:03:08.606450 containerd[1466]: time="2026-04-14T13:03:08.555629596Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 14 13:03:09.313350 containerd[1466]: time="2026-04-14T13:03:09.312868983Z" level=info msg="TaskExit event container_id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" pid:6623 exited_at:{seconds:1776171777 nanos:45534128}" Apr 14 13:03:10.107449 containerd[1466]: time="2026-04-14T13:03:09.926382879Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 14 13:03:10.418725 kubelet[2611]: E0414 13:03:10.048191 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.368s" Apr 14 13:03:10.569311 kubelet[2611]: E0414 13:03:10.567974 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:03:13.636278 kubelet[2611]: E0414 13:03:13.633548 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:03:14.020864 kubelet[2611]: E0414 13:03:13.892504 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.82s" Apr 14 13:03:15.949882 kubelet[2611]: E0414 13:03:15.947494 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:03:16.113243 kubelet[2611]: E0414 13:03:16.112969 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.988s" Apr 14 13:03:17.011111 kubelet[2611]: E0414 13:03:16.924525 2611 status_manager.go:1045] "Failed to get status for pod" err="Get 
\"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:03:18.528492 kubelet[2611]: E0414 13:03:18.527690 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:03:19.366562 containerd[1466]: time="2026-04-14T13:03:19.364023653Z" level=error msg="Failed to handle backOff event container_id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" pid:6623 exited_at:{seconds:1776171777 nanos:45534128} for 5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 14 13:03:19.581261 kubelet[2611]: E0414 13:03:19.529292 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.471s" Apr 14 13:03:19.825811 containerd[1466]: time="2026-04-14T13:03:19.824404626Z" level=info msg="Kill container \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\"" Apr 14 13:03:20.226824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4-rootfs.mount: Deactivated successfully. Apr 14 13:03:20.344284 containerd[1466]: time="2026-04-14T13:03:20.341292327Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 14 13:03:21.162001 sshd[6876]: pam_unix(sshd:session): session closed for user core Apr 14 13:03:21.932360 systemd[1]: sshd@38-10.0.0.43:22-10.0.0.1:39582.service: Deactivated successfully. Apr 14 13:03:21.949519 systemd[1]: sshd@38-10.0.0.43:22-10.0.0.1:39582.service: Consumed 2.520s CPU time. Apr 14 13:03:22.215372 systemd[1]: session-39.scope: Deactivated successfully. Apr 14 13:03:22.221926 systemd[1]: session-39.scope: Consumed 15.672s CPU time. Apr 14 13:03:22.305874 systemd-logind[1450]: Session 39 logged out. Waiting for processes to exit. Apr 14 13:03:22.400863 containerd[1466]: time="2026-04-14T13:03:22.272163128Z" level=info msg="TaskExit event container_id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" id:\"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" pid:6623 exited_at:{seconds:1776171777 nanos:45534128}" Apr 14 13:03:22.551530 systemd-logind[1450]: Removed session 39. 
Apr 14 13:03:24.073378 kubelet[2611]: E0414 13:03:24.068965 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.019s" Apr 14 13:03:24.112558 kubelet[2611]: E0414 13:03:23.968165 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 13:03:25.220963 kubelet[2611]: E0414 13:03:25.218235 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.149s" Apr 14 13:03:27.142390 systemd[1]: Started sshd@39-10.0.0.43:22-10.0.0.1:39736.service - OpenSSH per-connection server daemon (10.0.0.1:39736). Apr 14 13:03:28.650119 kubelet[2611]: E0414 13:03:28.636080 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 13:03:31.259298 kubelet[2611]: E0414 13:03:31.254428 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.21s" Apr 14 13:03:31.786131 sshd[7016]: Accepted publickey for core from 10.0.0.1 port 39736 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 13:03:32.088186 sshd[7016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:03:32.094419 containerd[1466]: time="2026-04-14T13:03:32.093429758Z" level=info msg="shim disconnected" id=5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4 namespace=k8s.io Apr 14 13:03:32.094419 containerd[1466]: time="2026-04-14T13:03:32.093504849Z" level=warning msg="cleaning up after shim disconnected" id=5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4 namespace=k8s.io Apr 14 13:03:32.094419 containerd[1466]: time="2026-04-14T13:03:32.093511239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:03:32.751632 systemd-logind[1450]: New session 40 of user core. Apr 14 13:03:32.796178 containerd[1466]: time="2026-04-14T13:03:32.751792184Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4 Apr 14 13:03:32.843497 systemd[1]: Started session-40.scope - Session 40 of User core. 
Apr 14 13:03:33.072485 containerd[1466]: time="2026-04-14T13:03:33.070326720Z" level=info msg="StopContainer for \"5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4\" returns successfully" Apr 14 13:03:33.252805 kubelet[2611]: E0414 13:03:33.250460 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:03:33.257957 kubelet[2611]: E0414 13:03:33.252502 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:03:33.376553 containerd[1466]: time="2026-04-14T13:03:33.375842421Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4 delete" error="exit status 1" namespace=k8s.io Apr 14 13:03:33.377777 containerd[1466]: time="2026-04-14T13:03:33.377747665Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=5265bd74eb305dedcf8d69217808cacaf7d6f94a3b93058deb31bc35850717f4 namespace=k8s.io Apr 14 13:03:33.842427 containerd[1466]: time="2026-04-14T13:03:33.839491279Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for container &ContainerMetadata{Name:coredns,Attempt:3,}" Apr 14 13:03:34.356772 kubelet[2611]: E0414 13:03:34.344328 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: TLS handshake timeout (Client.Timeout exceeded while awaiting headers)" Apr 14 13:03:35.029919 containerd[1466]: time="2026-04-14T13:03:35.009053678Z" level=info msg="CreateContainer within sandbox \"113e216a6ecbef12a69b5c11f9e43873e72fef1eac5ccbea9a830f305674834a\" for &ContainerMetadata{Name:coredns,Attempt:3,} returns container id \"cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d\"" Apr 14 13:03:35.534290 containerd[1466]: time="2026-04-14T13:03:35.532931834Z" level=info msg="StartContainer for \"cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d\"" Apr 14 13:03:35.563145 kubelet[2611]: E0414 13:03:35.482567 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.305s" Apr 14 13:03:38.340304 containerd[1466]: time="2026-04-14T13:03:38.278218094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:03:38.340304 containerd[1466]: time="2026-04-14T13:03:38.337880111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:03:38.340304 containerd[1466]: time="2026-04-14T13:03:38.338054595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:03:38.408299 containerd[1466]: time="2026-04-14T13:03:38.361012848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:03:39.251608 systemd[1]: run-containerd-runc-k8s.io-cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d-runc.DrNBsh.mount: Deactivated successfully. Apr 14 13:03:39.355209 kubelet[2611]: E0414 13:03:39.355003 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:03:39.580871 systemd[1]: Started cri-containerd-cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d.scope - libcontainer container cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d. Apr 14 13:03:39.781518 kubelet[2611]: E0414 13:03:39.670784 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk" Apr 14 13:03:39.848521 kubelet[2611]: E0414 13:03:39.814098 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:03:41.555108 kubelet[2611]: E0414 13:03:41.427573 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.369s" Apr 14 13:03:41.710458 containerd[1466]: time="2026-04-14T13:03:41.692943450Z" level=error msg="get state for cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d" error="context deadline exceeded: unknown" Apr 14 13:03:41.710458 containerd[1466]: time="2026-04-14T13:03:41.693565086Z" level=warning msg="unknown status" status=0 Apr 14 13:03:43.172401 containerd[1466]: time="2026-04-14T13:03:43.170097111Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 13:03:43.375563 kubelet[2611]: E0414 13:03:43.374414 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:03:44.478520 kubelet[2611]: E0414 13:03:44.471896 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Apr 14 13:03:44.624059 kubelet[2611]: E0414 13:03:44.496649 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count" Apr 14 13:03:48.900735 sshd[7016]: pam_unix(sshd:session): session closed for user core Apr 14 13:03:49.608779 systemd[1]: sshd@39-10.0.0.43:22-10.0.0.1:39736.service: Deactivated successfully. Apr 14 13:03:49.758409 systemd[1]: sshd@39-10.0.0.43:22-10.0.0.1:39736.service: Consumed 1.341s CPU time. Apr 14 13:03:50.294869 kubelet[2611]: E0414 13:03:50.208802 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 13:03:50.317829 systemd[1]: session-40.scope: Deactivated successfully. Apr 14 13:03:50.376533 systemd[1]: session-40.scope: Consumed 3.558s CPU time. Apr 14 13:03:50.779280 systemd-logind[1450]: Session 40 logged out. Waiting for processes to exit. Apr 14 13:03:51.251177 systemd-logind[1450]: Removed session 40. Apr 14 13:03:52.040764 containerd[1466]: time="2026-04-14T13:03:51.966549409Z" level=info msg="StartContainer for \"cd468839a383e09afc6e1788ffe6f21ddc87a1806050f1f7bb5c0579761de61d\" returns successfully" Apr 14 13:03:52.712876 kubelet[2611]: E0414 13:03:52.413408 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:03:55.425980 kubelet[2611]: E0414 13:03:55.421795 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.274s" Apr 14 13:03:56.611711 systemd[1]: Started sshd@40-10.0.0.43:22-10.0.0.1:39096.service - OpenSSH per-connection server daemon (10.0.0.1:39096). 
Apr 14 13:03:57.821342 kubelet[2611]: E0414 13:03:57.813851 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:00.518413 kubelet[2611]: E0414 13:04:00.474087 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:04:02.361313 kubelet[2611]: E0414 13:04:02.356191 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:04:03.707964 kubelet[2611]: E0414 13:04:03.681086 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:04.215255 kubelet[2611]: E0414 13:04:04.174624 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.753s" Apr 14 13:04:10.971527 kubelet[2611]: E0414 13:04:10.966152 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: TLS handshake timeout" interval="7s" Apr 14 13:04:13.368515 sshd[7143]: Accepted publickey for core from 10.0.0.1 port 39096 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM Apr 14 13:04:14.222544 kubelet[2611]: E0414 13:04:11.945141 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:03:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:03:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:03:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:03:59Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 14 13:04:14.475884 sshd[7143]: pam_unix(sshd:session): session opened for user core(uid=500) 
by core(uid=0) Apr 14 13:04:15.910153 kubelet[2611]: E0414 13:04:15.372937 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost" Apr 14 13:04:15.922250 systemd-logind[1450]: New session 41 of user core. Apr 14 13:04:16.036559 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 14 13:04:19.975345 kubelet[2611]: E0414 13:04:19.954692 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.71s" Apr 14 13:04:22.238105 kubelet[2611]: E0414 13:04:22.237662 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:24.419947 kubelet[2611]: E0414 13:04:24.417062 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 14 13:04:24.842371 kubelet[2611]: E0414 13:04:24.841141 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.806s" Apr 14 13:04:24.954510 kubelet[2611]: E0414 13:04:24.924223 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:26.278812 kubelet[2611]: E0414 13:04:26.267974 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:26.435321 kubelet[2611]: E0414 13:04:26.434871 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.564s" Apr 14 13:04:27.612995 kubelet[2611]: E0414 13:04:27.603061 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:04:28.164388 kubelet[2611]: E0414 13:04:28.072964 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost" Apr 14 13:04:29.744993 kubelet[2611]: E0414 13:04:29.742765 2611 controller.go:201] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 14 13:04:30.164216 kubelet[2611]: E0414 13:04:30.042150 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.796s" Apr 14 13:04:31.188002 kubelet[2611]: E0414 13:04:31.185122 2611 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" Apr 14 13:04:31.282426 kubelet[2611]: E0414 13:04:31.205552 2611 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" containerName="coredns" containerID="containerd://deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d" gracePeriod=30 Apr 14 13:04:31.282426 kubelet[2611]: E0414 13:04:31.206099 2611 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="coredns" containerID={"Type":"containerd","ID":"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d"} pod="kube-system/coredns-7d764666f9-f44gt" Apr 14 13:04:31.282426 kubelet[2611]: E0414 13:04:31.206224 2611 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7d764666f9-f44gt" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" Apr 14 13:04:31.313758 containerd[1466]: time="2026-04-14T13:04:31.191429299Z" level=error msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" to be killed: wait container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\": context deadline exceeded" Apr 14 13:04:31.326653 kubelet[2611]: E0414 13:04:31.317176 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.275s" Apr 14 13:04:33.344572 kubelet[2611]: E0414 13:04:33.342143 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.212s" Apr 14 13:04:34.615239 kubelet[2611]: E0414 13:04:34.602229 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 14 13:04:36.312475 kubelet[2611]: E0414 13:04:36.311085 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.239s" Apr 14 13:04:38.602703 kubelet[2611]: E0414 13:04:38.599443 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": net/http: TLS handshake timeout" 
podUID="bd70d524e6bc561f2082b467706799ed" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:04:40.173106 kubelet[2611]: E0414 13:04:40.172441 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:44.939648 kubelet[2611]: E0414 13:04:44.937686 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 14 13:04:46.905576 kubelet[2611]: E0414 13:04:46.905085 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:04:47.384213 kubelet[2611]: E0414 13:04:47.378874 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:04:47.414714 containerd[1466]: time="2026-04-14T13:04:47.409172407Z" level=info msg="StopContainer for \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" with timeout 30 (s)" Apr 14 13:04:47.452194 containerd[1466]: time="2026-04-14T13:04:47.446491195Z" level=info msg="Skipping the sending of signal terminated to container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\" because a prior stop with timeout>0 request already sent the signal" Apr 14 13:04:47.472102 sshd[7143]: pam_unix(sshd:session): session closed for user core Apr 14 13:04:47.848547 kubelet[2611]: E0414 13:04:47.843986 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:04:48.054924 systemd[1]: sshd@40-10.0.0.43:22-10.0.0.1:39096.service: Deactivated successfully. Apr 14 13:04:48.150806 systemd[1]: sshd@40-10.0.0.43:22-10.0.0.1:39096.service: Consumed 4.164s CPU time. Apr 14 13:04:48.257679 systemd[1]: session-41.scope: Deactivated successfully. Apr 14 13:04:48.258830 systemd[1]: session-41.scope: Consumed 12.512s CPU time. Apr 14 13:04:48.409967 systemd-logind[1450]: Session 41 logged out. Waiting for processes to exit. Apr 14 13:04:48.417672 systemd-logind[1450]: Removed session 41. 
Apr 14 13:04:48.766983 kubelet[2611]: E0414 13:04:48.732412 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="ed5e991544c38f12435d82988fd12fee" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:04:52.820221 systemd[1]: Started sshd@41-10.0.0.43:22-10.0.0.1:35948.service - OpenSSH per-connection server daemon (10.0.0.1:35948).
Apr 14 13:04:55.027192 kubelet[2611]: E0414 13:04:55.023270 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.43:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 14 13:04:55.027192 kubelet[2611]: E0414 13:04:55.023480 2611 kubelet_node_status.go:461] "Unable to update node status" err="update node status exceeds retry count"
Apr 14 13:04:55.776287 kubelet[2611]: E0414 13:04:55.767726 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.708s"
Apr 14 13:04:57.233065 kubelet[2611]: E0414 13:04:57.231791 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.181s"
Apr 14 13:04:58.062429 sshd[7268]: Accepted publickey for core from 10.0.0.1 port 35948 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:04:58.653204 sshd[7268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:04:58.812155 kubelet[2611]: E0414 13:04:58.810579 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" podUID="3566c1d7ed03bb3c60facf009a5678dd" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:04:59.156713 kubelet[2611]: E0414 13:04:59.156497 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.114s"
Apr 14 13:04:59.277070 systemd-logind[1450]: New session 42 of user core.
Apr 14 13:04:59.335679 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 14 13:05:03.140969 kubelet[2611]: E0414 13:05:03.137218 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.08s"
Apr 14 13:05:04.366181 kubelet[2611]: E0414 13:05:04.346553 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: TLS handshake timeout" interval="7s"
Apr 14 13:05:08.260006 kubelet[2611]: E0414 13:05:08.257422 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.12s"
Apr 14 13:05:08.609624 kubelet[2611]: E0414 13:05:08.606444 2611 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a639b182c0ba56\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a639b182c0ba56 kube-system 1052 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:3566c1d7ed03bb3c60facf009a5678dd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 12:42:17 +0000 UTC,LastTimestamp:2026-04-14 12:47:15.706347617 +0000 UTC m=+363.762467067,Count:36,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 13:05:09.228774 kubelet[2611]: E0414 13:05:09.176984 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-spttk\": net/http: TLS handshake timeout" podUID="fb975314-b950-4dd9-9942-b30d52d99a2a" pod="kube-system/coredns-7d764666f9-spttk"
Apr 14 13:05:09.474537 kubelet[2611]: E0414 13:05:09.474286 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:14.648538 kubelet[2611]: E0414 13:05:14.647318 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:17.459541 containerd[1466]: time="2026-04-14T13:05:17.450223155Z" level=info msg="Kill container \"deefeb9d7fce7a43dd4dcf925e6b98c76823dace7f986803922e1451a289da2d\""
Apr 14 13:05:18.672469 kubelet[2611]: E0414 13:05:18.657065 2611 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:05:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:05:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:05:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-14T13:05:08Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.43:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded"
Apr 14 13:05:19.294526 kubelet[2611]: E0414 13:05:19.287098 2611 status_manager.go:1045] "Failed to get status for pod" err="Get \"https://10.0.0.43:6443/api/v1/namespaces/kube-system/pods/coredns-7d764666f9-f44gt\": net/http: TLS handshake timeout" podUID="c0e24bdb-6150-4745-b66b-9386ee241a93" pod="kube-system/coredns-7d764666f9-f44gt"
Apr 14 13:05:19.734230 kubelet[2611]: E0414 13:05:19.710750 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.666s"
Apr 14 13:05:21.580991 sshd[7268]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:21.955141 kubelet[2611]: E0414 13:05:21.946038 2611 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 13:05:22.008250 systemd[1]: sshd@41-10.0.0.43:22-10.0.0.1:35948.service: Deactivated successfully.
Apr 14 13:05:22.034966 systemd[1]: sshd@41-10.0.0.43:22-10.0.0.1:35948.service: Consumed 1.913s CPU time.
Apr 14 13:05:22.258179 systemd[1]: session-42.scope: Deactivated successfully.
Apr 14 13:05:22.269916 systemd[1]: session-42.scope: Consumed 7.001s CPU time.
Apr 14 13:05:22.279855 systemd-logind[1450]: Session 42 logged out. Waiting for processes to exit.
Apr 14 13:05:22.483226 systemd-logind[1450]: Removed session 42.
Apr 14 13:05:23.779236 kubelet[2611]: E0414 13:05:23.776385 2611 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.67s"
Apr 14 13:05:26.594099 systemd[1]: Started sshd@42-10.0.0.43:22-10.0.0.1:37054.service - OpenSSH per-connection server daemon (10.0.0.1:37054).
Apr 14 13:05:26.947755 sshd[7365]: Accepted publickey for core from 10.0.0.1 port 37054 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:05:26.972300 sshd[7365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:05:27.068981 systemd-logind[1450]: New session 43 of user core.
Apr 14 13:05:27.084069 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 14 13:05:28.122218 sshd[7365]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:28.178193 systemd[1]: sshd@42-10.0.0.43:22-10.0.0.1:37054.service: Deactivated successfully.
Apr 14 13:05:28.250547 systemd[1]: session-43.scope: Deactivated successfully.
Apr 14 13:05:28.260581 systemd-logind[1450]: Session 43 logged out. Waiting for processes to exit.
Apr 14 13:05:28.262196 systemd-logind[1450]: Removed session 43.
Apr 14 13:05:33.141268 systemd[1]: Started sshd@43-10.0.0.43:22-10.0.0.1:49026.service - OpenSSH per-connection server daemon (10.0.0.1:49026).
Apr 14 13:05:33.284020 sshd[7401]: Accepted publickey for core from 10.0.0.1 port 49026 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:05:33.287641 sshd[7401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:05:33.299647 systemd-logind[1450]: New session 44 of user core.
Apr 14 13:05:33.319828 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 14 13:05:33.781069 sshd[7401]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:33.806504 systemd[1]: sshd@43-10.0.0.43:22-10.0.0.1:49026.service: Deactivated successfully.
Apr 14 13:05:33.862770 systemd[1]: session-44.scope: Deactivated successfully.
Apr 14 13:05:33.874175 systemd-logind[1450]: Session 44 logged out. Waiting for processes to exit.
Apr 14 13:05:33.888657 systemd-logind[1450]: Removed session 44.
Apr 14 13:05:36.131977 kubelet[2611]: E0414 13:05:36.131762 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:36.621305 kubelet[2611]: E0414 13:05:36.621032 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:36.929260 kubelet[2611]: E0414 13:05:36.928188 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:38.060854 kubelet[2611]: E0414 13:05:38.060412 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:38.860717 systemd[1]: Started sshd@44-10.0.0.43:22-10.0.0.1:49038.service - OpenSSH per-connection server daemon (10.0.0.1:49038).
Apr 14 13:05:39.149715 sshd[7436]: Accepted publickey for core from 10.0.0.1 port 49038 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:05:39.162433 sshd[7436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:05:39.310907 systemd-logind[1450]: New session 45 of user core.
Apr 14 13:05:39.362403 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 14 13:05:41.193107 sshd[7436]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:41.205212 systemd[1]: sshd@44-10.0.0.43:22-10.0.0.1:49038.service: Deactivated successfully.
Apr 14 13:05:41.223167 systemd[1]: session-45.scope: Deactivated successfully.
Apr 14 13:05:41.223520 systemd[1]: session-45.scope: Consumed 1.287s CPU time.
Apr 14 13:05:41.224197 systemd-logind[1450]: Session 45 logged out. Waiting for processes to exit.
Apr 14 13:05:41.225295 systemd-logind[1450]: Removed session 45.
Apr 14 13:05:46.278369 systemd[1]: Started sshd@45-10.0.0.43:22-10.0.0.1:50766.service - OpenSSH per-connection server daemon (10.0.0.1:50766).
Apr 14 13:05:46.399723 sshd[7491]: Accepted publickey for core from 10.0.0.1 port 50766 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:05:46.402100 sshd[7491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:05:46.433093 systemd-logind[1450]: New session 46 of user core.
Apr 14 13:05:46.444339 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 14 13:05:46.767789 sshd[7491]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:46.794628 systemd[1]: sshd@45-10.0.0.43:22-10.0.0.1:50766.service: Deactivated successfully.
Apr 14 13:05:46.804768 systemd[1]: session-46.scope: Deactivated successfully.
Apr 14 13:05:46.806530 systemd-logind[1450]: Session 46 logged out. Waiting for processes to exit.
Apr 14 13:05:46.809770 systemd-logind[1450]: Removed session 46.
Apr 14 13:05:47.054698 kubelet[2611]: E0414 13:05:47.054580 2611 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:05:51.785119 systemd[1]: Started sshd@46-10.0.0.43:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780).
Apr 14 13:05:51.932078 sshd[7530]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:05:51.934437 sshd[7530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:05:51.962098 systemd-logind[1450]: New session 47 of user core.
Apr 14 13:05:51.981688 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 14 13:05:52.232823 sshd[7530]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:52.243125 systemd[1]: sshd@46-10.0.0.43:22-10.0.0.1:55780.service: Deactivated successfully.
Apr 14 13:05:52.260098 systemd[1]: session-47.scope: Deactivated successfully.
Apr 14 13:05:52.261320 systemd-logind[1450]: Session 47 logged out. Waiting for processes to exit.
Apr 14 13:05:52.262490 systemd-logind[1450]: Removed session 47.
Apr 14 13:05:57.394337 systemd[1]: Started sshd@47-10.0.0.43:22-10.0.0.1:55790.service - OpenSSH per-connection server daemon (10.0.0.1:55790).
Apr 14 13:05:57.849116 sshd[7567]: Accepted publickey for core from 10.0.0.1 port 55790 ssh2: RSA SHA256:qzhFhta2jMUFpsUMpLJ2lZjvWQxYMUNkIBekZ4ekVbM
Apr 14 13:05:57.862619 sshd[7567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:05:57.957056 systemd-logind[1450]: New session 48 of user core.
Apr 14 13:05:57.988186 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 14 13:05:58.877616 sshd[7567]: pam_unix(sshd:session): session closed for user core
Apr 14 13:05:58.887748 systemd[1]: sshd@47-10.0.0.43:22-10.0.0.1:55790.service: Deactivated successfully.
Apr 14 13:05:58.904146 systemd[1]: session-48.scope: Deactivated successfully.
Apr 14 13:05:58.910966 systemd-logind[1450]: Session 48 logged out. Waiting for processes to exit.
Apr 14 13:05:58.912561 systemd-logind[1450]: Removed session 48.