Apr 16 04:16:17.254210 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 04:16:17.254241 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 04:16:17.254255 kernel: BIOS-provided physical RAM map:
Apr 16 04:16:17.254263 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 04:16:17.254269 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 04:16:17.254277 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 04:16:17.254285 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 04:16:17.254293 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 04:16:17.254300 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 04:16:17.254310 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 04:16:17.254318 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 04:16:17.254325 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 04:16:17.254937 kernel: NX (Execute Disable) protection: active
Apr 16 04:16:17.254996 kernel: APIC: Static calls initialized
Apr 16 04:16:17.255038 kernel: SMBIOS 2.8 present.
Apr 16 04:16:17.255079 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 04:16:17.255088 kernel: Hypervisor detected: KVM
Apr 16 04:16:17.255096 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 04:16:17.255104 kernel: kvm-clock: using sched offset of 10851774448 cycles
Apr 16 04:16:17.255113 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 04:16:17.255122 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 04:16:17.255130 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 04:16:17.255139 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 04:16:17.255146 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 04:16:17.255157 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 04:16:17.255165 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 04:16:17.255172 kernel: Using GB pages for direct mapping
Apr 16 04:16:17.255180 kernel: ACPI: Early table checksum verification disabled
Apr 16 04:16:17.255188 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 04:16:17.255195 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255203 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255211 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255219 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 04:16:17.255230 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255238 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255246 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255253 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:16:17.255262 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 04:16:17.255270 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 04:16:17.255278 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 04:16:17.255293 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 04:16:17.255301 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 04:16:17.255309 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 04:16:17.255318 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 04:16:17.255326 kernel: No NUMA configuration found
Apr 16 04:16:17.255335 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 04:16:17.255343 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 16 04:16:17.255355 kernel: Zone ranges:
Apr 16 04:16:17.255363 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 04:16:17.255372 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 04:16:17.255381 kernel: Normal empty
Apr 16 04:16:17.255390 kernel: Movable zone start for each node
Apr 16 04:16:17.255398 kernel: Early memory node ranges
Apr 16 04:16:17.255407 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 04:16:17.255415 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 04:16:17.255423 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 04:16:17.255432 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 04:16:17.255443 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 04:16:17.255464 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 04:16:17.255473 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 04:16:17.255481 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 04:16:17.255490 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 04:16:17.255498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 04:16:17.255507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 04:16:17.255517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 04:16:17.255526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 04:16:17.255538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 04:16:17.255547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 04:16:17.255556 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 04:16:17.255565 kernel: TSC deadline timer available
Apr 16 04:16:17.255573 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 16 04:16:17.255581 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 04:16:17.255589 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 04:16:17.255598 kernel: kvm-guest: setup PV sched yield
Apr 16 04:16:17.255617 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 04:16:17.255630 kernel: Booting paravirtualized kernel on KVM
Apr 16 04:16:17.255639 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 04:16:17.255647 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 04:16:17.255655 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 16 04:16:17.255663 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 16 04:16:17.255672 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 04:16:17.255679 kernel: kvm-guest: PV spinlocks enabled
Apr 16 04:16:17.255685 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 04:16:17.255691 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 04:16:17.255700 kernel: random: crng init done
Apr 16 04:16:17.255705 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 04:16:17.255710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 04:16:17.255715 kernel: Fallback order for Node 0: 0
Apr 16 04:16:17.255720 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 16 04:16:17.255725 kernel: Policy zone: DMA32
Apr 16 04:16:17.255730 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 04:16:17.255735 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved)
Apr 16 04:16:17.255742 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 04:16:17.255747 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 04:16:17.255752 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 04:16:17.255757 kernel: Dynamic Preempt: voluntary
Apr 16 04:16:17.255762 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 04:16:17.255768 kernel: rcu: RCU event tracing is enabled.
Apr 16 04:16:17.255774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 04:16:17.255779 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 04:16:17.255784 kernel: Rude variant of Tasks RCU enabled.
Apr 16 04:16:17.255791 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 04:16:17.255796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 04:16:17.255801 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 04:16:17.255806 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 04:16:17.255821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 04:16:17.255826 kernel: Console: colour VGA+ 80x25
Apr 16 04:16:17.255831 kernel: printk: console [ttyS0] enabled
Apr 16 04:16:17.255836 kernel: ACPI: Core revision 20230628
Apr 16 04:16:17.255841 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 04:16:17.255848 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 04:16:17.255853 kernel: x2apic enabled
Apr 16 04:16:17.255858 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 04:16:17.255864 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 04:16:17.255869 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 04:16:17.255874 kernel: kvm-guest: setup PV IPIs
Apr 16 04:16:17.255879 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 04:16:17.255885 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:16:17.255897 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 04:16:17.255903 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 04:16:17.255909 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 04:16:17.255914 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 04:16:17.255922 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 04:16:17.255927 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 04:16:17.255933 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 04:16:17.255938 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 04:16:17.255946 kernel: RETBleed: Vulnerable
Apr 16 04:16:17.255952 kernel: Speculative Store Bypass: Vulnerable
Apr 16 04:16:17.255957 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 04:16:17.256573 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 04:16:17.256610 kernel: active return thunk: its_return_thunk
Apr 16 04:16:17.256616 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 04:16:17.256622 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 04:16:17.256628 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 04:16:17.256634 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 04:16:17.256644 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 04:16:17.256650 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 04:16:17.256656 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 04:16:17.256661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 04:16:17.256667 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 04:16:17.256673 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 04:16:17.256678 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 04:16:17.256684 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 04:16:17.256690 kernel: Freeing SMP alternatives memory: 32K
Apr 16 04:16:17.256715 kernel: pid_max: default: 32768 minimum: 301
Apr 16 04:16:17.256725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 04:16:17.256733 kernel: landlock: Up and running.
Apr 16 04:16:17.256742 kernel: SELinux: Initializing.
Apr 16 04:16:17.256751 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:16:17.256761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:16:17.256771 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 04:16:17.257278 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:16:17.257312 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:16:17.257363 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:16:17.257369 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 04:16:17.257375 kernel: signal: max sigframe size: 3632
Apr 16 04:16:17.257390 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 04:16:17.257397 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 04:16:17.257403 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 04:16:17.257408 kernel: smp: Bringing up secondary CPUs ...
Apr 16 04:16:17.257414 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 04:16:17.257429 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 04:16:17.257438 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 04:16:17.257443 kernel: smpboot: Max logical packages: 1
Apr 16 04:16:17.257459 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 04:16:17.257464 kernel: devtmpfs: initialized
Apr 16 04:16:17.257470 kernel: x86/mm: Memory block size: 128MB
Apr 16 04:16:17.257485 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 04:16:17.257500 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 04:16:17.257515 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 04:16:17.257529 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 04:16:17.257538 kernel: audit: initializing netlink subsys (disabled)
Apr 16 04:16:17.257553 kernel: audit: type=2000 audit(1776312968.996:1): state=initialized audit_enabled=0 res=1
Apr 16 04:16:17.257568 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 04:16:17.257574 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 04:16:17.257580 kernel: cpuidle: using governor menu
Apr 16 04:16:17.257586 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 04:16:17.257591 kernel: dca service started, version 1.12.1
Apr 16 04:16:17.257597 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 04:16:17.257602 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 04:16:17.257611 kernel: PCI: Using configuration type 1 for base access
Apr 16 04:16:17.257616 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 04:16:17.257622 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 04:16:17.257627 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 04:16:17.257633 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 04:16:17.257639 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 04:16:17.257644 kernel: ACPI: Added _OSI(Module Device)
Apr 16 04:16:17.257650 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 04:16:17.257665 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 04:16:17.257673 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 04:16:17.257678 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 04:16:17.257684 kernel: ACPI: Interpreter enabled
Apr 16 04:16:17.257690 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 04:16:17.257695 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 04:16:17.257701 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 04:16:17.257707 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 04:16:17.257713 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 04:16:17.257719 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 04:16:17.258078 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 04:16:17.258195 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 04:16:17.258284 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 04:16:17.258296 kernel: PCI host bridge to bus 0000:00
Apr 16 04:16:17.258439 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 04:16:17.258530 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 04:16:17.258630 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 04:16:17.258708 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 04:16:17.260806 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 04:16:17.260932 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 04:16:17.261189 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 04:16:17.263772 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 04:16:17.264647 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 16 04:16:17.264841 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 16 04:16:17.264944 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 16 04:16:17.265644 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 16 04:16:17.265763 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 04:16:17.267672 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 16 04:16:17.267789 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 16 04:16:17.267883 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 16 04:16:17.267980 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 04:16:17.271633 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 16 04:16:17.271787 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 16 04:16:17.271887 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 16 04:16:17.271978 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 04:16:17.272146 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 16 04:16:17.272246 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 16 04:16:17.273885 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 16 04:16:17.273997 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 04:16:17.278080 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 16 04:16:17.278269 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 04:16:17.278362 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 04:16:17.278470 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 04:16:17.278571 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 16 04:16:17.278659 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 16 04:16:17.278789 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 04:16:17.278876 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 16 04:16:17.278888 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 04:16:17.278897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 04:16:17.278907 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 04:16:17.278915 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 04:16:17.278929 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 04:16:17.278938 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 04:16:17.278947 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 04:16:17.278956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 04:16:17.278964 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 04:16:17.278973 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 04:16:17.278982 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 04:16:17.278990 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 04:16:17.278998 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 04:16:17.279748 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 04:16:17.279763 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 04:16:17.279773 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 04:16:17.279782 kernel: iommu: Default domain type: Translated
Apr 16 04:16:17.279793 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 04:16:17.279802 kernel: PCI: Using ACPI for IRQ routing
Apr 16 04:16:17.279812 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 04:16:17.279822 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 04:16:17.279832 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 04:16:17.280088 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 04:16:17.283082 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 04:16:17.283193 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 04:16:17.283206 kernel: vgaarb: loaded
Apr 16 04:16:17.283216 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 04:16:17.283226 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 04:16:17.283236 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 04:16:17.283246 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 04:16:17.283263 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 04:16:17.283273 kernel: pnp: PnP ACPI init
Apr 16 04:16:17.284889 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 04:16:17.284913 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 04:16:17.284924 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 04:16:17.284933 kernel: NET: Registered PF_INET protocol family
Apr 16 04:16:17.284943 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 04:16:17.284954 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 04:16:17.284970 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 04:16:17.284980 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 04:16:17.284989 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 04:16:17.284998 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 04:16:17.285047 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:16:17.285073 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:16:17.285083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 04:16:17.285093 kernel: NET: Registered PF_XDP protocol family
Apr 16 04:16:17.285195 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 04:16:17.285282 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 04:16:17.286741 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 04:16:17.287156 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 04:16:17.287248 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 04:16:17.287326 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 04:16:17.287338 kernel: PCI: CLS 0 bytes, default 64
Apr 16 04:16:17.287348 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 04:16:17.287358 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:16:17.287373 kernel: Initialise system trusted keyrings
Apr 16 04:16:17.287383 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 04:16:17.287392 kernel: Key type asymmetric registered
Apr 16 04:16:17.287401 kernel: Asymmetric key parser 'x509' registered
Apr 16 04:16:17.287411 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 16 04:16:17.287421 kernel: io scheduler mq-deadline registered
Apr 16 04:16:17.287430 kernel: io scheduler kyber registered
Apr 16 04:16:17.287439 kernel: io scheduler bfq registered
Apr 16 04:16:17.287449 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 04:16:17.287463 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 04:16:17.287471 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 04:16:17.287481 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 04:16:17.287490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 04:16:17.287499 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 04:16:17.287507 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 04:16:17.287516 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 04:16:17.287525 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 04:16:17.287533 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 04:16:17.288620 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 04:16:17.288726 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 04:16:17.288809 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T04:16:14 UTC (1776312974)
Apr 16 04:16:17.288891 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 04:16:17.288905 kernel: intel_pstate: CPU model not supported
Apr 16 04:16:17.288916 kernel: NET: Registered PF_INET6 protocol family
Apr 16 04:16:17.288926 kernel: Segment Routing with IPv6
Apr 16 04:16:17.288937 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 04:16:17.288953 kernel: NET: Registered PF_PACKET protocol family
Apr 16 04:16:17.288963 kernel: Key type dns_resolver registered
Apr 16 04:16:17.288972 kernel: IPI shorthand broadcast: enabled
Apr 16 04:16:17.288982 kernel: sched_clock: Marking stable (5620074496, 975580142)->(7136266158, -540611520)
Apr 16 04:16:17.288991 kernel: registered taskstats version 1
Apr 16 04:16:17.289001 kernel: Loading compiled-in X.509 certificates
Apr 16 04:16:17.289816 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090'
Apr 16 04:16:17.289832 kernel: Key type .fscrypt registered
Apr 16 04:16:17.289843 kernel: Key type fscrypt-provisioning registered
Apr 16 04:16:17.289893 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 04:16:17.289903 kernel: ima: Allocated hash algorithm: sha1
Apr 16 04:16:17.289914 kernel: ima: No architecture policies found
Apr 16 04:16:17.289924 kernel: clk: Disabling unused clocks
Apr 16 04:16:17.289934 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 04:16:17.289944 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 04:16:17.289954 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 04:16:17.289963 kernel: Run /init as init process
Apr 16 04:16:17.289972 kernel: with arguments:
Apr 16 04:16:17.289982 kernel: /init
Apr 16 04:16:17.290001 kernel: with environment:
Apr 16 04:16:17.290606 kernel: HOME=/
Apr 16 04:16:17.290619 kernel: TERM=linux
Apr 16 04:16:17.290633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 04:16:17.290646 systemd[1]: Detected virtualization kvm.
Apr 16 04:16:17.290656 systemd[1]: Detected architecture x86-64.
Apr 16 04:16:17.290666 systemd[1]: Running in initrd.
Apr 16 04:16:17.290716 systemd[1]: No hostname configured, using default hostname.
Apr 16 04:16:17.290726 systemd[1]: Hostname set to .
Apr 16 04:16:17.290737 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:16:17.290746 kernel: hrtimer: interrupt took 12559162 ns
Apr 16 04:16:17.290756 systemd[1]: Queued start job for default target initrd.target.
Apr 16 04:16:17.290766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:16:17.290776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:16:17.290788 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 04:16:17.290801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:16:17.290812 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 04:16:17.290841 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 04:16:17.290855 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 04:16:17.290866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 04:16:17.290878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:16:17.290888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:16:17.290898 systemd[1]: Reached target paths.target - Path Units.
Apr 16 04:16:17.290909 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:16:17.290919 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:16:17.290929 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 04:16:17.290940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:16:17.290950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:16:17.290962 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 04:16:17.290972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 04:16:17.290982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:16:17.290992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:16:17.291002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:16:17.291499 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 04:16:17.291512 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 04:16:17.291523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:16:17.291533 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 04:16:17.291582 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 04:16:17.291593 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:16:17.291603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:16:17.291614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:17.291624 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 04:16:17.291635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:16:17.291645 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 04:16:17.292895 systemd-journald[193]: Collecting audit messages is disabled.
Apr 16 04:16:17.292936 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:16:17.292949 systemd-journald[193]: Journal started
Apr 16 04:16:17.292973 systemd-journald[193]: Runtime Journal (/run/log/journal/c04414558d074d1d8e0486ec9fc8f5ac) is 6.0M, max 48.4M, 42.3M free.
Apr 16 04:16:17.262743 systemd-modules-load[194]: Inserted module 'overlay'
Apr 16 04:16:17.881980 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 04:16:17.882126 kernel: Bridge firewalling registered
Apr 16 04:16:17.367596 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 16 04:16:17.887491 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:16:17.889760 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:16:17.897594 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:17.903211 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:16:17.960582 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:17.966473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:16:17.970402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:16:17.982996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:16:18.075187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:16:18.104339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:18.131470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:16:18.166781 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 04:16:18.175994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:16:18.279580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 04:16:18.313619 dracut-cmdline[230]: dracut-dracut-053 Apr 16 04:16:18.341563 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 04:16:18.632620 systemd-resolved[234]: Positive Trust Anchors: Apr 16 04:16:18.634046 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 04:16:18.634102 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 04:16:18.644802 systemd-resolved[234]: Defaulting to hostname 'linux'. Apr 16 04:16:18.654330 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 04:16:18.661698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 04:16:19.357390 kernel: SCSI subsystem initialized Apr 16 04:16:19.391590 kernel: Loading iSCSI transport class v2.0-870. Apr 16 04:16:19.550621 kernel: iscsi: registered transport (tcp) Apr 16 04:16:19.622456 kernel: iscsi: registered transport (qla4xxx) Apr 16 04:16:19.622784 kernel: QLogic iSCSI HBA Driver Apr 16 04:16:20.758691 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 16 04:16:20.865662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 04:16:23.588923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 04:16:23.590069 kernel: device-mapper: uevent: version 1.0.3 Apr 16 04:16:23.594806 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 16 04:16:24.332638 kernel: raid6: avx512x4 gen() 9558 MB/s Apr 16 04:16:24.364832 kernel: raid6: avx512x2 gen() 10740 MB/s Apr 16 04:16:24.383950 kernel: raid6: avx512x1 gen() 14151 MB/s Apr 16 04:16:24.407039 kernel: raid6: avx2x4 gen() 8203 MB/s Apr 16 04:16:24.440622 kernel: raid6: avx2x2 gen() 6442 MB/s Apr 16 04:16:24.464581 kernel: raid6: avx2x1 gen() 8074 MB/s Apr 16 04:16:24.465121 kernel: raid6: using algorithm avx512x1 gen() 14151 MB/s Apr 16 04:16:24.487702 kernel: raid6: .... xor() 12451 MB/s, rmw enabled Apr 16 04:16:24.488077 kernel: raid6: using avx512x2 recovery algorithm Apr 16 04:16:24.580591 kernel: xor: automatically using best checksumming function avx Apr 16 04:16:26.347860 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 04:16:27.128895 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 04:16:27.402784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 04:16:29.752933 systemd-udevd[416]: Using default interface naming scheme 'v255'. Apr 16 04:16:29.893376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 04:16:29.973983 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 04:16:31.131773 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Apr 16 04:16:31.337816 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 04:16:31.385197 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 16 04:16:31.659595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 04:16:31.688573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 04:16:31.820924 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 04:16:31.837381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 04:16:31.842512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 04:16:31.851973 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 04:16:31.866848 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 04:16:31.914572 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 04:16:31.931959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 04:16:31.936267 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 04:16:31.970296 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 04:16:31.978749 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 04:16:31.990522 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 04:16:31.990673 kernel: GPT:9289727 != 19775487 Apr 16 04:16:31.990794 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 04:16:31.990812 kernel: GPT:9289727 != 19775487 Apr 16 04:16:31.990825 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 04:16:31.990868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:16:31.971702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 04:16:31.978587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 04:16:31.978873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 16 04:16:31.987216 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:16:32.030653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:16:32.053285 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 04:16:32.277235 kernel: libata version 3.00 loaded. Apr 16 04:16:33.304936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:16:33.454217 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (462) Apr 16 04:16:33.745827 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 04:16:33.789429 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468) Apr 16 04:16:33.796683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 16 04:16:33.832110 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 04:16:33.867588 kernel: AVX2 version of gcm_enc/dec engaged. Apr 16 04:16:33.894550 kernel: AES CTR mode by8 optimization enabled Apr 16 04:16:33.907667 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 04:16:33.908097 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 04:16:33.906916 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 04:16:33.930915 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 16 04:16:33.940775 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 04:16:33.982672 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Apr 16 04:16:34.020328 kernel: scsi host0: ahci Apr 16 04:16:34.020972 kernel: scsi host1: ahci Apr 16 04:16:34.023310 kernel: scsi host2: ahci Apr 16 04:16:34.022555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 04:16:34.043088 kernel: scsi host3: ahci Apr 16 04:16:34.056032 kernel: scsi host4: ahci Apr 16 04:16:34.058732 kernel: scsi host5: ahci Apr 16 04:16:34.058920 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 16 04:16:34.058937 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 16 04:16:34.058951 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 16 04:16:34.058965 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 16 04:16:34.058979 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 16 04:16:34.058993 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 16 04:16:34.056695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 04:16:34.132746 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 04:16:34.232589 disk-uuid[555]: Primary Header is updated. Apr 16 04:16:34.232589 disk-uuid[555]: Secondary Entries is updated. Apr 16 04:16:34.232589 disk-uuid[555]: Secondary Header is updated. 
Apr 16 04:16:34.269302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:16:34.306649 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:16:34.365622 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 04:16:34.373295 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 04:16:34.379462 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 04:16:34.389095 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 04:16:34.398921 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 04:16:34.403442 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 04:16:34.403821 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 04:16:34.413259 kernel: ata3.00: applying bridge limits Apr 16 04:16:34.427401 kernel: ata3.00: configured for UDMA/100 Apr 16 04:16:34.437171 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 04:16:34.817530 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 04:16:34.818411 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 04:16:34.856584 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 16 04:16:35.307209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:16:35.309727 disk-uuid[559]: The operation has completed successfully. Apr 16 04:16:35.718565 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 04:16:35.718930 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 04:16:35.747338 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 04:16:35.935401 sh[592]: Success Apr 16 04:16:35.994152 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 16 04:16:36.676169 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 04:16:36.732491 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 04:16:36.921451 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 04:16:36.953098 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984 Apr 16 04:16:36.953156 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:16:36.953195 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 16 04:16:36.970376 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 16 04:16:36.970865 kernel: BTRFS info (device dm-0): using free space tree Apr 16 04:16:37.294465 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 04:16:37.328411 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 04:16:37.420889 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 04:16:37.451897 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 04:16:37.559231 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 04:16:37.560453 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:16:37.561221 kernel: BTRFS info (device vda6): using free space tree Apr 16 04:16:37.650409 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 04:16:37.810408 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 16 04:16:37.819200 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 04:16:37.887746 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 04:16:37.987812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 16 04:16:38.942798 ignition[685]: Ignition 2.19.0 Apr 16 04:16:38.942837 ignition[685]: Stage: fetch-offline Apr 16 04:16:38.942879 ignition[685]: no configs at "/usr/lib/ignition/base.d" Apr 16 04:16:38.942889 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:16:38.943178 ignition[685]: parsed url from cmdline: "" Apr 16 04:16:38.943182 ignition[685]: no config URL provided Apr 16 04:16:38.943188 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 04:16:38.943197 ignition[685]: no config at "/usr/lib/ignition/user.ign" Apr 16 04:16:38.943324 ignition[685]: op(1): [started] loading QEMU firmware config module Apr 16 04:16:38.943330 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 04:16:39.034419 ignition[685]: op(1): [finished] loading QEMU firmware config module Apr 16 04:16:39.035074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 04:16:39.064412 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 04:16:39.506297 ignition[685]: parsing config with SHA512: d03eeae68ed8705597f60386c274cd69753de7132813741de39bd1e0ec8c55d606219e1fa53180fe8edb9208e16191fd93542002637a622c837286b3ad4e73c3 Apr 16 04:16:40.237616 systemd-networkd[781]: lo: Link UP Apr 16 04:16:40.243404 systemd-networkd[781]: lo: Gained carrier Apr 16 04:16:40.250037 systemd-networkd[781]: Enumeration completed Apr 16 04:16:40.250948 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:16:40.250952 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 04:16:40.251371 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 04:16:40.295560 ignition[685]: fetch-offline: fetch-offline passed Apr 16 04:16:40.271610 systemd[1]: Reached target network.target - Network. 
Apr 16 04:16:40.295716 ignition[685]: Ignition finished successfully Apr 16 04:16:40.278641 systemd-networkd[781]: eth0: Link UP Apr 16 04:16:40.278650 systemd-networkd[781]: eth0: Gained carrier Apr 16 04:16:40.278748 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:16:40.290598 unknown[685]: fetched base config from "system" Apr 16 04:16:40.290609 unknown[685]: fetched user config from "qemu" Apr 16 04:16:40.305714 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 04:16:40.326076 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 04:16:40.326537 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 04:16:40.362330 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 04:16:40.660993 ignition[785]: Ignition 2.19.0 Apr 16 04:16:40.661242 ignition[785]: Stage: kargs Apr 16 04:16:40.661567 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 16 04:16:40.661583 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:16:40.663454 ignition[785]: kargs: kargs passed Apr 16 04:16:40.663530 ignition[785]: Ignition finished successfully Apr 16 04:16:40.710308 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 04:16:40.809443 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 16 04:16:40.983087 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.5 Apr 16 04:16:40.983106 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. 
Apr 16 04:16:41.176375 ignition[793]: Ignition 2.19.0 Apr 16 04:16:41.176581 ignition[793]: Stage: disks Apr 16 04:16:41.177117 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 16 04:16:41.177128 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:16:41.194761 ignition[793]: disks: disks passed Apr 16 04:16:41.219444 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 04:16:41.194834 ignition[793]: Ignition finished successfully Apr 16 04:16:41.231726 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 04:16:41.244280 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 04:16:41.253985 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 04:16:41.268588 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 04:16:41.283087 systemd[1]: Reached target basic.target - Basic System. Apr 16 04:16:41.343082 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 04:16:41.747338 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 16 04:16:41.768928 systemd-networkd[781]: eth0: Gained IPv6LL Apr 16 04:16:41.781082 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 04:16:41.857885 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 04:16:43.270310 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none. Apr 16 04:16:43.292259 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 04:16:43.308719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 04:16:43.386620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 04:16:43.402664 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 16 04:16:43.408817 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 16 04:16:43.408949 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 04:16:43.408979 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 04:16:43.479278 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Apr 16 04:16:43.488496 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 04:16:43.488909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:16:43.492503 kernel: BTRFS info (device vda6): using free space tree Apr 16 04:16:43.530692 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 04:16:43.542702 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 04:16:43.568674 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 04:16:43.600635 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 04:16:44.634432 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 04:16:44.726211 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Apr 16 04:16:44.866074 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 04:16:44.933870 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 04:16:48.357786 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 04:16:48.408428 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 04:16:48.420597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 04:16:48.446587 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 16 04:16:48.455048 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 04:16:48.678128 ignition[927]: INFO : Ignition 2.19.0 Apr 16 04:16:48.678128 ignition[927]: INFO : Stage: mount Apr 16 04:16:48.683211 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 04:16:48.683211 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:16:48.697656 ignition[927]: INFO : mount: mount passed Apr 16 04:16:48.697656 ignition[927]: INFO : Ignition finished successfully Apr 16 04:16:48.684138 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 04:16:48.710075 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 04:16:48.772389 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 04:16:49.045253 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 04:16:49.072245 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Apr 16 04:16:49.084748 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 04:16:49.085330 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:16:49.085346 kernel: BTRFS info (device vda6): using free space tree Apr 16 04:16:49.096674 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 04:16:49.109275 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 04:16:49.732772 ignition[956]: INFO : Ignition 2.19.0 Apr 16 04:16:49.732772 ignition[956]: INFO : Stage: files Apr 16 04:16:49.741256 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 04:16:49.741256 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:16:49.741256 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 16 04:16:49.772472 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 04:16:49.772472 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 04:16:49.898505 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 04:16:49.952366 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 04:16:50.090518 unknown[956]: wrote ssh authorized keys file for user: core Apr 16 04:16:50.109860 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 04:16:50.163431 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 16 04:16:50.163431 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 16 04:16:50.163431 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 04:16:50.163431 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 04:16:50.667116 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 16 04:16:51.508447 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 
04:16:51.508447 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 16 04:16:51.508447 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 16 04:16:52.248522 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 16 04:16:54.071114 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 16 04:16:54.071114 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 
Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:54.099408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:16:54.344899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 16 04:16:54.393176 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 16 04:17:01.206001 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 04:17:01.225637 ignition[956]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 16 04:17:01.236249 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 16 04:17:01.253985 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 16 04:17:01.253985 ignition[956]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 16 04:17:01.253985 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 16 04:17:01.253985 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 16 04:17:01.282787 ignition[956]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:17:02.398651 ignition[956]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:17:02.472828 ignition[956]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:17:02.487974 ignition[956]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:17:02.487974 ignition[956]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 04:17:02.487974 ignition[956]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 04:17:02.520609 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:17:02.547846 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:17:02.547846 ignition[956]: INFO : files: files passed
Apr 16 04:17:02.547846 ignition[956]: INFO : Ignition finished successfully
Apr 16 04:17:02.568749 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 04:17:02.634663 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 04:17:02.677052 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 04:17:02.753080 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 04:17:02.763533 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 04:17:02.831515 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 16 04:17:02.920259 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:17:02.920259 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:17:02.939412 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:17:02.945822 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:17:02.955089 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 04:17:02.981212 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 04:17:03.517734 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 04:17:03.522139 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 04:17:03.526377 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 04:17:03.541462 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 04:17:03.559861 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 04:17:03.589670 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 04:17:03.952744 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:17:04.028970 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 04:17:04.332298 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:17:04.335499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:17:04.358864 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 04:17:04.377135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 04:17:04.377905 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:17:04.394328 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 04:17:04.407559 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 04:17:04.487845 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 04:17:04.506351 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:17:04.520863 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 04:17:04.521906 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 04:17:04.549870 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:17:04.567231 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 04:17:04.587094 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 04:17:04.619402 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 04:17:04.632360 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 04:17:04.637826 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:17:04.681899 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:17:04.710649 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:17:04.729707 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 04:17:04.741506 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:17:04.742868 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 04:17:04.762822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:17:04.862331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 04:17:04.862944 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:17:04.874354 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 04:17:04.883775 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 04:17:04.890935 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:17:04.899825 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 04:17:04.915981 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 04:17:04.923819 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 04:17:04.923983 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:17:04.952898 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 04:17:04.953114 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:17:04.954370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 04:17:04.954738 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:17:04.974551 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 04:17:04.974950 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 04:17:05.009345 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 04:17:05.019833 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 04:17:05.020751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 04:17:05.020901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:17:05.031963 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 04:17:05.032260 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:17:05.063819 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 04:17:05.072738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 04:17:05.224913 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 04:17:05.275510 ignition[1011]: INFO : Ignition 2.19.0
Apr 16 04:17:05.289305 ignition[1011]: INFO : Stage: umount
Apr 16 04:17:05.299173 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 04:17:05.300631 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:17:05.299400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 04:17:05.317162 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:17:05.317162 ignition[1011]: INFO : umount: umount passed
Apr 16 04:17:05.317162 ignition[1011]: INFO : Ignition finished successfully
Apr 16 04:17:05.349983 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 04:17:05.350830 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 04:17:05.373855 systemd[1]: Stopped target network.target - Network.
Apr 16 04:17:05.376115 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 04:17:05.376344 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 04:17:05.376961 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 04:17:05.377050 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 04:17:05.405341 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 04:17:05.405521 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 04:17:05.433000 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 04:17:05.433422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 04:17:05.444510 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 04:17:05.444614 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 04:17:05.498637 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 04:17:05.500535 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 04:17:05.536350 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 04:17:05.539501 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 04:17:05.543161 systemd-networkd[781]: eth0: DHCPv6 lease lost
Apr 16 04:17:05.556219 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 04:17:05.556370 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 04:17:05.578171 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 04:17:05.578381 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:17:05.609919 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 04:17:05.617671 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 04:17:05.627907 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:17:05.639630 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 04:17:05.639875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:17:05.655602 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 04:17:05.660739 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:17:05.669221 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 04:17:05.670709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:17:05.689040 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:17:05.721173 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 04:17:05.721457 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:17:05.736706 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 04:17:05.736788 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:17:05.744896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 04:17:05.745182 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:17:05.755820 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 04:17:05.756042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:17:05.776338 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 04:17:05.776797 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:17:05.785697 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 04:17:05.785954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:17:05.836636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 04:17:05.845176 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 04:17:05.845554 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:17:05.856481 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 04:17:05.856570 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:17:05.856696 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 04:17:05.856732 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:17:05.874049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:17:05.874405 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:17:05.880287 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 04:17:05.965569 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 04:17:06.126901 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 04:17:06.127177 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 04:17:06.128401 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 04:17:06.285740 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 04:17:06.678212 systemd[1]: Switching root.
Apr 16 04:17:06.984145 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 16 04:17:06.984421 systemd-journald[193]: Journal stopped
Apr 16 04:17:25.959309 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 04:17:25.959432 kernel: SELinux: policy capability open_perms=1
Apr 16 04:17:25.959450 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 04:17:25.959465 kernel: SELinux: policy capability always_check_network=0
Apr 16 04:17:25.959482 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 04:17:25.959498 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 04:17:25.959510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 04:17:25.959523 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 04:17:25.959542 kernel: audit: type=1403 audit(1776313029.052:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 04:17:25.959557 systemd[1]: Successfully loaded SELinux policy in 574.826ms.
Apr 16 04:17:25.959583 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 256.239ms.
Apr 16 04:17:25.959630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 04:17:25.959646 systemd[1]: Detected virtualization kvm.
Apr 16 04:17:25.959665 systemd[1]: Detected architecture x86-64.
Apr 16 04:17:25.959678 systemd[1]: Detected first boot.
Apr 16 04:17:25.959691 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:17:25.959704 zram_generator::config[1073]: No configuration found.
Apr 16 04:17:25.959717 systemd[1]: Populated /etc with preset unit settings.
Apr 16 04:17:25.959729 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 04:17:25.959747 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 04:17:25.959769 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 04:17:25.959786 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 04:17:25.959800 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 04:17:25.959813 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 04:17:25.959826 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 04:17:25.959838 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 04:17:25.959850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 04:17:25.959863 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 04:17:25.959877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:17:25.959895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:17:25.959908 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 04:17:25.959920 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 04:17:25.959955 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 04:17:25.959968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:17:25.959981 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 04:17:25.959996 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:17:25.960042 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 04:17:25.960057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:17:25.960072 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:17:25.960084 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:17:25.960096 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:17:25.960107 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 04:17:25.960120 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 04:17:25.960134 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 04:17:25.960149 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 04:17:25.960164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:17:25.960179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:17:25.960193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:17:25.960205 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 04:17:25.960217 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 04:17:25.960230 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 04:17:25.960243 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 04:17:25.960256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:25.960289 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 04:17:25.960303 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 04:17:25.960414 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 04:17:25.960435 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 04:17:25.960448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:17:25.960463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:17:25.960478 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 04:17:25.960493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:17:25.960507 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 04:17:25.960522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:17:25.960535 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 04:17:25.960553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:17:25.960565 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 04:17:25.960577 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 16 04:17:25.960589 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 16 04:17:25.960601 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:17:25.960612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:17:25.960627 kernel: fuse: init (API version 7.39)
Apr 16 04:17:25.960643 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 04:17:25.960660 kernel: loop: module loaded
Apr 16 04:17:25.960672 kernel: ACPI: bus type drm_connector registered
Apr 16 04:17:25.960705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 04:17:25.960718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:17:25.960731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:25.960775 systemd-journald[1155]: Collecting audit messages is disabled.
Apr 16 04:17:25.960806 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 04:17:25.960824 systemd-journald[1155]: Journal started
Apr 16 04:17:25.960847 systemd-journald[1155]: Runtime Journal (/run/log/journal/c04414558d074d1d8e0486ec9fc8f5ac) is 6.0M, max 48.4M, 42.3M free.
Apr 16 04:17:25.972959 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:17:26.011277 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 04:17:26.014731 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 04:17:26.018851 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 04:17:26.029004 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 04:17:26.035228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 04:17:26.053856 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 04:17:26.064915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:17:26.072593 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 04:17:26.072915 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 04:17:26.094381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:17:26.094638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:17:26.110183 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 04:17:26.110618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 04:17:26.116921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:17:26.118763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:17:26.128358 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 04:17:26.128611 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 04:17:26.143448 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:17:26.156392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:17:26.168555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:17:26.173190 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 04:17:26.244879 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 04:17:26.336588 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 04:17:26.385131 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 04:17:26.401720 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 04:17:26.414724 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 04:17:26.494418 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 04:17:26.542760 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 04:17:26.550353 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 04:17:26.792630 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 04:17:26.795458 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 04:17:26.817855 systemd-journald[1155]: Time spent on flushing to /var/log/journal/c04414558d074d1d8e0486ec9fc8f5ac is 207.309ms for 946 entries.
Apr 16 04:17:26.817855 systemd-journald[1155]: System Journal (/var/log/journal/c04414558d074d1d8e0486ec9fc8f5ac) is 8.0M, max 195.6M, 187.6M free.
Apr 16 04:17:27.260638 systemd-journald[1155]: Received client request to flush runtime journal.
Apr 16 04:17:26.832267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:17:26.881485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:17:26.900732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:17:26.907590 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 04:17:26.915629 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 04:17:26.916625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 04:17:27.057549 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 04:17:27.141065 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 04:17:27.281882 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 04:17:27.325901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:17:27.337692 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 16 04:17:27.375003 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Apr 16 04:17:27.375083 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Apr 16 04:17:27.509520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:17:27.555134 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 04:17:28.079683 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 04:17:28.181909 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:17:28.654725 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Apr 16 04:17:28.659817 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Apr 16 04:17:28.725107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:17:35.753789 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 04:17:35.811072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:17:36.536309 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Apr 16 04:17:38.165797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:17:38.295854 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 04:17:38.410905 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 04:17:38.723095 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 16 04:17:38.744096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1248)
Apr 16 04:17:38.782666 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 04:17:39.113279 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 04:17:39.197615 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 16 04:17:39.258996 kernel: ACPI: button: Power Button [PWRF]
Apr 16 04:17:39.265603 systemd-networkd[1242]: lo: Link UP
Apr 16 04:17:39.265615 systemd-networkd[1242]: lo: Gained carrier
Apr 16 04:17:39.268232 systemd-networkd[1242]: Enumeration completed
Apr 16 04:17:39.269521 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 04:17:39.277893 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:39.277902 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 04:17:39.285295 systemd-networkd[1242]: eth0: Link UP
Apr 16 04:17:39.285455 systemd-networkd[1242]: eth0: Gained carrier
Apr 16 04:17:39.285614 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:39.298754 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 04:17:39.331990 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 04:17:39.344004 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 04:17:39.355204 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 04:17:39.355415 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 04:17:39.343457 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 04:17:39.918498 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 04:17:40.030079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:17:40.899097 systemd-networkd[1242]: eth0: Gained IPv6LL
Apr 16 04:17:40.926234 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 04:17:41.703139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:17:42.856715 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 04:17:42.985673 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 04:17:43.822939 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 04:17:43.968206 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 04:17:43.973967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:17:44.191734 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 04:17:45.115683 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 04:17:45.426746 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 04:17:45.506476 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 04:17:45.524772 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 04:17:45.542491 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:17:45.578896 systemd[1]: Reached target machines.target - Containers.
Apr 16 04:17:46.056780 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 04:17:46.189170 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 04:17:46.311132 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 04:17:46.338954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:17:46.370124 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 04:17:46.395113 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 04:17:46.418871 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 04:17:46.433144 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 04:17:46.467821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 04:17:46.598464 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 04:17:46.616372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 04:17:46.643841 kernel: loop0: detected capacity change from 0 to 140768
Apr 16 04:17:46.763095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 04:17:46.962242 kernel: loop1: detected capacity change from 0 to 142488
Apr 16 04:17:47.128149 kernel: loop2: detected capacity change from 0 to 228704
Apr 16 04:17:47.294061 kernel: loop3: detected capacity change from 0 to 140768
Apr 16 04:17:47.398803 kernel: loop4: detected capacity change from 0 to 142488
Apr 16 04:17:47.490279 kernel: loop5: detected capacity change from 0 to 228704
Apr 16 04:17:47.592257 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 04:17:47.614488 (sd-merge)[1312]: Merged extensions into '/usr'.
Apr 16 04:17:47.695472 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 04:17:47.695841 systemd[1]: Reloading...
Apr 16 04:17:47.918279 zram_generator::config[1339]: No configuration found.
Apr 16 04:17:49.148547 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 04:17:49.504627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:17:51.065300 systemd[1]: Reloading finished in 3368 ms.
Apr 16 04:17:51.285172 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 04:17:51.318378 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 04:17:51.546944 systemd[1]: Starting ensure-sysext.service...
Apr 16 04:17:51.647855 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:17:51.681783 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Apr 16 04:17:51.681992 systemd[1]: Reloading...
Apr 16 04:17:52.508489 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 04:17:52.508836 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 04:17:52.538162 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 04:17:52.565891 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Apr 16 04:17:52.565989 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Apr 16 04:17:52.623319 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 04:17:52.640771 systemd-tmpfiles[1384]: Skipping /boot
Apr 16 04:17:52.764807 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 04:17:52.764843 systemd-tmpfiles[1384]: Skipping /boot
Apr 16 04:17:52.952064 zram_generator::config[1415]: No configuration found.
Apr 16 04:17:59.584085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:18:03.105872 systemd[1]: Reloading finished in 11423 ms.
Apr 16 04:18:04.387459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:18:04.770889 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 04:18:05.057250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 04:18:05.168941 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 04:18:05.310393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 04:18:05.373760 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 04:18:05.644077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:18:05.644470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:18:05.679999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:18:05.800304 augenrules[1480]: No rules
Apr 16 04:18:05.828204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:18:05.871672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:18:05.872847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:18:05.873220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:18:05.882137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 04:18:05.929244 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 04:18:05.952889 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 04:18:06.060959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:18:06.061330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:18:06.103147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:18:06.103572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:18:06.255671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:18:06.266792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:18:06.296947 systemd-resolved[1469]: Positive Trust Anchors:
Apr 16 04:18:06.296986 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 04:18:06.297055 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 04:18:06.297242 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 04:18:06.388907 systemd-resolved[1469]: Defaulting to hostname 'linux'.
Apr 16 04:18:06.456326 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 04:18:06.476747 systemd[1]: Reached target network.target - Network.
Apr 16 04:18:06.478789 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 04:18:06.579584 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:18:06.636268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:18:06.638735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:18:06.859308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:18:07.019556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:18:07.223633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:18:07.224445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:18:07.555124 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 04:18:07.557434 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 04:18:07.557708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:18:07.585335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:18:07.785941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:18:07.824578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:18:07.845394 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:18:07.876462 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:18:07.876804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:18:08.171771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:18:08.183447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:18:08.206818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:18:08.286179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 04:18:08.436955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:18:08.483094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:18:08.494294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:18:08.494960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 04:18:08.498700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:18:08.576983 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 04:18:08.583153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:18:08.583380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:18:08.597214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 04:18:08.597534 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 04:18:08.641838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:18:08.662408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:18:08.878487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:18:08.896662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:18:09.133796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 04:18:09.140959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 04:18:09.143321 systemd[1]: Finished ensure-sysext.service.
Apr 16 04:18:09.347354 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 04:18:13.049562 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 04:18:14.058742 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 16 04:18:14.058869 systemd-timesyncd[1529]: Initial clock synchronization to Thu 2026-04-16 04:18:14.058180 UTC.
Apr 16 04:18:14.059314 systemd-resolved[1469]: Clock change detected. Flushing caches.
Apr 16 04:18:14.067021 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 04:18:14.126037 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 04:18:14.139282 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 04:18:14.154311 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 04:18:14.161653 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 04:18:14.162112 systemd[1]: Reached target paths.target - Path Units.
Apr 16 04:18:14.184328 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 04:18:14.248689 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 04:18:14.286033 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 04:18:14.392510 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 04:18:14.471000 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 04:18:14.681570 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 04:18:14.787372 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 04:18:14.832167 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 04:18:14.993847 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 04:18:15.076311 systemd[1]: Reached target basic.target - Basic System.
Apr 16 04:18:15.127527 systemd[1]: System is tainted: cgroupsv1
Apr 16 04:18:15.129086 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 04:18:15.129210 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 04:18:15.181099 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 04:18:15.295667 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 16 04:18:15.413322 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 04:18:15.444073 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 04:18:15.526009 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 04:18:15.545394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 04:18:15.603401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:18:15.675581 jq[1537]: false
Apr 16 04:18:15.689139 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found loop3
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found loop4
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found loop5
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found sr0
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found vda
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found vda1
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found vda2
Apr 16 04:18:15.866088 extend-filesystems[1538]: Found vda3
Apr 16 04:18:15.862638 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 04:18:16.048235 extend-filesystems[1538]: Found usr
Apr 16 04:18:16.048235 extend-filesystems[1538]: Found vda4
Apr 16 04:18:16.048235 extend-filesystems[1538]: Found vda6
Apr 16 04:18:16.048235 extend-filesystems[1538]: Found vda7
Apr 16 04:18:16.048235 extend-filesystems[1538]: Found vda9
Apr 16 04:18:16.048235 extend-filesystems[1538]: Checking size of /dev/vda9
Apr 16 04:18:16.001124 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 04:18:16.049490 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 04:18:16.117211 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 04:18:16.148397 extend-filesystems[1538]: Resized partition /dev/vda9
Apr 16 04:18:16.200790 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024)
Apr 16 04:18:16.292144 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 16 04:18:16.283292 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 04:18:16.290925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 04:18:16.299746 dbus-daemon[1536]: [system] SELinux support is enabled
Apr 16 04:18:16.350177 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 04:18:16.392516 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 16 04:18:16.427554 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 04:18:16.479923 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 04:18:16.497747 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 04:18:16.497747 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 16 04:18:16.497747 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 16 04:18:16.611694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1577)
Apr 16 04:18:16.611764 extend-filesystems[1538]: Resized filesystem in /dev/vda9
Apr 16 04:18:16.631998 update_engine[1568]: I20260416 04:18:16.631846 1568 main.cc:92] Flatcar Update Engine starting
Apr 16 04:18:16.632426 jq[1573]: true
Apr 16 04:18:16.649695 update_engine[1568]: I20260416 04:18:16.649638 1568 update_check_scheduler.cc:74] Next update check in 8m26s
Apr 16 04:18:16.662921 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 04:18:16.663423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 04:18:16.724741 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 04:18:16.725298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 04:18:16.767255 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 04:18:16.874767 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 04:18:16.886138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 04:18:16.970412 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 04:18:16.970903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 04:18:16.986376 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 16 04:18:16.987130 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 04:18:16.987926 systemd-logind[1563]: New seat seat0.
Apr 16 04:18:17.045653 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 04:18:17.105919 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 04:18:17.121510 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 04:18:17.146213 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 16 04:18:17.147837 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 16 04:18:17.326814 jq[1592]: true
Apr 16 04:18:17.326587 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 16 04:18:17.350511 tar[1590]: linux-amd64/LICENSE
Apr 16 04:18:17.350511 tar[1590]: linux-amd64/helm
Apr 16 04:18:17.698380 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 04:18:17.711348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 04:18:17.715892 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 04:18:17.716117 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 04:18:17.728744 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 04:18:17.728920 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 04:18:17.733083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 04:18:17.751716 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 04:18:17.910096 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 04:18:18.311615 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 04:18:18.534411 bash[1642]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 04:18:18.555722 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 04:18:18.658920 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 16 04:18:18.853053 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 04:18:18.853911 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 04:18:19.183892 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 04:18:19.389048 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 04:18:20.234902 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 04:18:20.473853 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 04:18:20.565460 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 04:18:20.613618 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 04:18:21.054296 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 04:18:21.160135 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:56838.service - OpenSSH per-connection server daemon (10.0.0.1:56838).
Apr 16 04:18:21.688137 containerd[1593]: time="2026-04-16T04:18:21.687750029Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 16 04:18:22.150830 containerd[1593]: time="2026-04-16T04:18:22.149673648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.173865 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 56838 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:18:22.188150 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:22.213663 containerd[1593]: time="2026-04-16T04:18:22.211173166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:18:22.213663 containerd[1593]: time="2026-04-16T04:18:22.211933498Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 04:18:22.213663 containerd[1593]: time="2026-04-16T04:18:22.212077696Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 04:18:22.217660 containerd[1593]: time="2026-04-16T04:18:22.214203682Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 04:18:22.217660 containerd[1593]: time="2026-04-16T04:18:22.214258478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.221583 containerd[1593]: time="2026-04-16T04:18:22.220609341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:18:22.221985 containerd[1593]: time="2026-04-16T04:18:22.221939982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.224280 containerd[1593]: time="2026-04-16T04:18:22.224239641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:18:22.224392 containerd[1593]: time="2026-04-16T04:18:22.224373690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.224568 containerd[1593]: time="2026-04-16T04:18:22.224544272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:18:22.226531 containerd[1593]: time="2026-04-16T04:18:22.225510389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.226531 containerd[1593]: time="2026-04-16T04:18:22.225673277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.226531 containerd[1593]: time="2026-04-16T04:18:22.226078520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:18:22.226531 containerd[1593]: time="2026-04-16T04:18:22.226320481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:18:22.226531 containerd[1593]: time="2026-04-16T04:18:22.226343402Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 04:18:22.226531 containerd[1593]: time="2026-04-16T04:18:22.226499540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 04:18:22.226783 containerd[1593]: time="2026-04-16T04:18:22.226664986Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 04:18:22.265394 containerd[1593]: time="2026-04-16T04:18:22.265110855Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 04:18:22.274516 containerd[1593]: time="2026-04-16T04:18:22.274268713Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 04:18:22.279225 containerd[1593]: time="2026-04-16T04:18:22.274786628Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 04:18:22.279225 containerd[1593]: time="2026-04-16T04:18:22.274865663Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 04:18:22.279225 containerd[1593]: time="2026-04-16T04:18:22.275071804Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 04:18:22.282744 containerd[1593]: time="2026-04-16T04:18:22.275427501Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 04:18:22.290365 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 04:18:22.304087 containerd[1593]: time="2026-04-16T04:18:22.303940006Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 04:18:22.349799 containerd[1593]: time="2026-04-16T04:18:22.348659850Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 04:18:22.349799 containerd[1593]: time="2026-04-16T04:18:22.350089535Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350212293Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350326939Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350564487Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350592381Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350611425Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350631535Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350650171Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350667788Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350687390Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350716857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350735700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.350833 containerd[1593]: time="2026-04-16T04:18:22.350752677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.350863714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.350910551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.350941420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.350957214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.350995304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.351013995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.351048522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.351089407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.351107860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.351125641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351266 containerd[1593]: time="2026-04-16T04:18:22.351146142Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351269692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351290162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351305592Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351396253Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351421889Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351485001Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351530476Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 04:18:22.351545 containerd[1593]: time="2026-04-16T04:18:22.351544951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.351732 containerd[1593]: time="2026-04-16T04:18:22.351562062Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 04:18:22.351732 containerd[1593]: time="2026-04-16T04:18:22.351679476Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 04:18:22.351732 containerd[1593]: time="2026-04-16T04:18:22.351706377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 04:18:22.354062 containerd[1593]: time="2026-04-16T04:18:22.352745069Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 16 04:18:22.354062 containerd[1593]: time="2026-04-16T04:18:22.352907299Z" level=info msg="Connect containerd service" Apr 16 04:18:22.354062 containerd[1593]: time="2026-04-16T04:18:22.352958781Z" level=info msg="using legacy CRI server" Apr 16 04:18:22.355827 containerd[1593]: time="2026-04-16T04:18:22.352969372Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 04:18:22.355827 containerd[1593]: 
time="2026-04-16T04:18:22.355215560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 16 04:18:22.356391 containerd[1593]: time="2026-04-16T04:18:22.356307500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.370573141Z" level=info msg="Start subscribing containerd event" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.371862313Z" level=info msg="Start recovering state" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.372660614Z" level=info msg="Start event monitor" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.372717785Z" level=info msg="Start snapshots syncer" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.372732770Z" level=info msg="Start cni network conf syncer for default" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.372742240Z" level=info msg="Start streaming server" Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.376227043Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.376279533Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 04:18:22.378081 containerd[1593]: time="2026-04-16T04:18:22.376366051Z" level=info msg="containerd successfully booted in 0.693805s" Apr 16 04:18:22.475307 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 04:18:22.521388 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 04:18:22.580965 systemd-logind[1563]: New session 1 of user core. 
Apr 16 04:18:22.652700 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 04:18:22.700714 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 04:18:22.910216 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 04:18:25.366209 tar[1590]: linux-amd64/README.md Apr 16 04:18:25.425301 systemd[1676]: Queued start job for default target default.target. Apr 16 04:18:25.430581 systemd[1676]: Created slice app.slice - User Application Slice. Apr 16 04:18:25.430669 systemd[1676]: Reached target paths.target - Paths. Apr 16 04:18:25.430682 systemd[1676]: Reached target timers.target - Timers. Apr 16 04:18:25.485390 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 04:18:25.551663 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 04:18:25.647752 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 04:18:25.649134 systemd[1676]: Reached target sockets.target - Sockets. Apr 16 04:18:25.649169 systemd[1676]: Reached target basic.target - Basic System. Apr 16 04:18:25.649223 systemd[1676]: Reached target default.target - Main User Target. Apr 16 04:18:25.649255 systemd[1676]: Startup finished in 2.531s. Apr 16 04:18:25.654266 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 04:18:25.920156 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 04:18:26.489960 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:56076.service - OpenSSH per-connection server daemon (10.0.0.1:56076). Apr 16 04:18:28.004176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:18:28.077099 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:28.086516 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 16 04:18:28.093752 systemd[1]: Startup finished in 59.431s (kernel) + 1min 18.605s (userspace) = 2min 18.036s. Apr 16 04:18:28.686769 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 56076 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:18:28.695238 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:18:29.362317 systemd-logind[1563]: New session 2 of user core. Apr 16 04:18:29.436893 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 04:18:30.714382 sshd[1698]: pam_unix(sshd:session): session closed for user core Apr 16 04:18:30.881918 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:56088.service - OpenSSH per-connection server daemon (10.0.0.1:56088). Apr 16 04:18:30.935925 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:56076.service: Deactivated successfully. Apr 16 04:18:31.043987 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 04:18:31.182673 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Apr 16 04:18:31.236689 systemd-logind[1563]: Removed session 2. Apr 16 04:18:31.251204 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 56088 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:18:31.255882 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:18:31.805566 systemd-logind[1563]: New session 3 of user core. Apr 16 04:18:31.962555 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 04:18:32.334641 sshd[1717]: pam_unix(sshd:session): session closed for user core Apr 16 04:18:32.882332 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:56094.service - OpenSSH per-connection server daemon (10.0.0.1:56094). Apr 16 04:18:32.917623 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:56088.service: Deactivated successfully. Apr 16 04:18:32.965823 systemd[1]: session-3.scope: Deactivated successfully. 
Apr 16 04:18:32.994885 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Apr 16 04:18:33.188915 systemd-logind[1563]: Removed session 3. Apr 16 04:18:33.369305 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 56094 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:18:33.391680 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:18:34.331574 systemd-logind[1563]: New session 4 of user core. Apr 16 04:18:34.466075 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 04:18:35.553338 sshd[1726]: pam_unix(sshd:session): session closed for user core Apr 16 04:18:35.708871 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:52786.service - OpenSSH per-connection server daemon (10.0.0.1:52786). Apr 16 04:18:35.710647 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:56094.service: Deactivated successfully. Apr 16 04:18:35.715082 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 04:18:35.750846 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Apr 16 04:18:35.757306 systemd-logind[1563]: Removed session 4. Apr 16 04:18:36.555094 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 52786 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:18:36.813661 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:18:37.737644 systemd-logind[1563]: New session 5 of user core. Apr 16 04:18:37.864594 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 04:18:38.873111 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 04:18:38.889762 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:18:39.165856 sudo[1741]: pam_unix(sudo:session): session closed for user root Apr 16 04:18:39.322408 sshd[1734]: pam_unix(sshd:session): session closed for user core Apr 16 04:18:39.442187 kubelet[1708]: E0416 04:18:39.425738 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:18:39.442686 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:52786.service: Deactivated successfully. Apr 16 04:18:39.523267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:18:39.524928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:18:39.774066 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 04:18:39.823691 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Apr 16 04:18:40.041775 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:52790.service - OpenSSH per-connection server daemon (10.0.0.1:52790). Apr 16 04:18:40.045010 systemd-logind[1563]: Removed session 5. Apr 16 04:18:44.903335 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 52790 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:18:45.063317 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:18:46.947053 systemd-logind[1563]: New session 6 of user core. Apr 16 04:18:47.436247 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 04:18:49.255387 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 04:18:49.329039 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:18:49.742678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 04:18:49.816232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:18:49.875218 sudo[1753]: pam_unix(sudo:session): session closed for user root Apr 16 04:18:50.558834 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 16 04:18:50.573260 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:18:51.874385 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 16 04:18:52.169234 auditctl[1760]: No rules Apr 16 04:18:52.195080 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:18:52.208121 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 16 04:18:52.814992 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 04:18:54.226123 augenrules[1779]: No rules Apr 16 04:18:54.263324 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 04:18:54.423637 sudo[1752]: pam_unix(sudo:session): session closed for user root Apr 16 04:18:54.494033 sshd[1748]: pam_unix(sshd:session): session closed for user core Apr 16 04:18:54.549309 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:46074.service - OpenSSH per-connection server daemon (10.0.0.1:46074). Apr 16 04:18:54.550131 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:52790.service: Deactivated successfully. Apr 16 04:18:54.554370 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 04:18:54.558540 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. 
Apr 16 04:18:54.578761 systemd-logind[1563]: Removed session 6. Apr 16 04:18:55.551603 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 46074 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:18:55.574782 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:18:56.361569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:18:56.451410 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:56.781192 systemd-logind[1563]: New session 7 of user core. Apr 16 04:18:56.820702 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 04:18:57.461387 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 04:18:57.461803 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:19:00.599883 kubelet[1798]: E0416 04:19:00.598136 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:19:00.638345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:19:00.638860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:19:02.088844 update_engine[1568]: I20260416 04:19:02.073089 1568 update_attempter.cc:509] Updating boot flags... Apr 16 04:19:02.585094 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 16 04:19:02.623086 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 04:19:02.740833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1837) Apr 16 04:19:03.087635 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1839) Apr 16 04:19:05.456894 dockerd[1834]: time="2026-04-16T04:19:05.455734985Z" level=info msg="Starting up" Apr 16 04:19:07.124565 dockerd[1834]: time="2026-04-16T04:19:07.118237213Z" level=info msg="Loading containers: start." Apr 16 04:19:10.988352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 04:19:11.098733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:19:11.450707 kernel: Initializing XFRM netlink socket Apr 16 04:19:14.205600 systemd-networkd[1242]: docker0: Link UP Apr 16 04:19:14.643295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:19:14.980629 dockerd[1834]: time="2026-04-16T04:19:14.972784692Z" level=info msg="Loading containers: done." 
Apr 16 04:19:14.979927 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:19:15.542923 dockerd[1834]: time="2026-04-16T04:19:15.542181084Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 04:19:15.542923 dockerd[1834]: time="2026-04-16T04:19:15.542733337Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 16 04:19:15.542923 dockerd[1834]: time="2026-04-16T04:19:15.542903359Z" level=info msg="Daemon has completed initialization" Apr 16 04:19:17.272091 dockerd[1834]: time="2026-04-16T04:19:17.270999475Z" level=info msg="API listen on /run/docker.sock" Apr 16 04:19:17.305930 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 04:19:33.051626 kubelet[1957]: E0416 04:19:33.015752 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:19:33.077926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:19:33.078613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:19:43.204599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 04:19:43.289284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:19:48.549877 containerd[1593]: time="2026-04-16T04:19:48.546154360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 16 04:19:48.743112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:19:48.787974 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:19:52.831258 kubelet[2024]: E0416 04:19:52.826482 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:19:52.894957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:19:52.896389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:20:01.686374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995449013.mount: Deactivated successfully. Apr 16 04:20:03.382528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 04:20:03.578040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:20:08.048141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:20:08.152917 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:20:11.259754 kubelet[2060]: E0416 04:20:11.251808 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:20:11.281209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:20:11.291794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:20:21.518705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 04:20:21.574704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:20:25.654902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:20:25.923962 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:20:27.729489 kubelet[2101]: E0416 04:20:27.727733 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:20:27.767350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:20:27.767607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:20:38.606643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 04:20:38.992818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:20:45.981215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:20:45.982576 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:20:48.241984 kubelet[2147]: E0416 04:20:48.235177 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:20:48.315873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:20:48.316394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:20:58.897833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 16 04:21:00.089361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:21:04.923686 containerd[1593]: time="2026-04-16T04:21:04.902720739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:21:05.104935 containerd[1593]: time="2026-04-16T04:21:04.924484566Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 16 04:21:06.284119 containerd[1593]: time="2026-04-16T04:21:06.259326841Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:21:13.141601 containerd[1593]: time="2026-04-16T04:21:13.129627838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:21:15.180061 containerd[1593]: time="2026-04-16T04:21:15.177884526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1m26.631518345s" Apr 16 04:21:15.259871 containerd[1593]: time="2026-04-16T04:21:15.207389613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 16 04:21:15.498842 containerd[1593]: time="2026-04-16T04:21:15.475304672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 16 04:21:20.159476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:21:21.537269 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:21:34.496954 kubelet[2169]: E0416 04:21:34.489596 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:21:34.545798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:21:34.548063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:21:44.964654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 16 04:21:45.505970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:21:55.636880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:21:56.142723 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:22:31.844279 kubelet[2193]: E0416 04:22:31.823148 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:22:32.027682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:22:32.096272 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 04:22:43.026628 containerd[1593]: time="2026-04-16T04:22:43.020356538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:22:43.026537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 16 04:22:43.261414 containerd[1593]: time="2026-04-16T04:22:43.236181453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 16 04:22:43.330698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:22:43.445806 containerd[1593]: time="2026-04-16T04:22:43.442939272Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:22:47.093934 containerd[1593]: time="2026-04-16T04:22:47.092046005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:22:48.460856 containerd[1593]: time="2026-04-16T04:22:48.457539756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1m32.950706204s" Apr 16 04:22:48.460856 containerd[1593]: time="2026-04-16T04:22:48.457922055Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 16 04:22:48.844101 containerd[1593]: 
time="2026-04-16T04:22:48.805134519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 16 04:22:58.623663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:22:59.201739 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:23:05.548948 kubelet[2219]: E0416 04:23:05.538117 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:23:05.635829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:23:05.636260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:23:13.560970 containerd[1593]: time="2026-04-16T04:23:13.555662235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:23:13.560970 containerd[1593]: time="2026-04-16T04:23:13.565494827Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 16 04:23:13.877193 containerd[1593]: time="2026-04-16T04:23:13.874075845Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:23:15.285740 containerd[1593]: time="2026-04-16T04:23:15.282602042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:23:15.696537 containerd[1593]: time="2026-04-16T04:23:15.696076545Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 26.889637232s" Apr 16 04:23:15.696537 containerd[1593]: time="2026-04-16T04:23:15.696317471Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 16 04:23:15.739421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 16 04:23:15.762423 containerd[1593]: time="2026-04-16T04:23:15.762166876Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 16 04:23:15.832334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:23:24.654472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:23:25.133275 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:23:29.745130 kubelet[2245]: E0416 04:23:29.742991 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:23:29.775863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:23:29.776906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:23:42.568792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. 
Apr 16 04:23:43.065632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:23:47.950137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:23:48.793759 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:23:52.255262 kubelet[2273]: E0416 04:23:52.253691 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:23:52.415052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:23:52.429042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:23:58.706056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916842723.mount: Deactivated successfully. Apr 16 04:24:02.571336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 16 04:24:02.768793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:24:06.260618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:24:06.305207 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:24:07.010343 kubelet[2300]: E0416 04:24:07.009632 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:24:07.047049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:24:07.054738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:24:12.667955 containerd[1593]: time="2026-04-16T04:24:12.667031344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:24:12.684193 containerd[1593]: time="2026-04-16T04:24:12.682589219Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 16 04:24:12.693613 containerd[1593]: time="2026-04-16T04:24:12.692294390Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:24:12.726594 containerd[1593]: time="2026-04-16T04:24:12.720868847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:24:12.787491 containerd[1593]: time="2026-04-16T04:24:12.786136080Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 57.023050372s" Apr 16 04:24:12.791114 containerd[1593]: time="2026-04-16T04:24:12.788655252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 16 04:24:12.811998 containerd[1593]: time="2026-04-16T04:24:12.808869534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 16 04:24:17.263470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 16 04:24:17.316825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:24:17.578746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762185919.mount: Deactivated successfully. Apr 16 04:24:19.541181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:24:19.576098 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:24:20.315294 kubelet[2330]: E0416 04:24:20.313894 2330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:24:20.320388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:24:20.322484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:24:31.217588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 16 04:24:31.584722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:24:47.179148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:24:47.753330 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:25:23.183944 kubelet[2356]: E0416 04:25:23.182334 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:25:23.219163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:25:23.236638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:25:33.544221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Apr 16 04:25:33.639867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:25:37.818974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:25:37.900799 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:25:40.976924 kubelet[2418]: E0416 04:25:40.976132 2418 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:25:40.983473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:25:40.983862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:25:52.059431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. 
Apr 16 04:25:53.097102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:25:58.882175 containerd[1593]: time="2026-04-16T04:25:58.881558402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:25:59.203941 containerd[1593]: time="2026-04-16T04:25:59.076191215Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 16 04:25:59.724130 containerd[1593]: time="2026-04-16T04:25:59.712884512Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:26:03.295574 containerd[1593]: time="2026-04-16T04:26:03.281660092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:26:04.375928 containerd[1593]: time="2026-04-16T04:26:04.374247819Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1m51.562062523s" Apr 16 04:26:04.439112 containerd[1593]: time="2026-04-16T04:26:04.379995453Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 16 04:26:04.509036 containerd[1593]: time="2026-04-16T04:26:04.501699499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 16 04:26:10.668792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:26:10.909836 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:26:22.244302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089351171.mount: Deactivated successfully. Apr 16 04:26:23.074274 containerd[1593]: time="2026-04-16T04:26:23.059887618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 04:26:23.630557 containerd[1593]: time="2026-04-16T04:26:23.063109665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:26:24.884812 containerd[1593]: time="2026-04-16T04:26:24.882492162Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:26:28.462878 containerd[1593]: time="2026-04-16T04:26:28.457547386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:26:29.419617 containerd[1593]: time="2026-04-16T04:26:29.375357581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 24.82711551s" Apr 16 04:26:29.419617 containerd[1593]: time="2026-04-16T04:26:29.419731972Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 16 04:26:29.765606 containerd[1593]: time="2026-04-16T04:26:29.742050736Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.24-0\"" Apr 16 04:26:43.162122 update_engine[1568]: I20260416 04:26:43.156599 1568 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 04:26:43.162122 update_engine[1568]: I20260416 04:26:43.157685 1568 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 04:26:43.278470 update_engine[1568]: I20260416 04:26:43.197225 1568 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 04:26:43.289463 update_engine[1568]: I20260416 04:26:43.279347 1568 omaha_request_params.cc:62] Current group set to lts Apr 16 04:26:43.295367 update_engine[1568]: I20260416 04:26:43.289939 1568 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 04:26:43.295367 update_engine[1568]: I20260416 04:26:43.290216 1568 update_attempter.cc:643] Scheduling an action processor start. Apr 16 04:26:43.295367 update_engine[1568]: I20260416 04:26:43.290259 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 04:26:43.302239 update_engine[1568]: I20260416 04:26:43.300997 1568 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 04:26:43.376001 update_engine[1568]: I20260416 04:26:43.305905 1568 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 04:26:43.376001 update_engine[1568]: I20260416 04:26:43.333263 1568 omaha_request_action.cc:272] Request: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: Apr 16 04:26:43.376001 update_engine[1568]: I20260416 04:26:43.337468 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:26:43.691457 locksmithd[1634]: 
LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 04:26:43.743254 update_engine[1568]: I20260416 04:26:43.612586 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:26:43.743972 update_engine[1568]: I20260416 04:26:43.743660 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 04:26:43.936989 update_engine[1568]: E20260416 04:26:43.922691 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 04:26:43.968942 update_engine[1568]: I20260416 04:26:43.952732 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 04:26:48.672271 kubelet[2442]: E0416 04:26:48.671931 2442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:26:50.000876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:26:50.124781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:26:52.587877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358293476.mount: Deactivated successfully. Apr 16 04:26:54.152827 update_engine[1568]: I20260416 04:26:54.127797 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:26:54.188255 update_engine[1568]: I20260416 04:26:54.163341 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:26:54.188255 update_engine[1568]: I20260416 04:26:54.171783 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 04:26:54.244785 update_engine[1568]: E20260416 04:26:54.206238 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 04:26:54.244785 update_engine[1568]: I20260416 04:26:54.217919 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 04:27:01.304424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 16 04:27:02.555210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:27:04.063410 update_engine[1568]: I20260416 04:27:04.059144 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:27:04.200600 update_engine[1568]: I20260416 04:27:04.074661 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:27:04.200600 update_engine[1568]: I20260416 04:27:04.102975 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 04:27:04.200600 update_engine[1568]: E20260416 04:27:04.192135 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 04:27:04.200600 update_engine[1568]: I20260416 04:27:04.193809 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 04:27:14.067071 update_engine[1568]: I20260416 04:27:14.066386 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:27:14.134110 update_engine[1568]: I20260416 04:27:14.120643 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:27:14.143032 update_engine[1568]: I20260416 04:27:14.141136 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 04:27:14.168896 update_engine[1568]: E20260416 04:27:14.168668 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 04:27:14.168896 update_engine[1568]: I20260416 04:27:14.168872 1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 04:27:14.168896 update_engine[1568]: I20260416 04:27:14.168901 1568 omaha_request_action.cc:617] Omaha request response: Apr 16 04:27:14.272477 update_engine[1568]: E20260416 04:27:14.169032 1568 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176326 1568 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176570 1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176579 1568 update_attempter.cc:306] Processing Done. Apr 16 04:27:14.272477 update_engine[1568]: E20260416 04:27:14.176615 1568 update_attempter.cc:619] Update failed. Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176635 1568 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176641 1568 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176648 1568 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176833 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.176869 1568 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.186983 1568 omaha_request_action.cc:272] Request: Apr 16 04:27:14.272477 update_engine[1568]: Apr 16 04:27:14.272477 update_engine[1568]: Apr 16 04:27:14.272477 update_engine[1568]: Apr 16 04:27:14.272477 update_engine[1568]: Apr 16 04:27:14.272477 update_engine[1568]: Apr 16 04:27:14.272477 update_engine[1568]: Apr 16 04:27:14.272477 update_engine[1568]: I20260416 04:27:14.196180 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:27:14.292741 update_engine[1568]: I20260416 04:27:14.213645 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:27:14.292741 update_engine[1568]: I20260416 04:27:14.260871 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 04:27:14.293540 update_engine[1568]: E20260416 04:27:14.293407 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 04:27:14.293682 update_engine[1568]: I20260416 04:27:14.293643 1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 04:27:14.293682 update_engine[1568]: I20260416 04:27:14.293675 1568 omaha_request_action.cc:617] Omaha request response: Apr 16 04:27:14.293771 update_engine[1568]: I20260416 04:27:14.293687 1568 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 04:27:14.293771 update_engine[1568]: I20260416 04:27:14.293694 1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 04:27:14.293771 update_engine[1568]: I20260416 04:27:14.293700 1568 update_attempter.cc:306] Processing Done. Apr 16 04:27:14.293771 update_engine[1568]: I20260416 04:27:14.293707 1568 update_attempter.cc:310] Error event sent. Apr 16 04:27:14.293771 update_engine[1568]: I20260416 04:27:14.293720 1568 update_check_scheduler.cc:74] Next update check in 44m34s Apr 16 04:27:14.300369 locksmithd[1634]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 04:27:14.366106 locksmithd[1634]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 04:27:14.593998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:27:14.668255 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:27:30.138919 kubelet[2480]: E0416 04:27:30.134969 2480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:27:30.158744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:27:30.159248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:27:41.058548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Apr 16 04:27:41.636948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:27:54.756330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:27:55.196566 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:28:03.220932 kubelet[2505]: E0416 04:28:03.218792 2505 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:28:03.229981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:28:03.231122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:28:16.150377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Apr 16 04:28:16.177217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:28:19.471584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:28:19.478094 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:28:20.241305 kubelet[2554]: E0416 04:28:20.240225 2554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:28:20.259262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:28:20.261724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:28:30.705603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Apr 16 04:28:30.882693 containerd[1593]: time="2026-04-16T04:28:30.816357462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:28:30.899582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:28:31.006255 containerd[1593]: time="2026-04-16T04:28:31.004757017Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 16 04:28:31.265346 containerd[1593]: time="2026-04-16T04:28:31.257368357Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:28:32.747220 containerd[1593]: time="2026-04-16T04:28:32.744128553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:28:33.537787 containerd[1593]: time="2026-04-16T04:28:33.533766810Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2m3.784954025s" Apr 16 04:28:33.537787 containerd[1593]: time="2026-04-16T04:28:33.540755966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 16 04:28:39.699499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:28:40.350117 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:29:01.004133 kubelet[2606]: E0416 04:29:01.003536 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:29:01.043707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:29:01.056721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:29:11.563654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Apr 16 04:29:12.195595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:29:32.172806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:29:33.347870 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:29:39.429080 containerd[1593]: time="2026-04-16T04:29:39.427946540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\"" Apr 16 04:29:42.734270 kubelet[2647]: E0416 04:29:42.732734 2647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:29:42.779115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:29:42.794880 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 04:29:53.166850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. Apr 16 04:29:54.953609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:30:00.974633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:30:01.007533 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:30:09.507153 kubelet[2689]: E0416 04:30:09.505072 2689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:30:09.542784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:30:09.549659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:30:20.182809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. Apr 16 04:30:20.488354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:30:25.165090 containerd[1593]: time="2026-04-16T04:30:25.156845699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:30:25.240215 containerd[1593]: time="2026-04-16T04:30:25.179879095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.8: active requests=0, bytes read=29285913" Apr 16 04:30:26.149305 containerd[1593]: time="2026-04-16T04:30:26.137188379Z" level=info msg="ImageCreate event name:\"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:30:29.696665 containerd[1593]: time="2026-04-16T04:30:29.695866008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d1f1afdd389ba0b99233830af563d7da79484b8bae6ff905d6edbcb419127bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:30:30.668668 containerd[1593]: time="2026-04-16T04:30:30.661221686Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.8\" with image id \"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d1f1afdd389ba0b99233830af563d7da79484b8bae6ff905d6edbcb419127bd\", size \"30111158\" in 51.227127986s" Apr 16 04:30:30.668668 containerd[1593]: time="2026-04-16T04:30:30.661831164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.8\" returns image reference \"sha256:dc64713f4ac867ea18e11a58b9d7919f5636e80c652734e5aaba316218bdbbdb\"" Apr 16 04:30:30.977829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:30:31.663232 containerd[1593]: time="2026-04-16T04:30:31.662804167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.8\""
Apr 16 04:30:32.659923 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:30:36.881281 kubelet[2712]: E0416 04:30:36.786301 2712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:30:36.927878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:30:36.928198 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:30:44.228281 containerd[1593]: time="2026-04-16T04:30:44.225259423Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.8: active requests=0, bytes read=26021560"
Apr 16 04:30:44.228281 containerd[1593]: time="2026-04-16T04:30:44.226055479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:30:44.640173 containerd[1593]: time="2026-04-16T04:30:44.635835566Z" level=info msg="ImageCreate event name:\"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:30:46.186235 containerd[1593]: time="2026-04-16T04:30:46.178252189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4b93c08a1d78c2065518e8bbcad3132beafab937a9fd0771c82cdb63d2a050b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:30:46.246867 containerd[1593]: time="2026-04-16T04:30:46.243339654Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.8\" with image id \"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4b93c08a1d78c2065518e8bbcad3132beafab937a9fd0771c82cdb63d2a050b8\", size \"27678578\" in 14.475491236s"
Apr 16 04:30:46.249392 containerd[1593]: time="2026-04-16T04:30:46.247827510Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.8\" returns image reference \"sha256:d6c80027e9465615ba510d0c5f3a98ff50a8cd7eaf378b3aaa107f6c9a92216c\""
Apr 16 04:30:46.598729 containerd[1593]: time="2026-04-16T04:30:46.563285399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.8\""
Apr 16 04:30:47.368732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
Apr 16 04:30:47.499029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:30:52.957700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:30:53.017142 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:30:55.672531 kubelet[2738]: E0416 04:30:55.672081 2738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:30:55.709333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:30:55.709819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:31:06.173277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25.
Apr 16 04:31:06.366940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:31:08.976476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:31:09.009372 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:31:09.021611 containerd[1593]: time="2026-04-16T04:31:09.019328971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:31:09.021611 containerd[1593]: time="2026-04-16T04:31:09.022059601Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.8: active requests=0, bytes read=20160949"
Apr 16 04:31:09.035847 containerd[1593]: time="2026-04-16T04:31:09.035714717Z" level=info msg="ImageCreate event name:\"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:31:09.200764 containerd[1593]: time="2026-04-16T04:31:09.200202557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f6c5eae3f9f702a0c00e5c52aa040b2c685acfc9fd8d2646f150a183de36e72f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:31:09.237276 containerd[1593]: time="2026-04-16T04:31:09.235521495Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.8\" with image id \"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f6c5eae3f9f702a0c00e5c52aa040b2c685acfc9fd8d2646f150a183de36e72f\", size \"21817985\" in 22.639948079s"
Apr 16 04:31:09.237276 containerd[1593]: time="2026-04-16T04:31:09.235731561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.8\" returns image reference \"sha256:94ca5455c32fc8639aa2138e77a382b04bb32cd3477d3dcfced2fd2dfe4427b7\""
Apr 16 04:31:09.266812 containerd[1593]: time="2026-04-16T04:31:09.265540606Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.8\""
Apr 16 04:31:09.289017 kubelet[2763]: E0416 04:31:09.288561 2763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:31:09.291572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:31:09.291955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:31:10.202230 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 16 04:31:10.249785 systemd-tmpfiles[2773]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 04:31:10.250235 systemd-tmpfiles[2773]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 04:31:10.250824 systemd-tmpfiles[2773]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 04:31:10.250981 systemd-tmpfiles[2773]: ACLs are not supported, ignoring.
Apr 16 04:31:10.251019 systemd-tmpfiles[2773]: ACLs are not supported, ignoring.
Apr 16 04:31:10.259656 systemd-tmpfiles[2773]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 04:31:10.259811 systemd-tmpfiles[2773]: Skipping /boot
Apr 16 04:31:10.272197 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 16 04:31:10.272731 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 16 04:31:19.510913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26.
Apr 16 04:31:19.586014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:31:22.201237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:31:22.473940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012036018.mount: Deactivated successfully.
Apr 16 04:31:22.475264 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:31:24.754302 kubelet[2793]: E0416 04:31:24.750280 2793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:31:24.773650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:31:24.782475 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:31:30.440613 containerd[1593]: time="2026-04-16T04:31:30.435350409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:31:30.440613 containerd[1593]: time="2026-04-16T04:31:30.444618532Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.8: active requests=0, bytes read=31828042"
Apr 16 04:31:30.592412 containerd[1593]: time="2026-04-16T04:31:30.588501805Z" level=info msg="ImageCreate event name:\"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:31:32.406774 containerd[1593]: time="2026-04-16T04:31:32.403857893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:00c5df7707d5fc1f8b2c95cf71ec8ea82fd27a01af1b720e1f252ece4f71b17c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:31:32.510905 containerd[1593]: time="2026-04-16T04:31:32.505254752Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.8\" with image id \"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\", repo tag \"registry.k8s.io/kube-proxy:v1.33.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:00c5df7707d5fc1f8b2c95cf71ec8ea82fd27a01af1b720e1f252ece4f71b17c\", size \"31827167\" in 23.2279104s"
Apr 16 04:31:32.510905 containerd[1593]: time="2026-04-16T04:31:32.511052539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.8\" returns image reference \"sha256:85ec3b545d037f93f83e44b07f146127cbabe79932928142521ca2d14f41d608\""
Apr 16 04:31:35.079788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27.
Apr 16 04:31:35.162012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:31:40.384545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:31:40.534775 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:31:41.836385 kubelet[2822]: E0416 04:31:41.835815 2822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:31:41.855335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:31:41.855644 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:31:52.697249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 28.
Apr 16 04:31:53.366326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:32:06.990865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:32:07.075960 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:32:11.595766 kubelet[2843]: E0416 04:32:11.566061 2843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:32:11.653427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:32:11.655481 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:32:21.760454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 29.
Apr 16 04:32:21.870779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:32:27.441420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:32:27.546365 (kubelet)[2866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:32:32.571917 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:32:32.703407 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:32:32.728381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:32:33.651421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:32:36.894025 systemd[1]: Reloading requested from client PID 2883 ('systemctl') (unit session-7.scope)...
Apr 16 04:32:36.895844 systemd[1]: Reloading...
Apr 16 04:32:40.603172 zram_generator::config[2922]: No configuration found.
Apr 16 04:32:54.773763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:33:08.126392 systemd[1]: Reloading finished in 31215 ms.
Apr 16 04:33:09.671308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:33:09.701117 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:33:09.770602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:33:10.064565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:33:41.989393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:33:42.728325 (kubelet)[2989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 04:34:38.598177 kubelet[2989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:34:38.598177 kubelet[2989]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 04:34:38.598177 kubelet[2989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:34:38.981466 kubelet[2989]: I0416 04:34:38.612695 2989 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 04:35:03.811167 kubelet[2989]: I0416 04:35:03.787009 2989 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 04:35:04.237540 kubelet[2989]: I0416 04:35:03.837046 2989 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 04:35:04.237540 kubelet[2989]: I0416 04:35:04.109699 2989 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 04:35:09.569709 kubelet[2989]: E0416 04:35:09.564855 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:35:12.558299 kubelet[2989]: E0416 04:35:12.555025 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:35:12.754776 kubelet[2989]: I0416 04:35:12.559571 2989 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 04:35:17.900800 kubelet[2989]: E0416 04:35:17.899628 2989 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 04:35:17.900800 kubelet[2989]: I0416 04:35:17.901679 2989 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 16 04:35:18.626862 kubelet[2989]: E0416 04:35:18.061002 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:35:27.648190 kubelet[2989]: E0416 04:35:27.541153 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:35:27.648190 kubelet[2989]: I0416 04:35:27.648381 2989 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 04:35:29.016256 kubelet[2989]: I0416 04:35:28.845811 2989 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 04:35:29.750240 kubelet[2989]: I0416 04:35:29.034798 2989 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 16 04:35:29.875082 kubelet[2989]: I0416 04:35:29.859156 2989 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 04:35:29.895632 kubelet[2989]: I0416 04:35:29.878331 2989 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 04:35:30.248917 kubelet[2989]: I0416 04:35:30.245022 2989 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 04:35:30.993042 kubelet[2989]: I0416 04:35:30.983222 2989 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 04:35:31.070525 kubelet[2989]: I0416 04:35:31.005088 2989 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 04:35:31.183365 kubelet[2989]: I0416 04:35:31.178163 2989 kubelet.go:386] "Adding apiserver pod source"
Apr 16 04:35:31.234298 kubelet[2989]: I0416 04:35:31.190670 2989 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 04:35:31.707329 kubelet[2989]: E0416 04:35:31.702626 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:35:32.210811 kubelet[2989]: E0416 04:35:31.711615 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:35:32.650945 kubelet[2989]: I0416 04:35:32.561210 2989 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 16 04:35:32.975389 kubelet[2989]: I0416 04:35:32.969605 2989 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 04:35:33.286816 kubelet[2989]: W0416 04:35:33.265632 2989 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 04:35:33.376192 kubelet[2989]: E0416 04:35:33.361110 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:35:34.413257 kubelet[2989]: E0416 04:35:34.350276 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:35:35.785175 kubelet[2989]: I0416 04:35:35.779483 2989 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 04:35:35.834781 kubelet[2989]: I0416 04:35:35.795398 2989 server.go:1289] "Started kubelet"
Apr 16 04:35:35.834781 kubelet[2989]: I0416 04:35:35.828948 2989 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 04:35:36.271160 kubelet[2989]: I0416 04:35:35.972399 2989 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 04:35:36.827887 kubelet[2989]: I0416 04:35:36.813587 2989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 04:35:36.993415 kubelet[2989]: I0416 04:35:36.988909 2989 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 04:35:37.038248 kubelet[2989]: I0416 04:35:37.036340 2989 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 04:35:37.056006 kubelet[2989]: I0416 04:35:37.051601 2989 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 04:35:37.207786 kubelet[2989]: I0416 04:35:37.173373 2989 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 04:35:37.249923 kubelet[2989]: E0416 04:35:37.155770 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:37.264431 kubelet[2989]: I0416 04:35:37.253932 2989 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 04:35:37.281300 kubelet[2989]: I0416 04:35:37.274912 2989 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 04:35:37.314656 kubelet[2989]: E0416 04:35:37.314068 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:35:37.519692 kubelet[2989]: E0416 04:35:37.509250 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms"
Apr 16 04:35:37.555812 kubelet[2989]: E0416 04:35:37.386298 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:37.750267 kubelet[2989]: E0416 04:35:37.749476 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:35:37.892704 kubelet[2989]: E0416 04:35:37.858610 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:35:38.698551 kubelet[2989]: E0416 04:35:37.364340 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:35:38.872873 kubelet[2989]: E0416 04:35:38.396676 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:39.090038 kubelet[2989]: E0416 04:35:39.064308 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:39.466150 kubelet[2989]: E0416 04:35:39.345024 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:39.986928 kubelet[2989]: E0416 04:35:39.982345 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:40.158324 kubelet[2989]: E0416 04:35:40.153226 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms"
Apr 16 04:35:40.287140 kubelet[2989]: I0416 04:35:40.278365 2989 factory.go:223] Registration of the systemd container factory successfully
Apr 16 04:35:40.287140 kubelet[2989]: E0416 04:35:40.278919 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:40.287140 kubelet[2989]: I0416 04:35:40.279842 2989 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 04:35:40.287140 kubelet[2989]: E0416 04:35:40.286519 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:35:40.472874 kubelet[2989]: E0416 04:35:40.468521 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:40.600059 kubelet[2989]: E0416 04:35:40.587324 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:41.080145 kubelet[2989]: E0416 04:35:41.066912 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:41.263099 kubelet[2989]: E0416 04:35:41.234737 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:41.624202 kubelet[2989]: W0416 04:35:41.555095 2989 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
Apr 16 04:35:41.790516 kubelet[2989]: E0416 04:35:41.625368 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:41.938823 kubelet[2989]: E0416 04:35:41.923982 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:42.072406 kubelet[2989]: E0416 04:35:42.005884 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms"
Apr 16 04:35:42.072406 kubelet[2989]: E0416 04:35:41.961737 2989 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 04:35:42.072406 kubelet[2989]: E0416 04:35:42.074886 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:42.250356 kubelet[2989]: E0416 04:35:42.181328 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:42.250356 kubelet[2989]: E0416 04:35:42.182057 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:35:42.711417 kubelet[2989]: E0416 04:35:42.611995 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:42.776969 kubelet[2989]: I0416 04:35:42.768304 2989 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: context deadline exceeded
Apr 16 04:35:42.987930 kubelet[2989]: E0416 04:35:42.858397 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:43.137290 kubelet[2989]: E0416 04:35:43.116498 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:43.337614 kubelet[2989]: E0416 04:35:43.264122 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:43.337614 kubelet[2989]: E0416 04:35:43.333511 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s"
Apr 16 04:35:43.337614 kubelet[2989]: E0416 04:35:43.335341 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:35:43.492478 kubelet[2989]: E0416 04:35:43.484870 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:43.645052 kubelet[2989]: E0416 04:35:43.632148 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:43.762314 kubelet[2989]: E0416 04:35:43.761764 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:44.018023 kubelet[2989]: E0416 04:35:44.016111 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:44.283936 kubelet[2989]: E0416 04:35:44.279614 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:44.457652 kubelet[2989]: E0416 04:35:44.412279 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:35:44.563700 kubelet[2989]: E0416 04:35:44.561164 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:35:44.616918 kubelet[2989]: E0416 04:35:44.610092 2989 kubelet_node_status.go:466] "Error getting the current node from lister"
err="node \"localhost\" not found" Apr 16 04:35:44.869469 kubelet[2989]: E0416 04:35:44.861426 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:35:44.869469 kubelet[2989]: E0416 04:35:44.861384 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:44.982580 kubelet[2989]: E0416 04:35:44.962367 2989 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:35:45.092006 kubelet[2989]: E0416 04:35:44.981385 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:45.203824 kubelet[2989]: E0416 04:35:45.201702 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:45.330239 kubelet[2989]: E0416 04:35:45.319847 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:45.573417 kubelet[2989]: E0416 04:35:45.558103 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="3.2s" Apr 16 04:35:45.672229 kubelet[2989]: E0416 04:35:45.661235 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:45.809277 kubelet[2989]: E0416 04:35:45.797885 2989 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:46.112791 kubelet[2989]: E0416 04:35:46.088087 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:46.238230 kubelet[2989]: E0416 04:35:45.994074 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:35:46.309111 kubelet[2989]: E0416 04:35:46.267503 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:46.551374 kubelet[2989]: E0416 04:35:46.442183 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:46.713564 kubelet[2989]: E0416 04:35:46.713042 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:46.869068 kubelet[2989]: E0416 04:35:46.863540 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:47.008074 kubelet[2989]: E0416 04:35:47.002840 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 
04:35:47.014506 kubelet[2989]: I0416 04:35:47.001392 2989 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 04:35:47.286409 kubelet[2989]: E0416 04:35:47.238180 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:47.454967 kubelet[2989]: E0416 04:35:47.449600 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:47.576997 kubelet[2989]: E0416 04:35:47.566736 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:47.725683 kubelet[2989]: E0416 04:35:47.720198 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:47.954387 kubelet[2989]: E0416 04:35:47.875136 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:47.954387 kubelet[2989]: I0416 04:35:47.875147 2989 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 04:35:47.998804 kubelet[2989]: I0416 04:35:47.964828 2989 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 04:35:47.998804 kubelet[2989]: I0416 04:35:47.989130 2989 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 04:35:48.160338 kubelet[2989]: I0416 04:35:48.158292 2989 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 04:35:48.288346 kubelet[2989]: E0416 04:35:48.151401 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:48.385588 kubelet[2989]: E0416 04:35:48.374274 2989 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:35:48.401165 kubelet[2989]: E0416 04:35:48.391304 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:48.869640 kubelet[2989]: E0416 04:35:48.865790 2989 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:35:49.013571 kubelet[2989]: E0416 04:35:49.012870 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:49.036388 kubelet[2989]: E0416 04:35:49.036174 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:35:49.367618 kubelet[2989]: E0416 04:35:49.145433 2989 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:35:49.450997 kubelet[2989]: E0416 04:35:49.277859 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:49.564925 kubelet[2989]: E0416 04:35:49.563561 2989 reflector.go:200] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:35:49.564925 kubelet[2989]: E0416 04:35:49.563944 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:49.564925 kubelet[2989]: E0416 04:35:49.565075 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="6.4s" Apr 16 04:35:49.829174 kubelet[2989]: E0416 04:35:49.816408 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:50.013237 kubelet[2989]: E0416 04:35:49.908604 2989 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:35:50.129888 kubelet[2989]: E0416 04:35:50.055258 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:50.250424 kubelet[2989]: E0416 04:35:50.246347 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:50.509410 kubelet[2989]: E0416 04:35:50.487613 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:50.632636 kubelet[2989]: E0416 04:35:50.628623 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:50.814842 kubelet[2989]: E0416 04:35:50.797261 2989 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Apr 16 04:35:50.882033 kubelet[2989]: E0416 04:35:50.870214 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:35:50.964974 kubelet[2989]: E0416 04:35:50.942399 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:50.964974 kubelet[2989]: E0416 04:35:50.942394 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:35:51.084093 kubelet[2989]: E0416 04:35:51.069151 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:51.674007 kubelet[2989]: E0416 04:35:51.669205 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:51.809845 kubelet[2989]: E0416 04:35:51.805106 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:52.078337 kubelet[2989]: E0416 04:35:51.957258 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:52.233279 kubelet[2989]: E0416 04:35:52.226748 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:52.700082 kubelet[2989]: E0416 04:35:52.682357 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:52.891002 kubelet[2989]: E0416 04:35:52.722233 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 
16 04:35:53.211566 kubelet[2989]: I0416 04:35:53.206526 2989 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:35:53.311303 kubelet[2989]: E0416 04:35:53.208011 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:53.571137 kubelet[2989]: I0416 04:35:53.558866 2989 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:35:53.711056 kubelet[2989]: I0416 04:35:53.692677 2989 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:35:53.769049 kubelet[2989]: E0416 04:35:53.703411 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:53.919817 kubelet[2989]: E0416 04:35:53.905408 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:54.080669 kubelet[2989]: E0416 04:35:54.052282 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:54.236768 kubelet[2989]: E0416 04:35:54.210310 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:54.378909 kubelet[2989]: I0416 04:35:54.377872 2989 policy_none.go:49] "None policy: Start" Apr 16 04:35:54.378909 kubelet[2989]: I0416 04:35:54.378372 2989 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 04:35:54.501316 kubelet[2989]: I0416 04:35:54.397331 2989 state_mem.go:35] "Initializing new in-memory state store" Apr 16 04:35:54.501316 kubelet[2989]: E0416 04:35:54.497032 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:54.501316 kubelet[2989]: E0416 04:35:54.497219 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:35:54.703155 kubelet[2989]: E0416 04:35:54.639390 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:55.180792 kubelet[2989]: E0416 04:35:55.180276 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:55.801216 kubelet[2989]: E0416 04:35:55.797395 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:56.003048 kubelet[2989]: E0416 04:35:56.002371 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:35:56.227295 kubelet[2989]: E0416 04:35:56.209865 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:56.366831 kubelet[2989]: E0416 04:35:56.360672 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:56.752269 kubelet[2989]: E0416 04:35:56.745066 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:56.875264 kubelet[2989]: E0416 04:35:56.873924 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:57.397367 kubelet[2989]: E0416 04:35:57.031668 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:57.721359 kubelet[2989]: E0416 04:35:57.709646 2989 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:57.967562 kubelet[2989]: E0416 04:35:57.778836 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:35:58.072203 kubelet[2989]: E0416 04:35:57.967610 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:58.072203 kubelet[2989]: E0416 04:35:57.996133 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:35:58.087573 kubelet[2989]: E0416 04:35:58.087370 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:35:58.104178 kubelet[2989]: E0416 04:35:58.087724 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:58.105725 kubelet[2989]: E0416 04:35:58.102363 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC 
m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:35:58.558505 kubelet[2989]: E0416 04:35:58.555661 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:58.671631 kubelet[2989]: E0416 04:35:58.665385 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:58.801173 kubelet[2989]: E0416 04:35:58.800236 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:58.975337 kubelet[2989]: E0416 04:35:58.939263 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:59.490975 kubelet[2989]: E0416 04:35:59.134136 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:59.592979 kubelet[2989]: E0416 04:35:59.496819 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:59.654149 kubelet[2989]: E0416 04:35:59.650263 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:35:59.832316 kubelet[2989]: E0416 04:35:59.802791 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:35:59.963178 kubelet[2989]: E0416 04:35:59.956153 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:36:00.070084 kubelet[2989]: E0416 04:35:59.884141 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:36:00.182843 kubelet[2989]: E0416 04:36:00.111210 2989 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:36:00.248310 kubelet[2989]: E0416 04:36:00.185627 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:36:00.248310 kubelet[2989]: I0416 04:36:00.221175 2989 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:36:00.298320 kubelet[2989]: I0416 04:36:00.251286 2989 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:36:00.335427 kubelet[2989]: E0416 04:36:00.333724 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:36:00.507860 kubelet[2989]: I0416 04:36:00.502355 2989 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:36:02.191477 kubelet[2989]: E0416 04:36:02.182718 2989 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 04:36:02.583035 kubelet[2989]: E0416 04:36:02.355285 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:36:02.583035 kubelet[2989]: I0416 04:36:02.462020 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:36:03.657425 kubelet[2989]: I0416 04:36:03.649426 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/185b4bd2fb947057f02f1a819bbd3411-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"185b4bd2fb947057f02f1a819bbd3411\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:36:04.159751 kubelet[2989]: E0416 04:36:04.013865 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:36:04.490400 kubelet[2989]: I0416 04:36:04.464210 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/185b4bd2fb947057f02f1a819bbd3411-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"185b4bd2fb947057f02f1a819bbd3411\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:36:04.796273 kubelet[2989]: I0416 04:36:04.766167 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/185b4bd2fb947057f02f1a819bbd3411-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"185b4bd2fb947057f02f1a819bbd3411\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:36:05.805935 kubelet[2989]: E0416 04:36:05.576369 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:36:07.111682 kubelet[2989]: I0416 04:36:07.106921 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:36:07.510166 kubelet[2989]: E0416 04:36:07.464775 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:36:07.943407 kubelet[2989]: I0416 04:36:07.941994 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:36:08.270640 kubelet[2989]: I0416 04:36:08.197774 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:36:08.270640 kubelet[2989]: I0416 04:36:08.225844 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:36:08.498731 kubelet[2989]: I0416 04:36:08.497101 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:36:08.756558 kubelet[2989]: I0416 04:36:08.541942 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:36:09.281702 kubelet[2989]: E0416 04:36:09.091733 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:36:12.929416 kubelet[2989]: E0416 04:36:12.927806 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:36:13.454182 kubelet[2989]: E0416 04:36:13.084964 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:36:13.489658 kubelet[2989]: I0416 04:36:13.095197 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:36:14.093509 kubelet[2989]: E0416 04:36:14.071708 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:36:15.118841 kubelet[2989]: E0416 04:36:15.110829 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:36:16.094750 kubelet[2989]: E0416 04:36:15.843296 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:36:16.797011 kubelet[2989]: I0416 04:36:16.789946 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae88e85786a13701eebaf6993fb55ff4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ae88e85786a13701eebaf6993fb55ff4\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:36:21.160541 kubelet[2989]: E0416 04:36:21.156677 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 
04:36:23.148397 kubelet[2989]: E0416 04:36:23.145073 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:36:23.342387 kubelet[2989]: E0416 04:36:22.741769 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:36:24.211774 kubelet[2989]: I0416 04:36:24.202396 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:36:24.980341 kubelet[2989]: E0416 04:36:24.978832 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:36:25.166269 kubelet[2989]: E0416 04:36:25.154610 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:36:26.577075 kubelet[2989]: E0416 04:36:26.157630 2989 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:36:28.057275 kubelet[2989]: E0416 04:36:28.052373 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:36:29.164006 kubelet[2989]: E0416 04:36:29.158787 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:36:32.656148 kubelet[2989]: E0416 04:36:32.443333 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:36:35.263192 kubelet[2989]: E0416 04:36:35.259332 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:36:36.568415 kubelet[2989]: E0416 04:36:36.566006 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:36:36.806580 kubelet[2989]: E0416 04:36:36.293202 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:36:37.528943 kubelet[2989]: E0416 04:36:37.199062 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:36:37.845167 kubelet[2989]: I0416 04:36:37.673417 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:36:39.069357 containerd[1593]: time="2026-04-16T04:36:39.042171200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:185b4bd2fb947057f02f1a819bbd3411,Namespace:kube-system,Attempt:0,}" Apr 16 04:36:39.393387 kubelet[2989]: E0416 04:36:39.090355 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:36:49.329259 kubelet[2989]: E0416 04:36:48.786118 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:36:50.275329 kubelet[2989]: E0416 04:36:50.273627 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:36:52.602022 kubelet[2989]: E0416 04:36:51.502735 2989 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:36:53.887767 kubelet[2989]: E0416 04:36:51.607509 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:36:56.810590 kubelet[2989]: E0416 04:36:56.783293 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:36:59.819487 kubelet[2989]: I0416 04:36:59.818148 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:37:00.533660 containerd[1593]: time="2026-04-16T04:36:59.860023952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:661aacf61b27dbeb7414ee44841cd3ce,Namespace:kube-system,Attempt:0,}" Apr 16 04:37:01.759906 kubelet[2989]: E0416 04:37:01.432432 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:37:03.115576 kubelet[2989]: E0416 04:37:03.094924 2989 reflector.go:200] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:37:07.263790 kubelet[2989]: E0416 04:37:06.468399 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:37:09.111741 kubelet[2989]: E0416 04:37:09.110932 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:37:13.007323 kubelet[2989]: E0416 04:37:13.001785 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:37:14.259091 kubelet[2989]: E0416 04:37:14.251367 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:37:14.754233 kubelet[2989]: E0416 04:37:14.723187 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:37:16.993544 kubelet[2989]: E0416 04:37:16.987793 2989 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:37:17.214387 kubelet[2989]: E0416 04:37:17.210772 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:37:17.293229 kubelet[2989]: E0416 04:37:16.834360 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:37:17.599057 kubelet[2989]: E0416 04:37:17.588729 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:37:17.782817 kubelet[2989]: E0416 04:37:17.780937 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:37:21.306434 kubelet[2989]: E0416 04:37:21.290261 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:37:25.364110 kubelet[2989]: I0416 04:37:25.360393 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:37:25.817264 kubelet[2989]: E0416 04:37:25.669914 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:37:25.974172 kubelet[2989]: E0416 04:37:25.969148 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:37:25.974172 kubelet[2989]: E0416 04:37:25.969619 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:37:25.974172 kubelet[2989]: E0416 04:37:25.969702 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:37:26.296120 containerd[1593]: time="2026-04-16T04:37:26.279700632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ae88e85786a13701eebaf6993fb55ff4,Namespace:kube-system,Attempt:0,}" Apr 16 04:37:30.371346 kubelet[2989]: E0416 04:37:30.162333 2989 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:37:37.150365 kubelet[2989]: E0416 04:37:37.134114 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:37:38.166430 kubelet[2989]: E0416 04:37:37.243914 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:37:40.185088 kubelet[2989]: I0416 04:37:40.184597 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:37:40.185986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269415904.mount: Deactivated successfully. 
Apr 16 04:37:40.478209 kubelet[2989]: E0416 04:37:40.440109 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:37:40.696312 kubelet[2989]: E0416 04:37:40.679934 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:37:42.077259 containerd[1593]: time="2026-04-16T04:37:42.059186111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:37:44.311102 containerd[1593]: time="2026-04-16T04:37:44.195581169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 04:37:46.029004 kubelet[2989]: E0416 04:37:46.015412 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:37:46.662410 containerd[1593]: time="2026-04-16T04:37:46.123307048Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 04:37:47.414662 kubelet[2989]: E0416 04:37:47.397293 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:37:48.947762 containerd[1593]: time="2026-04-16T04:37:48.944473670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 04:37:49.102095 containerd[1593]: time="2026-04-16T04:37:48.946563145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:37:50.067937 kubelet[2989]: I0416 04:37:50.058733 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:37:51.563362 kubelet[2989]: E0416 04:37:50.949257 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:37:51.925591 kubelet[2989]: E0416 04:37:51.868756 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 
04:37:54.880418 kubelet[2989]: E0416 04:37:54.852581 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:37:54.880418 kubelet[2989]: E0416 04:37:54.853155 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:37:54.880418 kubelet[2989]: E0416 04:37:54.853389 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:37:55.599158 kubelet[2989]: E0416 04:37:55.496251 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:37:56.986926 containerd[1593]: time="2026-04-16T04:37:56.836179850Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:37:57.689332 kubelet[2989]: E0416 04:37:57.687775 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Apr 16 04:38:00.672101 kubelet[2989]: I0416 04:38:00.658254 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:38:01.588059 kubelet[2989]: E0416 04:38:01.521794 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:38:02.376429 kubelet[2989]: E0416 04:38:02.316034 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:38:02.851969 kubelet[2989]: E0416 04:38:02.268027 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:02.985539 kubelet[2989]: E0416 04:38:02.981844 2989 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a6bc4b917eab60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,LastTimestamp:2026-04-16 04:35:35.793806176 +0000 UTC m=+112.092307978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:08.064836 kubelet[2989]: E0416 04:38:08.050602 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:38:08.387881 kubelet[2989]: E0416 04:38:08.346697 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:38:09.042972 kubelet[2989]: E0416 04:38:05.076090 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:09.253775 containerd[1593]: time="2026-04-16T04:38:09.252599262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1m9.08215817s" Apr 16 04:38:09.590739 containerd[1593]: time="2026-04-16T04:38:09.576903386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:38:10.448906 kubelet[2989]: E0416 04:38:10.446426 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:10.448906 kubelet[2989]: E0416 04:38:10.446985 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection 
refused" interval="7s" Apr 16 04:38:11.167051 containerd[1593]: time="2026-04-16T04:38:10.995006658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1m31.474132083s" Apr 16 04:38:11.861095 containerd[1593]: time="2026-04-16T04:38:11.860106166Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 45.520484696s" Apr 16 04:38:12.085988 kubelet[2989]: I0416 04:38:11.861720 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:38:13.171218 kubelet[2989]: E0416 04:38:13.152802 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:38:16.117723 containerd[1593]: time="2026-04-16T04:38:16.103164779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:38:16.939279 kubelet[2989]: E0416 04:38:16.938041 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:38:17.781296 
kubelet[2989]: E0416 04:38:17.775356 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:38:19.194055 kubelet[2989]: E0416 04:38:19.179998 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:38:21.165100 kubelet[2989]: E0416 04:38:20.900277 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:23.799002 kubelet[2989]: I0416 04:38:23.783193 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:38:25.817657 systemd-journald[1155]: Under memory pressure, flushing caches. Apr 16 04:38:25.548169 systemd-resolved[1469]: Under memory pressure, flushing caches. Apr 16 04:38:25.683983 systemd-resolved[1469]: Flushed all caches. 
Apr 16 04:38:28.068215 kubelet[2989]: E0416 04:38:27.958566 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:38:28.912159 kubelet[2989]: E0416 04:38:28.851631 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:38:29.813352 kubelet[2989]: E0416 04:38:29.801390 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:38:31.160852 kubelet[2989]: E0416 04:38:31.159584 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:38:33.157392 kubelet[2989]: E0416 04:38:33.144990 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:38:33.890172 kubelet[2989]: E0416 04:38:32.867060 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:36.266249 kubelet[2989]: E0416 04:38:36.265746 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:38:36.451784 kubelet[2989]: I0416 04:38:36.451748 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:38:36.457755 kubelet[2989]: E0416 04:38:36.457411 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:38:40.626270 kubelet[2989]: E0416 04:38:40.405266 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:38:41.089903 kubelet[2989]: E0416 04:38:40.963758 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:38:43.717261 kubelet[2989]: E0416 04:38:43.659328 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:38:44.917683 kubelet[2989]: E0416 04:38:44.915162 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:45.339270 containerd[1593]: time="2026-04-16T04:38:44.795620497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:38:45.339270 containerd[1593]: time="2026-04-16T04:38:45.049109055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:38:45.339270 containerd[1593]: time="2026-04-16T04:38:45.049140916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:38:46.216977 containerd[1593]: time="2026-04-16T04:38:46.111252007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:38:49.606853 kubelet[2989]: I0416 04:38:49.606488 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:38:49.959319 kubelet[2989]: E0416 04:38:49.951240 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:38:50.088720 containerd[1593]: time="2026-04-16T04:38:50.059471023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:38:50.255403 containerd[1593]: time="2026-04-16T04:38:50.215885249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:38:50.644251 containerd[1593]: time="2026-04-16T04:38:50.521756387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:38:51.424930 containerd[1593]: time="2026-04-16T04:38:50.993474082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:38:52.598919 kubelet[2989]: E0416 04:38:51.808562 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:38:52.969030 kubelet[2989]: E0416 04:38:52.615319 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:38:55.971002 kubelet[2989]: E0416 04:38:55.967160 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:38:58.739124 kubelet[2989]: E0416 04:38:58.737799 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:39:00.309608 kubelet[2989]: E0416 04:39:00.045852 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup 
v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:39:01.389291 kubelet[2989]: E0416 04:39:01.388761 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:39:01.886476 containerd[1593]: time="2026-04-16T04:39:00.274201748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:39:02.286145 containerd[1593]: time="2026-04-16T04:39:01.930973531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:39:02.286145 containerd[1593]: time="2026-04-16T04:39:01.988152493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:39:02.817252 containerd[1593]: time="2026-04-16T04:39:02.182129393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:39:05.271886 kubelet[2989]: E0416 04:39:04.768367 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:39:06.388202 kubelet[2989]: E0416 04:39:06.373799 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:39:14.255567 systemd[1]: run-containerd-runc-k8s.io-cb270ce5cb4d49fb60eb07b4d238db4eb52c50b320223952fa94f8a5f91de0b2-runc.TIQusb.mount: Deactivated successfully. Apr 16 04:39:23.091891 kubelet[2989]: E0416 04:39:22.794026 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:39:24.088636 kubelet[2989]: E0416 04:39:23.750906 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:39:24.877344 kubelet[2989]: I0416 04:39:24.843924 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:39:26.711548 systemd[1]: run-containerd-runc-k8s.io-bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16-runc.4hgTOm.mount: Deactivated successfully. 
Apr 16 04:39:28.311601 kubelet[2989]: E0416 04:39:27.592961 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:39:29.750822 kubelet[2989]: E0416 04:39:25.901098 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:39:30.599629 kubelet[2989]: E0416 04:39:28.752368 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:39:35.342334 kubelet[2989]: E0416 04:39:35.302324 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:39:38.340423 kubelet[2989]: E0416 04:39:37.208317 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:39:40.157993 kubelet[2989]: E0416 04:39:39.228804 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:39:45.192972 kubelet[2989]: E0416 04:39:43.292345 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:39:45.641196 containerd[1593]: time="2026-04-16T04:39:45.252415640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:661aacf61b27dbeb7414ee44841cd3ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\"" Apr 16 04:39:46.069473 kubelet[2989]: E0416 04:39:46.063889 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:39:47.970992 kubelet[2989]: E0416 04:39:47.672281 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:39:49.829355 kubelet[2989]: E0416 04:39:49.818224 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:39:50.075384 kubelet[2989]: E0416 04:39:50.069586 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:39:50.833181 kubelet[2989]: E0416 04:39:50.815602 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:39:52.331228 kubelet[2989]: E0416 04:39:52.326190 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:39:53.206593 kubelet[2989]: I0416 04:39:53.101874 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:39:54.065581 containerd[1593]: 
time="2026-04-16T04:39:54.050201155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:185b4bd2fb947057f02f1a819bbd3411,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb270ce5cb4d49fb60eb07b4d238db4eb52c50b320223952fa94f8a5f91de0b2\"" Apr 16 04:39:57.779540 containerd[1593]: time="2026-04-16T04:39:57.768307812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ae88e85786a13701eebaf6993fb55ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\"" Apr 16 04:39:58.348091 kubelet[2989]: E0416 04:39:57.209391 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:39:58.693917 kubelet[2989]: E0416 04:39:58.647925 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:39:58.901230 kubelet[2989]: E0416 04:39:58.862824 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:39:59.284816 kubelet[2989]: E0416 04:39:58.779115 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:39:59.284816 kubelet[2989]: E0416 04:39:58.869881 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:40:00.270572 kubelet[2989]: E0416 04:40:00.166877 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:40:01.195720 containerd[1593]: time="2026-04-16T04:40:01.156997650Z" level=info msg="CreateContainer within sandbox \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 04:40:02.320042 containerd[1593]: time="2026-04-16T04:40:02.318817380Z" level=info msg="CreateContainer within sandbox \"cb270ce5cb4d49fb60eb07b4d238db4eb52c50b320223952fa94f8a5f91de0b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 04:40:02.637211 kubelet[2989]: E0416 04:40:02.627202 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:40:03.842594 containerd[1593]: time="2026-04-16T04:40:03.809356281Z" level=info msg="CreateContainer within sandbox \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 
16 04:40:04.985670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762545436.mount: Deactivated successfully. Apr 16 04:40:07.059826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151091437.mount: Deactivated successfully. Apr 16 04:40:07.336519 kubelet[2989]: E0416 04:40:07.317741 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:40:08.252967 kubelet[2989]: I0416 04:40:08.247782 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:40:08.593133 kubelet[2989]: E0416 04:40:08.582763 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:40:09.802055 kubelet[2989]: E0416 04:40:09.695221 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:40:10.367242 kubelet[2989]: E0416 04:40:10.073810 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:40:10.586364 containerd[1593]: time="2026-04-16T04:40:10.455164668Z" level=info msg="CreateContainer within sandbox \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\"" Apr 16 04:40:12.064923 containerd[1593]: time="2026-04-16T04:40:12.061875638Z" level=info msg="StartContainer for \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\"" Apr 16 04:40:12.765014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580505500.mount: Deactivated successfully. Apr 16 04:40:18.455347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985363020.mount: Deactivated successfully. Apr 16 04:40:23.420350 kubelet[2989]: E0416 04:40:23.099411 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:40:24.766748 kubelet[2989]: E0416 04:40:24.757244 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:40:24.927365 kubelet[2989]: E0416 04:40:24.926142 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:40:25.357352 kubelet[2989]: E0416 04:40:25.356918 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial 
tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:40:25.556857 kubelet[2989]: I0416 04:40:25.359758 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:40:25.899970 kubelet[2989]: E0416 04:40:25.815277 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:40:26.350930 kubelet[2989]: E0416 04:40:25.972344 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:40:26.840612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474925866.mount: Deactivated successfully. Apr 16 04:40:27.550071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890401752.mount: Deactivated successfully. 
Apr 16 04:40:27.880060 containerd[1593]: time="2026-04-16T04:40:27.851076220Z" level=info msg="CreateContainer within sandbox \"cb270ce5cb4d49fb60eb07b4d238db4eb52c50b320223952fa94f8a5f91de0b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\"" Apr 16 04:40:29.602341 containerd[1593]: time="2026-04-16T04:40:29.430122218Z" level=info msg="StartContainer for \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\"" Apr 16 04:40:29.852330 containerd[1593]: time="2026-04-16T04:40:29.601641429Z" level=info msg="CreateContainer within sandbox \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\"" Apr 16 04:40:30.267923 kubelet[2989]: E0416 04:40:30.266236 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:40:32.838381 containerd[1593]: time="2026-04-16T04:40:32.786973596Z" level=info msg="StartContainer for \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\"" Apr 16 04:40:35.038167 kubelet[2989]: E0416 04:40:34.153196 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:40:35.914264 kubelet[2989]: E0416 04:40:35.883936 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:40:37.586783 kubelet[2989]: E0416 
04:40:37.585127 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:40:37.970361 kubelet[2989]: E0416 04:40:37.457232 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:40:37.970361 kubelet[2989]: E0416 04:40:37.734989 2989 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a6bc4bd0edf465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC m=+113.156565783,LastTimestamp:2026-04-16 04:35:36.858063973 +0000 UTC 
m=+113.156565783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:40:39.450112 kubelet[2989]: I0416 04:40:39.448037 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:40:40.024619 kubelet[2989]: E0416 04:40:39.819834 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:40:40.165386 kubelet[2989]: E0416 04:40:40.158344 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:40:40.165386 kubelet[2989]: E0416 04:40:40.158196 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:40:44.872087 kubelet[2989]: E0416 04:40:44.864682 2989 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:40:46.589083 kubelet[2989]: E0416 04:40:46.559265 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:40:47.640091 kubelet[2989]: E0416 04:40:47.636296 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:40:50.605350 kubelet[2989]: I0416 04:40:50.604320 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:40:51.381616 kubelet[2989]: E0416 04:40:51.362201 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:40:54.674264 kubelet[2989]: E0416 04:40:54.300599 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" 
interval="7s" Apr 16 04:40:56.828422 kubelet[2989]: E0416 04:40:56.754393 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:41:00.309669 containerd[1593]: time="2026-04-16T04:41:00.133657036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:41:01.490625 containerd[1593]: time="2026-04-16T04:41:00.493251447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:41:02.961011 containerd[1593]: time="2026-04-16T04:41:02.784708096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:41:04.620179 kubelet[2989]: E0416 04:41:01.643673 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:41:06.621373 containerd[1593]: time="2026-04-16T04:41:05.572256652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:41:08.795893 kubelet[2989]: E0416 04:41:08.470867 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:41:11.893182 kubelet[2989]: E0416 04:41:11.652599 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:41:12.215997 kubelet[2989]: E0416 04:41:11.801038 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:41:13.294509 kubelet[2989]: E0416 04:41:13.288863 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:41:18.263315 kubelet[2989]: I0416 04:41:18.239218 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:41:19.009130 kubelet[2989]: E0416 04:41:17.520619 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:41:19.776600 kubelet[2989]: E0416 04:41:19.762342 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:41:22.785081 kubelet[2989]: E0416 04:41:22.613597 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:41:23.262401 kubelet[2989]: E0416 04:41:22.820746 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:41:23.570175 kubelet[2989]: E0416 04:41:23.467898 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:41:24.103275 kubelet[2989]: E0416 04:41:24.101614 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:41:30.247911 systemd[1]: run-containerd-runc-k8s.io-456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60-runc.HlSxea.mount: Deactivated successfully. Apr 16 04:41:35.817956 kubelet[2989]: E0416 04:41:35.739011 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:41:37.767381 kubelet[2989]: E0416 04:41:34.851061 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:41:44.873113 systemd[1]: run-containerd-runc-k8s.io-b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799-runc.j6BSCZ.mount: Deactivated successfully. 
Apr 16 04:41:47.208072 kubelet[2989]: E0416 04:41:44.248124 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:41:56.097346 kubelet[2989]: E0416 04:41:56.082728 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 16 04:42:01.966424 kubelet[2989]: E0416 04:42:01.934137 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:42:03.954790 kubelet[2989]: E0416 04:42:02.808147 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:42:18.185345 containerd[1593]: time="2026-04-16T04:42:17.198270350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:42:18.785810 containerd[1593]: time="2026-04-16T04:42:18.061225242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:42:18.945103 kubelet[2989]: E0416 04:42:16.612190 2989 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60" Apr 16 04:42:20.644801 containerd[1593]: time="2026-04-16T04:42:19.765285997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:42:26.619496 kubelet[2989]: E0416 04:42:09.789939 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:42:27.953491 kubelet[2989]: E0416 04:42:23.020526 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:42:29.109246 containerd[1593]: time="2026-04-16T04:42:27.565115363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:42:31.395795 kubelet[2989]: E0416 04:42:31.359038 2989 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799" Apr 16 04:42:33.858104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799-rootfs.mount: Deactivated successfully. 
Apr 16 04:42:36.702964 containerd[1593]: time="2026-04-16T04:42:36.310499921Z" level=error msg="ttrpc: received message on inactive stream" stream=1 Apr 16 04:42:37.596281 kubelet[2989]: E0416 04:42:36.804701 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:42:38.751346 kubelet[2989]: I0416 04:42:37.742055 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:42:39.305313 kubelet[2989]: E0416 04:42:38.214207 2989 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.33.8,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=192.168.0.0/17 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true --flex-volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {} 200m 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/opt/libexec/kubernetes/kubelet-plugins/volume/exec/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-localhost_kube-system(661aacf61b27dbeb7414ee44841cd3ce): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 16 04:42:41.121222 
kubelet[2989]: E0416 04:42:38.316229 2989 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.33.8,Command:[kube-apiserver --advertise-address=10.0.0.5 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-servers=http://10.0.0.4:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {} 250m 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/etc/kubernetes/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.5,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 6443 },Host:10.0.0.5,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 6443 },Host:10.0.0.5,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-localhost_kube-system(185b4bd2fb947057f02f1a819bbd3411): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 16 04:42:41.645824 kubelet[2989]: E0416 04:42:40.887079 2989 reflector.go:200] 
"Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:42:42.146776 kubelet[2989]: E0416 04:42:36.360204 2989 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d" Apr 16 04:42:42.359676 kubelet[2989]: E0416 04:42:42.357010 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:42:42.359676 kubelet[2989]: E0416 04:42:40.131361 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:42:42.451303 kubelet[2989]: E0416 04:42:41.547025 2989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="661aacf61b27dbeb7414ee44841cd3ce" Apr 16 04:42:42.791720 kubelet[2989]: E0416 04:42:41.981131 2989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="185b4bd2fb947057f02f1a819bbd3411" Apr 16 04:42:42.852388 kubelet[2989]: E0416 04:42:42.849141 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:42:43.955283 containerd[1593]: time="2026-04-16T04:42:43.929771506Z" level=error msg="failed to shutdown shim task and the shim might be leaked" error="context deadline exceeded: unknown" id=b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 Apr 16 04:42:44.844652 kubelet[2989]: E0416 04:42:44.794383 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:42:45.510196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60-rootfs.mount: Deactivated successfully. 
Apr 16 04:42:47.150280 kubelet[2989]: E0416 04:42:44.246303 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:42:50.102185 containerd[1593]: time="2026-04-16T04:42:44.839699350Z" level=info msg="shim disconnected" id=456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60 namespace=k8s.io Apr 16 04:42:50.767844 containerd[1593]: time="2026-04-16T04:42:50.497034664Z" level=warning msg="cleaning up after shim disconnected" id=456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60 namespace=k8s.io Apr 16 04:42:50.883983 kubelet[2989]: E0416 04:42:50.845641 2989 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-scheduler,Image:registry.k8s.io/kube-scheduler:v1.33.8,Command:[kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/scheduler.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 10259 
},Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-scheduler-localhost_kube-system(ae88e85786a13701eebaf6993fb55ff4): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 16 04:42:51.161377 kubelet[2989]: E0416 04:42:51.088998 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:42:51.161377 kubelet[2989]: E0416 04:42:51.116649 2989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="ae88e85786a13701eebaf6993fb55ff4" Apr 16 04:42:51.748333 containerd[1593]: time="2026-04-16T04:42:44.702333129Z" level=error msg="Failed to pipe stdout of container \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\"" error="read /proc/self/fd/33: file already closed" Apr 16 04:42:52.250249 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d-rootfs.mount: Deactivated successfully. Apr 16 04:42:52.778823 containerd[1593]: time="2026-04-16T04:42:51.774396862Z" level=error msg="Failed to pipe stdout of container \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\"" error="reading from a closed fifo" Apr 16 04:42:53.817298 containerd[1593]: time="2026-04-16T04:42:51.695290663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:42:54.472776 containerd[1593]: time="2026-04-16T04:42:53.918329884Z" level=error msg="Failed to pipe stderr of container \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\"" error="reading from a closed fifo" Apr 16 04:42:55.174653 containerd[1593]: time="2026-04-16T04:42:50.080385646Z" level=info msg="shim disconnected" id=768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d namespace=k8s.io Apr 16 04:42:55.174653 containerd[1593]: time="2026-04-16T04:42:48.264232565Z" level=info msg="shim disconnected" id=b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 namespace=k8s.io Apr 16 04:42:56.472811 containerd[1593]: time="2026-04-16T04:42:48.264546075Z" level=error msg="Failed to pipe stderr of container \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\"" error="reading from a closed fifo" Apr 16 04:42:57.389950 containerd[1593]: time="2026-04-16T04:42:56.417104438Z" level=warning msg="cleaning up after shim disconnected" id=768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d namespace=k8s.io Apr 16 04:42:57.953312 containerd[1593]: time="2026-04-16T04:42:57.539314158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:42:58.693897 containerd[1593]: time="2026-04-16T04:42:57.945190277Z" level=error msg="StartContainer for \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\" failed" error="failed to create containerd task: failed to 
create shim task: context deadline exceeded: unknown" Apr 16 04:42:58.814178 kubelet[2989]: E0416 04:42:56.396087 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:42:59.374072 containerd[1593]: time="2026-04-16T04:42:56.182030253Z" level=warning msg="cleaning up after shim disconnected" id=b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 namespace=k8s.io Apr 16 04:43:00.030524 kubelet[2989]: E0416 04:42:58.287722 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:43:02.211921 containerd[1593]: time="2026-04-16T04:42:54.690065311Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 Apr 16 04:43:02.906145 containerd[1593]: time="2026-04-16T04:42:51.774404775Z" level=error msg="StartContainer for \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\" failed" error="failed to create containerd task: failed to create shim task: context deadline exceeded: unknown" Apr 16 04:43:02.906145 containerd[1593]: 
time="2026-04-16T04:43:01.345000928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:43:03.052286 kubelet[2989]: E0416 04:43:02.595142 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 16 04:43:03.205109 containerd[1593]: time="2026-04-16T04:42:58.791800603Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60 delete" error="signal: killed" namespace=k8s.io
Apr 16 04:43:03.554077 containerd[1593]: time="2026-04-16T04:43:03.192390821Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60 namespace=k8s.io
Apr 16 04:43:03.791181 kubelet[2989]: E0416 04:43:03.537271 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 16 04:43:03.936117 containerd[1593]: time="2026-04-16T04:43:03.019394223Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 delete" error="context deadline exceeded" namespace=k8s.io
Apr 16 04:43:04.485834 kubelet[2989]: E0416 04:43:04.209009 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 16 04:43:05.192382 kubelet[2989]: E0416 04:43:05.172414 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:43:05.394141 containerd[1593]: time="2026-04-16T04:43:03.584691837Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d delete" error="signal: killed" namespace=k8s.io
Apr 16 04:43:05.866884 containerd[1593]: time="2026-04-16T04:43:04.375928906Z" level=error msg="Failed to pipe stdout of container \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\"" error="reading from a closed fifo"
Apr 16 04:43:05.961170 containerd[1593]: time="2026-04-16T04:43:05.612350443Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d namespace=k8s.io
Apr 16 04:43:06.077625 kubelet[2989]: E0416 04:43:05.292268 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 16 04:43:06.417676 containerd[1593]: time="2026-04-16T04:43:04.929135658Z" level=error msg="Failed to pipe stderr of container \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\"" error="reading from a closed fifo"
Apr 16 04:43:06.736244 containerd[1593]: time="2026-04-16T04:43:05.112752633Z" level=warning msg="failed to clean up after shim disconnected" error=": context deadline exceeded" id=b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799 namespace=k8s.io
Apr 16 04:43:06.736244 containerd[1593]: time="2026-04-16T04:43:06.717250507Z" level=error msg="StartContainer for \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\" failed" error="failed to create containerd task: failed to create shim task: context deadline exceeded: unknown"
Apr 16 04:43:07.283413 kubelet[2989]: E0416 04:43:07.275195 2989 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
Apr 16 04:43:09.335166 kubelet[2989]: E0416 04:43:09.323162 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:43:10.256033 kubelet[2989]: E0416 04:43:09.893404 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:43:13.118549 kubelet[2989]: I0416 04:43:12.360063 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:43:18.205568 kubelet[2989]: E0416 04:43:18.190669 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:43:19.268075 kubelet[2989]: E0416 04:43:15.859743 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:43:19.577969 kubelet[2989]: E0416 04:43:18.456558 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:43:19.815341 kubelet[2989]: E0416 04:43:19.808210 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost"
Apr 16 04:43:20.248249 kubelet[2989]: E0416 04:43:20.246220 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:43:21.058836 kubelet[2989]: E0416 04:43:20.700116 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:43:23.301265 kubelet[2989]: E0416 04:43:23.280174 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:43:24.021365 kubelet[2989]: E0416 04:43:23.418235 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:43:32.192927 kubelet[2989]: E0416 04:43:32.188388 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:43:35.041283 kubelet[2989]: E0416 04:43:35.037016 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:43:37.281944 kubelet[2989]: I0416 04:43:37.276242 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:43:38.409906 kubelet[2989]: E0416 04:43:38.372510 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:43:42.057208 kubelet[2989]: E0416 04:43:42.050753 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost"
Apr 16 04:43:44.139151 kubelet[2989]: E0416 04:43:44.136599 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:43:44.553750 kubelet[2989]: E0416 04:43:44.194266 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:43:46.311431 kubelet[2989]: I0416 04:43:46.300408 2989 scope.go:117] "RemoveContainer" containerID="768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d"
Apr 16 04:43:47.251689 kubelet[2989]: E0416 04:43:47.249824 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:43:47.986316 kubelet[2989]: E0416 04:43:47.577471 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:43:51.199186 kubelet[2989]: E0416 04:43:51.193982 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:43:53.917276 kubelet[2989]: E0416 04:43:53.305420 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:43:56.111638 kubelet[2989]: E0416 04:43:56.049689 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:43:56.949781 kubelet[2989]: I0416 04:43:56.949278 2989 scope.go:117] "RemoveContainer" containerID="b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799"
Apr 16 04:43:56.968033 kubelet[2989]: E0416 04:43:56.950415 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:43:58.022256 kubelet[2989]: E0416 04:43:58.002317 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:44:01.092477 kubelet[2989]: I0416 04:44:01.061540 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:44:02.553314 containerd[1593]: time="2026-04-16T04:44:02.042425834Z" level=info msg="CreateContainer within sandbox \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 16 04:44:03.423519 kubelet[2989]: E0416 04:44:02.546333 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:44:05.765427 kubelet[2989]: E0416 04:44:05.755430 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:44:08.718707 kubelet[2989]: E0416 04:44:08.718021 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:44:09.356291 kubelet[2989]: E0416 04:44:06.984604 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:44:10.780155 kubelet[2989]: E0416 04:44:10.761077 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:44:11.277748 kubelet[2989]: E0416 04:44:10.605314 2989 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a6bc4cf99553ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,LastTimestamp:2026-04-16 04:35:41.835088814 +0000 UTC m=+118.133590624,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:44:12.072321 kubelet[2989]: E0416 04:44:11.808122 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost"
Apr 16 04:44:16.362359 kubelet[2989]: E0416 04:44:16.178557 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:44:16.763299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904794530.mount: Deactivated successfully.
Apr 16 04:44:20.361824 kubelet[2989]: E0416 04:44:13.942137 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:44:21.961693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764609528.mount: Deactivated successfully.
Apr 16 04:44:24.903529 kubelet[2989]: E0416 04:44:23.857367 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:44:26.060812 containerd[1593]: time="2026-04-16T04:44:25.938397300Z" level=info msg="CreateContainer within sandbox \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4\""
Apr 16 04:44:27.240165 kubelet[2989]: E0416 04:44:26.765983 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:44:32.276803 kubelet[2989]: E0416 04:44:32.264465 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:44:33.080166 kubelet[2989]: I0416 04:44:32.854423 2989 scope.go:117] "RemoveContainer" containerID="768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d"
Apr 16 04:44:33.238842 containerd[1593]: time="2026-04-16T04:44:32.374826987Z" level=info msg="StartContainer for \"c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4\""
Apr 16 04:44:34.001933 containerd[1593]: time="2026-04-16T04:44:33.986414426Z" level=info msg="CreateContainer within sandbox \"cb270ce5cb4d49fb60eb07b4d238db4eb52c50b320223952fa94f8a5f91de0b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
Apr 16 04:44:35.352389 kubelet[2989]: E0416 04:44:32.905235 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:44:38.551363 kubelet[2989]: E0416 04:44:37.663997 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:44:40.162489 kubelet[2989]: E0416 04:44:40.147890 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:44:41.196354 kubelet[2989]: E0416 04:44:41.166265 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:44:41.985266 kubelet[2989]: E0416 04:44:41.982165 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:44:42.513389 kubelet[2989]: I0416 04:44:42.079076 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:44:44.312286 kubelet[2989]: I0416 04:44:44.302180 2989 scope.go:117] "RemoveContainer" containerID="456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60"
Apr 16 04:44:45.943360 kubelet[2989]: E0416 04:44:45.938369 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:44:48.135921 kubelet[2989]: E0416 04:44:46.906878 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost"
Apr 16 04:44:52.198763 kubelet[2989]: E0416 04:44:52.157794 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:44:57.424824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499016841.mount: Deactivated successfully.
Apr 16 04:45:03.237973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753530007.mount: Deactivated successfully.
Apr 16 04:45:05.253922 kubelet[2989]: E0416 04:45:05.253816 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:45:05.289215 kubelet[2989]: E0416 04:45:04.783796 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:45:05.289215 kubelet[2989]: E0416 04:44:55.787971 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:45:05.766554 kubelet[2989]: E0416 04:45:05.764129 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:45:07.689924 kubelet[2989]: E0416 04:45:07.670139 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:45:08.416876 kubelet[2989]: E0416 04:45:07.467932 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:45:08.760860 containerd[1593]: time="2026-04-16T04:45:08.522987047Z" level=info msg="CreateContainer within sandbox \"cb270ce5cb4d49fb60eb07b4d238db4eb52c50b320223952fa94f8a5f91de0b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"13ad8f69d1c5dc31f6de3cdb96c5b08f1abebb618c5c149f2d2ddabff8cb35e9\""
Apr 16 04:45:10.848203 containerd[1593]: time="2026-04-16T04:45:10.803178300Z" level=info msg="RemoveContainer for \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\""
Apr 16 04:45:16.384008 containerd[1593]: time="2026-04-16T04:45:16.352295038Z" level=info msg="StartContainer for \"13ad8f69d1c5dc31f6de3cdb96c5b08f1abebb618c5c149f2d2ddabff8cb35e9\""
Apr 16 04:45:21.562834 kubelet[2989]: E0416 04:45:20.906821 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:45:22.498331 containerd[1593]: time="2026-04-16T04:45:22.470268152Z" level=info msg="RemoveContainer for \"768bbe1aeef8b840562b2eb456eada77f62ad68509019eb75ae3ded3cd21509d\" returns successfully"
Apr 16 04:45:24.738910 kubelet[2989]: E0416 04:45:23.273978 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:45:25.883380 kubelet[2989]: E0416 04:45:25.881350 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:45:27.257701 kubelet[2989]: E0416 04:45:25.596802 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:45:31.990896 kubelet[2989]: E0416 04:45:30.844334 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:45:35.594613 kubelet[2989]: I0416 04:45:35.592161 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:45:37.271050 kubelet[2989]: E0416 04:45:37.269185 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:45:38.180510 kubelet[2989]: E0416 04:45:38.114793 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:45:38.765135 containerd[1593]: time="2026-04-16T04:45:38.763702134Z" level=info msg="CreateContainer within sandbox \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 04:45:39.384106 kubelet[2989]: E0416 04:45:39.240779 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:45:42.339900 kubelet[2989]: E0416 04:45:42.058355 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost"
Apr 16 04:45:44.282221 kubelet[2989]: E0416 04:45:41.391710 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:45:48.062409 kubelet[2989]: E0416 04:45:47.783160 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:45:52.320134 kubelet[2989]: E0416 04:45:51.614206 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:45:54.100380 kubelet[2989]: E0416 04:45:54.083780 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:45:58.554745 kubelet[2989]: E0416 04:45:57.850895 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:46:01.855668 kubelet[2989]: E0416 04:46:01.690511 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:46:02.590924 kubelet[2989]: E0416 04:46:01.992261 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:46:03.297080 containerd[1593]: time="2026-04-16T04:46:02.345136797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:46:04.070726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994533314.mount: Deactivated successfully.
Apr 16 04:46:04.242736 containerd[1593]: time="2026-04-16T04:46:03.816226656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:46:04.380470 kubelet[2989]: E0416 04:46:04.355483 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:46:04.894972 containerd[1593]: time="2026-04-16T04:46:04.358294765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:46:06.584328 kubelet[2989]: I0416 04:46:06.581637 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:46:06.994073 kubelet[2989]: E0416 04:46:06.582310 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:46:07.136016 containerd[1593]: time="2026-04-16T04:46:06.715331982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:46:08.173312 kubelet[2989]: E0416 04:46:08.153263 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost"
Apr 16 04:46:11.270661 kubelet[2989]: E0416 04:46:11.256085 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:46:12.271716 kubelet[2989]: E0416 04:46:12.269342 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:46:13.076071 kubelet[2989]: E0416 04:46:12.673149 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s"
Apr 16 04:46:14.988175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952187482.mount: Deactivated successfully.
Apr 16 04:46:18.176712 kubelet[2989]: E0416 04:46:18.174238 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:46:20.371973 containerd[1593]: time="2026-04-16T04:46:20.348782993Z" level=info msg="CreateContainer within sandbox \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458\"" Apr 16 04:46:20.613117 kubelet[2989]: E0416 04:46:20.605199 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:46:25.650070 kubelet[2989]: I0416 04:46:25.649609 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:46:26.167718 containerd[1593]: time="2026-04-16T04:46:26.082388989Z" level=info msg="StartContainer for \"4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458\"" Apr 16 04:46:27.778507 kubelet[2989]: E0416 04:46:25.763231 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC 
m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:46:29.089833 kubelet[2989]: E0416 04:46:29.084046 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:46:30.758007 kubelet[2989]: E0416 04:46:30.350539 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:46:31.492954 kubelet[2989]: E0416 04:46:31.162140 2989 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4" Apr 16 04:46:32.265911 kubelet[2989]: E0416 04:46:32.257359 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:46:32.569848 kubelet[2989]: E0416 04:46:32.262361 2989 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-scheduler,Image:registry.k8s.io/kube-scheduler:v1.33.8,Command:[kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/scheduler.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:180,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-scheduler-localhost_kube-system(ae88e85786a13701eebaf6993fb55ff4): RunContainerError: context deadline exceeded" logger="UnhandledError" Apr 16 04:46:32.859343 kubelet[2989]: E0416 04:46:32.003997 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:46:33.606863 kubelet[2989]: E0416 
04:46:33.489122 2989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with RunContainerError: \"context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="ae88e85786a13701eebaf6993fb55ff4" Apr 16 04:46:38.075768 kubelet[2989]: E0416 04:46:38.075628 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:46:38.888851 containerd[1593]: time="2026-04-16T04:46:38.074296578Z" level=info msg="shim disconnected" id=c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4 namespace=k8s.io Apr 16 04:46:39.363250 containerd[1593]: time="2026-04-16T04:46:39.293779971Z" level=warning msg="cleaning up after shim disconnected" id=c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4 namespace=k8s.io Apr 16 04:46:39.547168 containerd[1593]: time="2026-04-16T04:46:39.542698027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:46:40.203986 containerd[1593]: time="2026-04-16T04:46:39.884052570Z" level=error msg="Failed to pipe stderr of container \"c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4\"" error="reading from a closed fifo" Apr 16 04:46:40.801634 containerd[1593]: time="2026-04-16T04:46:40.101235379Z" level=error msg="Failed to pipe stdout of container \"c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4\"" error="reading from a closed fifo" Apr 16 04:46:40.489915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4-rootfs.mount: Deactivated successfully. 
Apr 16 04:46:41.479147 containerd[1593]: time="2026-04-16T04:46:41.455265678Z" level=error msg="StartContainer for \"c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4\" failed" error="failed to create containerd task: failed to create shim task: context canceled: unknown" Apr 16 04:46:45.347259 containerd[1593]: time="2026-04-16T04:46:45.342857885Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4 delete" error="signal: killed" namespace=k8s.io Apr 16 04:46:45.872296 containerd[1593]: time="2026-04-16T04:46:45.858208377Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4 namespace=k8s.io Apr 16 04:46:46.362896 kubelet[2989]: E0416 04:46:45.888270 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:46:46.606306 kubelet[2989]: E0416 04:46:46.548407 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:46:46.606306 kubelet[2989]: E0416 04:46:45.853848 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:46:48.158977 kubelet[2989]: E0416 04:46:48.158347 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:46:48.158977 kubelet[2989]: E0416 04:46:48.159297 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:46:48.595867 kubelet[2989]: I0416 04:46:48.552201 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:46:49.455396 kubelet[2989]: I0416 04:46:49.419247 2989 scope.go:117] "RemoveContainer" containerID="456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60" Apr 16 04:46:49.958297 kubelet[2989]: E0416 04:46:49.457424 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:46:53.307183 containerd[1593]: time="2026-04-16T04:46:52.903082486Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:46:53.697209 containerd[1593]: time="2026-04-16T04:46:53.430009975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:46:54.145199 containerd[1593]: time="2026-04-16T04:46:53.866014844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:46:54.294642 containerd[1593]: time="2026-04-16T04:46:54.290318538Z" level=info msg="RemoveContainer for \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\"" Apr 16 04:46:55.071853 containerd[1593]: time="2026-04-16T04:46:55.013360059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:46:55.317981 kubelet[2989]: E0416 04:46:55.096246 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="7s" Apr 16 04:46:56.810178 kubelet[2989]: E0416 04:46:56.808173 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:46:56.946061 kubelet[2989]: E0416 04:46:56.651299 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:46:56.946061 kubelet[2989]: E0416 04:46:56.945478 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:46:56.953947 containerd[1593]: time="2026-04-16T04:46:56.943502498Z" level=info msg="RemoveContainer for \"456e8d142dfdd65f4acfda0405c9188e421124b6a84c4567eb7354d239050b60\" returns successfully" Apr 16 04:46:57.112724 kubelet[2989]: I0416 04:46:57.095003 2989 scope.go:117] "RemoveContainer" containerID="b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799" Apr 16 04:46:57.415084 kubelet[2989]: E0416 04:46:57.372050 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:46:57.423995 kubelet[2989]: I0416 04:46:57.423597 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:46:57.424204 containerd[1593]: time="2026-04-16T04:46:57.424029456Z" level=info msg="RemoveContainer for \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\"" Apr 16 04:46:57.424641 kubelet[2989]: E0416 04:46:57.424604 2989 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 16 04:46:57.711152 containerd[1593]: time="2026-04-16T04:46:57.707748301Z" level=info msg="RemoveContainer for \"b74bf9037374eb3c41f39f95c5076b6f707d9610efaf9dc96b2502f48bbe3799\" returns successfully" Apr 16 04:46:57.770175 kubelet[2989]: E0416 04:46:57.768225 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:46:57.770175 kubelet[2989]: I0416 04:46:57.771521 2989 scope.go:117] "RemoveContainer" containerID="c27a2b324b88b3b33bd2482267d17e8f72ea67b4372cbaa24dcaec665df621e4" Apr 16 04:46:57.891844 kubelet[2989]: E0416 04:46:57.776242 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:46:58.001087 systemd[1]: run-containerd-runc-k8s.io-13ad8f69d1c5dc31f6de3cdb96c5b08f1abebb618c5c149f2d2ddabff8cb35e9-runc.giQfSh.mount: Deactivated successfully. Apr 16 04:46:58.220228 containerd[1593]: time="2026-04-16T04:46:58.219398731Z" level=info msg="CreateContainer within sandbox \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 16 04:46:58.386955 containerd[1593]: time="2026-04-16T04:46:58.348646503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:46:58.386955 containerd[1593]: time="2026-04-16T04:46:58.349181264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:46:58.386955 containerd[1593]: time="2026-04-16T04:46:58.349192355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:46:58.466690 containerd[1593]: time="2026-04-16T04:46:58.427879057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:46:59.055202 containerd[1593]: time="2026-04-16T04:46:59.055043031Z" level=info msg="CreateContainer within sandbox \"bf92b4aec6672d6557d10c26664f1f22ea031577ea6fb7d55a5f42ea7b7a6d16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"5e4213911f440210d7b28ae4c0e8617cfdec6359c05fe7ab5a1059f34406a663\"" Apr 16 04:46:59.305187 containerd[1593]: time="2026-04-16T04:46:59.304981397Z" level=info msg="StartContainer for \"5e4213911f440210d7b28ae4c0e8617cfdec6359c05fe7ab5a1059f34406a663\"" Apr 16 04:46:59.583888 containerd[1593]: time="2026-04-16T04:46:59.580980262Z" level=info msg="StartContainer for \"13ad8f69d1c5dc31f6de3cdb96c5b08f1abebb618c5c149f2d2ddabff8cb35e9\" returns successfully" Apr 16 04:46:59.755710 containerd[1593]: time="2026-04-16T04:46:59.749808075Z" level=info msg="StartContainer for \"4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458\" returns successfully" Apr 16 04:46:59.769726 kubelet[2989]: E0416 04:46:59.769390 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:46:59.769726 kubelet[2989]: E0416 04:46:59.769547 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:46:59.817806 containerd[1593]: time="2026-04-16T04:46:59.816856770Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:46:59.817806 containerd[1593]: time="2026-04-16T04:46:59.816927654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:46:59.817806 containerd[1593]: time="2026-04-16T04:46:59.816940633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:46:59.817806 containerd[1593]: time="2026-04-16T04:46:59.817013544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:47:00.524664 containerd[1593]: time="2026-04-16T04:47:00.518722496Z" level=info msg="StartContainer for \"5e4213911f440210d7b28ae4c0e8617cfdec6359c05fe7ab5a1059f34406a663\" returns successfully" Apr 16 04:47:01.408289 kubelet[2989]: E0416 04:47:01.403306 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:01.494419 kubelet[2989]: E0416 04:47:01.403743 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:01.623691 kubelet[2989]: E0416 04:47:01.623200 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:01.625859 kubelet[2989]: E0416 04:47:01.625563 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:01.798788 kubelet[2989]: E0416 04:47:01.791837 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Apr 16 04:47:01.798788 kubelet[2989]: E0416 04:47:01.797328 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:02.951075 kubelet[2989]: E0416 04:47:02.949114 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:02.962122 kubelet[2989]: E0416 04:47:02.953076 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:02.962122 kubelet[2989]: E0416 04:47:02.956872 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:02.962122 kubelet[2989]: E0416 04:47:02.957234 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:02.962122 kubelet[2989]: E0416 04:47:02.957889 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:02.962122 kubelet[2989]: E0416 04:47:02.958020 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:04.454935 kubelet[2989]: E0416 04:47:04.453804 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:04.465022 kubelet[2989]: E0416 04:47:04.464107 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:04.507032 kubelet[2989]: I0416 04:47:04.505182 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:47:04.872144 kubelet[2989]: E0416 04:47:04.863007 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:04.898981 kubelet[2989]: E0416 04:47:04.898270 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:06.961122 kubelet[2989]: E0416 04:47:06.960404 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:47:10.356374 kubelet[2989]: E0416 04:47:10.351034 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:10.372369 kubelet[2989]: E0416 04:47:10.360310 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:12.243336 kubelet[2989]: E0416 04:47:12.240234 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:47:14.595134 kubelet[2989]: E0416 04:47:14.594059 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 04:47:15.048825 kubelet[2989]: E0416 04:47:15.048163 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:47:15.056752 kubelet[2989]: E0416 04:47:15.050743 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:47:16.965978 kubelet[2989]: E0416 04:47:16.965522 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:47:16.965978 kubelet[2989]: E0416 04:47:16.965654 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:47:21.738865 kubelet[2989]: I0416 04:47:21.738228 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:47:28.028584 kubelet[2989]: E0416 04:47:27.850574 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:47:36.333635 systemd-journald[1155]: Under memory pressure, flushing caches. Apr 16 04:47:36.149307 systemd-resolved[1469]: Under memory pressure, flushing caches. 
Apr 16 04:47:36.149378 systemd-resolved[1469]: Flushed all caches.
Apr 16 04:47:44.138945 kubelet[2989]: E0416 04:47:44.138057 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 16 04:47:44.289115 kubelet[2989]: E0416 04:47:44.282195 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 16 04:47:44.289115 kubelet[2989]: E0416 04:47:44.284663 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:47:44.560050 kubelet[2989]: E0416 04:47:44.554896 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:47:44.560050 kubelet[2989]: E0416 04:47:44.556003 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:52.349195 kubelet[2989]: I0416 04:47:52.320463 2989 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:47:54.010973 kubelet[2989]: E0416 04:47:53.976255 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:47:54.027306 kubelet[2989]: E0416 04:47:54.011691 2989 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a6bc4eeee2e95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,LastTimestamp:2026-04-16 04:35:50.245558619 +0000 UTC m=+126.544060426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:47:54.149378 kubelet[2989]: E0416 04:47:54.148899 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:47:54.318542 kubelet[2989]: E0416 04:47:54.302537 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:47:54.319696 kubelet[2989]: E0416 04:47:54.316395 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:47:54.319993 kubelet[2989]: E0416 04:47:54.319792 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:47:56.216886 kubelet[2989]: E0416 04:47:56.207694 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:47:57.129576 kubelet[2989]: E0416 04:47:57.129055 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:48:01.574678 kubelet[2989]: E0416 04:48:01.570176 2989 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 16 04:48:03.870886 systemd-journald[1155]: Under memory pressure, flushing caches.
Apr 16 04:48:03.850336 systemd-resolved[1469]: Under memory pressure, flushing caches.
Apr 16 04:48:03.851108 systemd-resolved[1469]: Flushed all caches.
Apr 16 04:48:04.750148 kubelet[2989]: E0416 04:48:04.701984 2989 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6bc4eeeedccce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:35:50.246272206 +0000 UTC m=+126.544774014,LastTimestamp:2026-04-16 04:35:50.246272206 +0000 UTC m=+126.544774014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:48:05.997555 systemd-journald[1155]: Under memory pressure, flushing caches.
Apr 16 04:48:06.300074 systemd-resolved[1469]: Under memory pressure, flushing caches.
Apr 16 04:48:06.449581 systemd-resolved[1469]: Flushed all caches.
Apr 16 04:48:07.405248 kubelet[2989]: E0416 04:48:07.368641 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:48:07.799427 kubelet[2989]: I0416 04:48:07.422716 2989 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 16 04:48:07.799427 kubelet[2989]: E0416 04:48:07.422798 2989 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 16 04:48:08.255571 systemd-journald[1155]: Under memory pressure, flushing caches.
Apr 16 04:48:07.853735 systemd-resolved[1469]: Under memory pressure, flushing caches.
Apr 16 04:48:07.853748 systemd-resolved[1469]: Flushed all caches.
Apr 16 04:48:10.054017 kubelet[2989]: E0416 04:48:10.053737 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:48:10.196187 kubelet[2989]: E0416 04:48:10.194012 2989 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 16 04:48:13.822652 kubelet[2989]: E0416 04:48:13.804920 2989 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:48:17.438688 kubelet[2989]: E0416 04:48:17.430401 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:48:18.945338 kubelet[2989]: E0416 04:48:18.944388 2989 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:48:20.423863 kubelet[2989]: E0416 04:48:20.419109 2989 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 16 04:48:23.976165 kubelet[2989]: E0416 04:48:23.975560 2989 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:48:24.663229 kubelet[2989]: E0416 04:48:24.650357 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 16 04:48:24.698075 kubelet[2989]: E0416 04:48:24.697487 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:27.446960 kubelet[2989]: E0416 04:48:27.446490 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 04:48:27.974060 kubelet[2989]: I0416 04:48:27.972475 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 16 04:48:28.006720 kubelet[2989]: I0416 04:48:28.004577 2989 apiserver.go:52] "Watching apiserver"
Apr 16 04:48:28.448819 kubelet[2989]: I0416 04:48:28.426282 2989 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 04:48:29.107323 kubelet[2989]: I0416 04:48:29.076858 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:29.228418 kubelet[2989]: E0416 04:48:29.199574 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:29.742291 kubelet[2989]: I0416 04:48:29.742089 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 16 04:48:30.290818 kubelet[2989]: E0416 04:48:29.945881 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:30.322347 kubelet[2989]: E0416 04:48:30.320587 2989 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:48:32.576242 kubelet[2989]: E0416 04:48:32.570987 2989 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.098s"
Apr 16 04:48:32.684424 kubelet[2989]: E0416 04:48:32.669269 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:33.369840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458-rootfs.mount: Deactivated successfully.
Apr 16 04:48:33.546177 containerd[1593]: time="2026-04-16T04:48:33.545139218Z" level=info msg="shim disconnected" id=4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458 namespace=k8s.io
Apr 16 04:48:33.546177 containerd[1593]: time="2026-04-16T04:48:33.546066357Z" level=warning msg="cleaning up after shim disconnected" id=4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458 namespace=k8s.io
Apr 16 04:48:33.546177 containerd[1593]: time="2026-04-16T04:48:33.546080634Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:48:34.146605 kubelet[2989]: I0416 04:48:34.146310 2989 scope.go:117] "RemoveContainer" containerID="4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458"
Apr 16 04:48:34.169232 kubelet[2989]: E0416 04:48:34.165354 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:34.352370 kubelet[2989]: E0416 04:48:34.346966 2989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(661aacf61b27dbeb7414ee44841cd3ce)\"" pod="kube-system/kube-controller-manager-localhost" podUID="661aacf61b27dbeb7414ee44841cd3ce"
Apr 16 04:48:34.944060 kubelet[2989]: I0416 04:48:34.914784 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.911491074 podStartE2EDuration="6.911491074s" podCreationTimestamp="2026-04-16 04:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:48:34.9042657 +0000 UTC m=+891.202767496" watchObservedRunningTime="2026-04-16 04:48:34.911491074 +0000 UTC m=+891.209992875"
Apr 16 04:48:35.167830 kubelet[2989]: I0416 04:48:35.167411 2989 scope.go:117] "RemoveContainer" containerID="4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458"
Apr 16 04:48:35.167830 kubelet[2989]: E0416 04:48:35.168085 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:35.169751 kubelet[2989]: E0416 04:48:35.168485 2989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(661aacf61b27dbeb7414ee44841cd3ce)\"" pod="kube-system/kube-controller-manager-localhost" podUID="661aacf61b27dbeb7414ee44841cd3ce"
Apr 16 04:48:35.414648 kubelet[2989]: I0416 04:48:35.398622 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.398543929 podStartE2EDuration="5.398543929s" podCreationTimestamp="2026-04-16 04:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:48:35.398301912 +0000 UTC m=+891.696803726" watchObservedRunningTime="2026-04-16 04:48:35.398543929 +0000 UTC m=+891.697045737"
Apr 16 04:48:35.507818 kubelet[2989]: E0416 04:48:35.453361 2989 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:48:38.587427 systemd[1]: Reloading requested from client PID 3505 ('systemctl') (unit session-7.scope)...
Apr 16 04:48:38.600246 systemd[1]: Reloading...
Apr 16 04:48:39.790599 zram_generator::config[3544]: No configuration found.
Apr 16 04:48:40.632643 kubelet[2989]: E0416 04:48:40.630717 2989 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:48:41.531690 kubelet[2989]: E0416 04:48:41.530985 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:42.570882 kubelet[2989]: E0416 04:48:42.570691 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:43.557584 kubelet[2989]: E0416 04:48:43.557367 2989 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.21s"
Apr 16 04:48:43.568263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:48:43.620419 kubelet[2989]: E0416 04:48:43.613901 2989 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:44.831646 systemd[1]: Reloading finished in 6223 ms.
Apr 16 04:48:45.282480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:48:45.345498 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:48:45.346354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:48:45.511717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:48:47.999532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:48:48.072300 (kubelet)[3602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 04:48:49.740813 kubelet[3602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:48:49.740813 kubelet[3602]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 04:48:49.740813 kubelet[3602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:48:49.740813 kubelet[3602]: I0416 04:48:49.738474 3602 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 04:48:49.906235 kubelet[3602]: I0416 04:48:49.905049 3602 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 04:48:49.906235 kubelet[3602]: I0416 04:48:49.905250 3602 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 04:48:49.934891 kubelet[3602]: I0416 04:48:49.920965 3602 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 04:48:50.000803 kubelet[3602]: I0416 04:48:49.998050 3602 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 04:48:50.105469 kubelet[3602]: I0416 04:48:50.105041 3602 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 04:48:50.604970 kubelet[3602]: E0416 04:48:50.595995 3602 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 04:48:50.604970 kubelet[3602]: I0416 04:48:50.605194 3602 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 16 04:48:50.916717 kubelet[3602]: I0416 04:48:50.915961 3602 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 04:48:50.996725 kubelet[3602]: I0416 04:48:50.981313 3602 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 04:48:51.001889 kubelet[3602]: I0416 04:48:50.982769 3602 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 16 04:48:51.017339 kubelet[3602]: I0416 04:48:51.003991 3602 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 04:48:51.017339 kubelet[3602]: I0416 04:48:51.016744 3602 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 04:48:51.018497 kubelet[3602]: I0416 04:48:51.018421 3602 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 04:48:51.020246 kubelet[3602]: I0416 04:48:51.020188 3602 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 04:48:51.020246 kubelet[3602]: I0416 04:48:51.020245 3602 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 04:48:51.020672 kubelet[3602]: I0416 04:48:51.020297 3602 kubelet.go:386] "Adding apiserver pod source"
Apr 16 04:48:51.084289 kubelet[3602]: I0416 04:48:51.083778 3602 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 04:48:51.322284 kubelet[3602]: I0416 04:48:51.315760 3602 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 16 04:48:51.342964 kubelet[3602]: I0416 04:48:51.341838 3602 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 04:48:51.539738 kubelet[3602]: I0416 04:48:51.538686 3602 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 04:48:51.539738 kubelet[3602]: I0416 04:48:51.539800 3602 server.go:1289] "Started kubelet"
Apr 16 04:48:51.540721 kubelet[3602]: I0416 04:48:51.540634 3602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 04:48:51.557152 kubelet[3602]: I0416 04:48:51.554360 3602 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 04:48:51.557152 kubelet[3602]: I0416 04:48:51.554925 3602 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 04:48:51.685790 kubelet[3602]: I0416 04:48:51.685757 3602 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 04:48:51.692152 kubelet[3602]: I0416 04:48:51.691481 3602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 04:48:51.700168 kubelet[3602]: I0416 04:48:51.695279 3602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 04:48:51.756045 kubelet[3602]: I0416 04:48:51.755110 3602 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 04:48:51.756045 kubelet[3602]: I0416 04:48:51.755686 3602 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 04:48:51.934936 kubelet[3602]: I0416 04:48:51.934556 3602 factory.go:223] Registration of the systemd container factory successfully
Apr 16 04:48:51.951097 kubelet[3602]: I0416 04:48:51.949905 3602 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 04:48:51.953771 kubelet[3602]: I0416 04:48:51.953739 3602 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 04:48:51.956937 kubelet[3602]: I0416 04:48:51.956871 3602 factory.go:223] Registration of the containerd container factory successfully
Apr 16 04:48:52.099392 kubelet[3602]: I0416 04:48:52.098995 3602 apiserver.go:52] "Watching apiserver"
Apr 16 04:48:52.365651 kubelet[3602]: I0416 04:48:52.364607 3602 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 04:48:52.368845 kubelet[3602]: I0416 04:48:52.368799 3602 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 04:48:52.368914 kubelet[3602]: I0416 04:48:52.368892 3602 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 04:48:52.368943 kubelet[3602]: I0416 04:48:52.368914 3602 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 04:48:52.368943 kubelet[3602]: I0416 04:48:52.368920 3602 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 16 04:48:52.397325 kubelet[3602]: E0416 04:48:52.397184 3602 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 04:48:52.499158 kubelet[3602]: E0416 04:48:52.498953 3602 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 16 04:48:52.643212 sudo[3641]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 16 04:48:52.649623 sudo[3641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696582 3602 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696606 3602 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696627 3602 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696860 3602 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696868 3602 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696882 3602 policy_none.go:49] "None policy: Start"
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696890 3602 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 04:48:52.696753 kubelet[3602]: I0416 04:48:52.696967 3602 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 04:48:52.697784 kubelet[3602]: I0416 04:48:52.697193 3602 state_mem.go:75] "Updated machine memory state"
Apr 16 04:48:52.698706 kubelet[3602]: E0416 04:48:52.698666 3602 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 04:48:52.698952 kubelet[3602]: I0416 04:48:52.698920 3602 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 04:48:52.698989 kubelet[3602]: I0416 04:48:52.698942 3602 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 04:48:52.701982 kubelet[3602]: I0416 04:48:52.701726 3602 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 04:48:52.737829 kubelet[3602]: E0416 04:48:52.731756 3602 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 04:48:52.737829 kubelet[3602]: I0416 04:48:52.732139 3602 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 16 04:48:52.737829 kubelet[3602]: I0416 04:48:52.734751 3602 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.875281 kubelet[3602]: I0416 04:48:52.825603 3602 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 04:48:52.892768 kubelet[3602]: E0416 04:48:52.875294 3602 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.893239 kubelet[3602]: E0416 04:48:52.875326 3602 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 16 04:48:52.910863 kubelet[3602]: I0416 04:48:52.910465 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/185b4bd2fb947057f02f1a819bbd3411-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"185b4bd2fb947057f02f1a819bbd3411\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 04:48:52.910863 kubelet[3602]: I0416 04:48:52.910573 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/185b4bd2fb947057f02f1a819bbd3411-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"185b4bd2fb947057f02f1a819bbd3411\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 04:48:52.910863 kubelet[3602]: I0416 04:48:52.910587 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.910863 kubelet[3602]: I0416 04:48:52.910601 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.910863 kubelet[3602]: I0416 04:48:52.910618 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.912264 kubelet[3602]: I0416 04:48:52.910636 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/185b4bd2fb947057f02f1a819bbd3411-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"185b4bd2fb947057f02f1a819bbd3411\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 04:48:52.912264 kubelet[3602]: I0416 04:48:52.910710 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.912264 kubelet[3602]: I0416 04:48:52.910723 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/661aacf61b27dbeb7414ee44841cd3ce-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"661aacf61b27dbeb7414ee44841cd3ce\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 04:48:52.912264 kubelet[3602]: I0416 04:48:52.910735 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae88e85786a13701eebaf6993fb55ff4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ae88e85786a13701eebaf6993fb55ff4\") " pod="kube-system/kube-scheduler-localhost"
Apr 16 04:48:52.918700 kubelet[3602]: I0416 04:48:52.916704 3602 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 04:48:52.996739 kubelet[3602]: I0416 04:48:52.996500 3602 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 16 04:48:52.999704 kubelet[3602]: I0416 04:48:52.998585 3602 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 16 04:48:53.048061 kubelet[3602]: E0416 04:48:53.047385 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:53.216465 kubelet[3602]: I0416 04:48:53.213827 3602 scope.go:117] "RemoveContainer" containerID="4ef5d13f6fad41c0a7d3203dbea2ec815f7fdb3da824b546f1134b551a9ef458"
Apr 16 04:48:53.220254 kubelet[3602]: E0416 04:48:53.219534 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:53.220254 kubelet[3602]: E0416 04:48:53.219314 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:53.254347 containerd[1593]: time="2026-04-16T04:48:53.254273109Z" level=info msg="CreateContainer within sandbox \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 16 04:48:53.794473 kubelet[3602]: E0416 04:48:53.794332 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:53.794804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290750803.mount: Deactivated successfully.
Apr 16 04:48:53.971276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400118553.mount: Deactivated successfully.
Apr 16 04:48:54.029880 kubelet[3602]: E0416 04:48:53.797588 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:54.041752 containerd[1593]: time="2026-04-16T04:48:54.039818674Z" level=info msg="CreateContainer within sandbox \"07bc43c2fb0e553fec011ab39615348367275dc431736e7fcf3665b8a5254009\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"3c43c9294240403b4632ecac926a20f777e955894e60d70e844397f1c39df0ad\""
Apr 16 04:48:54.073765 containerd[1593]: time="2026-04-16T04:48:54.072182115Z" level=info msg="StartContainer for \"3c43c9294240403b4632ecac926a20f777e955894e60d70e844397f1c39df0ad\""
Apr 16 04:48:55.152115 containerd[1593]: time="2026-04-16T04:48:55.147240739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:48:55.152115 containerd[1593]: time="2026-04-16T04:48:55.147528494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:48:55.152115 containerd[1593]: time="2026-04-16T04:48:55.147572782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:48:55.207810 containerd[1593]: time="2026-04-16T04:48:55.152631451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:48:55.392899 kubelet[3602]: E0416 04:48:55.389328 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:55.582843 kubelet[3602]: E0416 04:48:55.582191 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:56.481671 kubelet[3602]: E0416 04:48:56.481217 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:58.081972 containerd[1593]: time="2026-04-16T04:48:57.977403433Z" level=info msg="StartContainer for \"3c43c9294240403b4632ecac926a20f777e955894e60d70e844397f1c39df0ad\" returns successfully"
Apr 16 04:48:58.768744 sudo[3641]: pam_unix(sudo:session): session closed for user root
Apr 16 04:48:59.166075 kubelet[3602]: E0416 04:48:59.148003 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:00.803992 kubelet[3602]: E0416 04:49:00.801905 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:02.307920 kubelet[3602]: E0416 04:49:02.303020 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:04.042865 kubelet[3602]: E0416 04:49:04.042240 3602 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.484s"
Apr 16 04:49:04.042865 kubelet[3602]: E0416 04:49:04.043014 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:04.074052 kubelet[3602]: E0416 04:49:04.073229 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:12.789646 kubelet[3602]: E0416 04:49:12.789288 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:24.853262 sudo[1807]: pam_unix(sudo:session): session closed for user root
Apr 16 04:49:24.911978 sshd[1785]: pam_unix(sshd:session): session closed for user core
Apr 16 04:49:25.079603 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:46074.service: Deactivated successfully.
Apr 16 04:49:25.116039 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 04:49:25.211597 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit.
Apr 16 04:49:25.316722 systemd-logind[1563]: Removed session 7.
Apr 16 04:49:37.068786 kubelet[3602]: I0416 04:49:37.066230 3602 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 04:49:37.070504 containerd[1593]: time="2026-04-16T04:49:37.070429810Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 04:49:37.145002 kubelet[3602]: I0416 04:49:37.137697 3602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 04:49:37.244850 kubelet[3602]: I0416 04:49:37.240840 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7-kube-proxy\") pod \"kube-proxy-nxbfg\" (UID: \"55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7\") " pod="kube-system/kube-proxy-nxbfg" Apr 16 04:49:37.244850 kubelet[3602]: I0416 04:49:37.241000 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7-xtables-lock\") pod \"kube-proxy-nxbfg\" (UID: \"55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7\") " pod="kube-system/kube-proxy-nxbfg" Apr 16 04:49:37.244850 kubelet[3602]: I0416 04:49:37.241049 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7-lib-modules\") pod \"kube-proxy-nxbfg\" (UID: \"55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7\") " pod="kube-system/kube-proxy-nxbfg" Apr 16 04:49:37.244850 kubelet[3602]: I0416 04:49:37.241067 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mrpn\" (UniqueName: \"kubernetes.io/projected/55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7-kube-api-access-2mrpn\") pod \"kube-proxy-nxbfg\" (UID: \"55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7\") " pod="kube-system/kube-proxy-nxbfg" Apr 16 04:49:37.582183 kubelet[3602]: I0416 04:49:37.581978 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cni-path\") pod \"cilium-vgzf5\" (UID: 
\"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.583554 kubelet[3602]: I0416 04:49:37.583524 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-lib-modules\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.583554 kubelet[3602]: I0416 04:49:37.583553 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-hostproc\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584092 kubelet[3602]: I0416 04:49:37.583567 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-cgroup\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584092 kubelet[3602]: I0416 04:49:37.583578 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-xtables-lock\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584092 kubelet[3602]: I0416 04:49:37.583591 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvsc\" (UniqueName: \"kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-kube-api-access-gwvsc\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584092 kubelet[3602]: I0416 04:49:37.583639 
3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-etc-cni-netd\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584092 kubelet[3602]: I0416 04:49:37.583649 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62367a97-162f-47df-afc7-43e025426f94-clustermesh-secrets\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584183 kubelet[3602]: I0416 04:49:37.583661 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62367a97-162f-47df-afc7-43e025426f94-cilium-config-path\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584183 kubelet[3602]: I0416 04:49:37.583671 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-net\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584183 kubelet[3602]: I0416 04:49:37.583855 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-hubble-tls\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584183 kubelet[3602]: I0416 04:49:37.583872 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-run\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584183 kubelet[3602]: I0416 04:49:37.583884 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-bpf-maps\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.584183 kubelet[3602]: I0416 04:49:37.583895 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-kernel\") pod \"cilium-vgzf5\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") " pod="kube-system/cilium-vgzf5" Apr 16 04:49:37.783592 kubelet[3602]: E0416 04:49:37.783224 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:37.798404 kubelet[3602]: I0416 04:49:37.798171 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a84bdfa6-83f7-4857-9344-aa7e6b030848-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tbrgm\" (UID: \"a84bdfa6-83f7-4857-9344-aa7e6b030848\") " pod="kube-system/cilium-operator-6c4d7847fc-tbrgm" Apr 16 04:49:37.798404 kubelet[3602]: I0416 04:49:37.798328 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gznj\" (UniqueName: \"kubernetes.io/projected/a84bdfa6-83f7-4857-9344-aa7e6b030848-kube-api-access-2gznj\") pod \"cilium-operator-6c4d7847fc-tbrgm\" (UID: \"a84bdfa6-83f7-4857-9344-aa7e6b030848\") " 
pod="kube-system/cilium-operator-6c4d7847fc-tbrgm" Apr 16 04:49:37.919901 containerd[1593]: time="2026-04-16T04:49:37.918360338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxbfg,Uid:55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7,Namespace:kube-system,Attempt:0,}" Apr 16 04:49:38.072769 containerd[1593]: time="2026-04-16T04:49:38.072159456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:49:38.072769 containerd[1593]: time="2026-04-16T04:49:38.072839548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:49:38.072769 containerd[1593]: time="2026-04-16T04:49:38.072862759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:49:38.073906 containerd[1593]: time="2026-04-16T04:49:38.073091661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:49:38.155004 containerd[1593]: time="2026-04-16T04:49:38.154846600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxbfg,Uid:55f8aaf5-0c58-4cd7-acb0-1b0526bb00d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffcabad7335b14f01e821ee6d16ebb039bf9addc499cd57b8ddfc3174a579700\"" Apr 16 04:49:38.155816 kubelet[3602]: E0416 04:49:38.155776 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:38.205008 kubelet[3602]: E0416 04:49:38.198225 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:38.205855 containerd[1593]: time="2026-04-16T04:49:38.198961273Z" level=info msg="CreateContainer within sandbox \"ffcabad7335b14f01e821ee6d16ebb039bf9addc499cd57b8ddfc3174a579700\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 04:49:38.212096 containerd[1593]: time="2026-04-16T04:49:38.211980270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vgzf5,Uid:62367a97-162f-47df-afc7-43e025426f94,Namespace:kube-system,Attempt:0,}" Apr 16 04:49:38.278813 kubelet[3602]: E0416 04:49:38.278248 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:38.311357 containerd[1593]: time="2026-04-16T04:49:38.310948151Z" level=info msg="CreateContainer within sandbox \"ffcabad7335b14f01e821ee6d16ebb039bf9addc499cd57b8ddfc3174a579700\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41a1b3bd5438b8e9bb146cebb3c3d9f596633bd4393261aa52370d36ada68bdc\"" Apr 16 04:49:38.312701 containerd[1593]: time="2026-04-16T04:49:38.311602791Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tbrgm,Uid:a84bdfa6-83f7-4857-9344-aa7e6b030848,Namespace:kube-system,Attempt:0,}" Apr 16 04:49:38.313336 containerd[1593]: time="2026-04-16T04:49:38.313246006Z" level=info msg="StartContainer for \"41a1b3bd5438b8e9bb146cebb3c3d9f596633bd4393261aa52370d36ada68bdc\"" Apr 16 04:49:38.411584 containerd[1593]: time="2026-04-16T04:49:38.410281830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:49:38.411584 containerd[1593]: time="2026-04-16T04:49:38.410522752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:49:38.411584 containerd[1593]: time="2026-04-16T04:49:38.410562556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:49:38.411584 containerd[1593]: time="2026-04-16T04:49:38.410926206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:49:38.495248 containerd[1593]: time="2026-04-16T04:49:38.489693314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:49:38.495248 containerd[1593]: time="2026-04-16T04:49:38.489751837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:49:38.495248 containerd[1593]: time="2026-04-16T04:49:38.489764840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:49:38.495248 containerd[1593]: time="2026-04-16T04:49:38.493949575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:49:38.548824 containerd[1593]: time="2026-04-16T04:49:38.548645829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vgzf5,Uid:62367a97-162f-47df-afc7-43e025426f94,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\"" Apr 16 04:49:38.550785 kubelet[3602]: E0416 04:49:38.550105 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:38.552629 containerd[1593]: time="2026-04-16T04:49:38.552594870Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 16 04:49:38.713122 containerd[1593]: time="2026-04-16T04:49:38.711600409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tbrgm,Uid:a84bdfa6-83f7-4857-9344-aa7e6b030848,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053\"" Apr 16 04:49:38.723470 kubelet[3602]: E0416 04:49:38.721982 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:38.761457 containerd[1593]: time="2026-04-16T04:49:38.751195427Z" level=info msg="StartContainer for \"41a1b3bd5438b8e9bb146cebb3c3d9f596633bd4393261aa52370d36ada68bdc\" returns successfully" Apr 16 04:49:39.863808 kubelet[3602]: E0416 04:49:39.858041 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:40.912722 kubelet[3602]: E0416 04:49:40.911539 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:45.233561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114565704.mount: Deactivated successfully. Apr 16 04:49:51.211978 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:48622.service - OpenSSH per-connection server daemon (10.0.0.1:48622). Apr 16 04:49:51.445041 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 48622 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:49:51.453028 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:49:51.546919 containerd[1593]: time="2026-04-16T04:49:51.544946522Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:49:51.546919 containerd[1593]: time="2026-04-16T04:49:51.545607747Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 16 04:49:51.553134 systemd-logind[1563]: New session 8 of user core. 
Apr 16 04:49:51.555657 containerd[1593]: time="2026-04-16T04:49:51.555556739Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:49:51.557220 containerd[1593]: time="2026-04-16T04:49:51.557028024Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.004394333s" Apr 16 04:49:51.557220 containerd[1593]: time="2026-04-16T04:49:51.557090233Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 16 04:49:51.558841 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 04:49:51.559266 containerd[1593]: time="2026-04-16T04:49:51.559223555Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 16 04:49:51.599331 containerd[1593]: time="2026-04-16T04:49:51.598938711Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 04:49:51.686516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803574851.mount: Deactivated successfully. 
Apr 16 04:49:51.713720 containerd[1593]: time="2026-04-16T04:49:51.713521757Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a\"" Apr 16 04:49:51.719661 containerd[1593]: time="2026-04-16T04:49:51.719527031Z" level=info msg="StartContainer for \"a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a\"" Apr 16 04:49:52.079378 containerd[1593]: time="2026-04-16T04:49:52.078952382Z" level=info msg="StartContainer for \"a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a\" returns successfully" Apr 16 04:49:52.283883 sshd[4060]: pam_unix(sshd:session): session closed for user core Apr 16 04:49:52.306329 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Apr 16 04:49:52.306964 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:48622.service: Deactivated successfully. Apr 16 04:49:52.318700 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 04:49:52.327761 systemd-logind[1563]: Removed session 8. 
Apr 16 04:49:52.527359 kubelet[3602]: E0416 04:49:52.526562 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:52.687639 containerd[1593]: time="2026-04-16T04:49:52.671872873Z" level=info msg="shim disconnected" id=a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a namespace=k8s.io Apr 16 04:49:52.687639 containerd[1593]: time="2026-04-16T04:49:52.678722227Z" level=warning msg="cleaning up after shim disconnected" id=a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a namespace=k8s.io Apr 16 04:49:52.687639 containerd[1593]: time="2026-04-16T04:49:52.678742092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:49:52.682092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a-rootfs.mount: Deactivated successfully. Apr 16 04:49:52.783856 kubelet[3602]: I0416 04:49:52.782016 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nxbfg" podStartSLOduration=16.782000276 podStartE2EDuration="16.782000276s" podCreationTimestamp="2026-04-16 04:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:49:40.171759809 +0000 UTC m=+51.919967771" watchObservedRunningTime="2026-04-16 04:49:52.782000276 +0000 UTC m=+64.530208241" Apr 16 04:49:53.450231 kubelet[3602]: E0416 04:49:53.449905 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:53.471487 containerd[1593]: time="2026-04-16T04:49:53.469543447Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 04:49:53.545384 containerd[1593]: time="2026-04-16T04:49:53.545218713Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13\"" Apr 16 04:49:53.549704 containerd[1593]: time="2026-04-16T04:49:53.549676075Z" level=info msg="StartContainer for \"e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13\"" Apr 16 04:49:53.845864 containerd[1593]: time="2026-04-16T04:49:53.844686128Z" level=info msg="StartContainer for \"e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13\" returns successfully" Apr 16 04:49:53.878681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 04:49:53.884205 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 04:49:53.889202 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 16 04:49:54.055402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 04:49:54.190601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 04:49:54.878856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464626233.mount: Deactivated successfully. Apr 16 04:49:55.515310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13-rootfs.mount: Deactivated successfully. 
Apr 16 04:49:56.003918 containerd[1593]: time="2026-04-16T04:49:56.001829790Z" level=info msg="shim disconnected" id=e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13 namespace=k8s.io Apr 16 04:49:56.003918 containerd[1593]: time="2026-04-16T04:49:56.002987186Z" level=warning msg="cleaning up after shim disconnected" id=e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13 namespace=k8s.io Apr 16 04:49:56.003918 containerd[1593]: time="2026-04-16T04:49:56.003005556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:49:56.117985 kubelet[3602]: E0416 04:49:56.087275 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:56.628742 kubelet[3602]: E0416 04:49:56.624767 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:49:58.818249 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:59360.service - OpenSSH per-connection server daemon (10.0.0.1:59360). Apr 16 04:49:58.951057 containerd[1593]: time="2026-04-16T04:49:58.950949857Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 04:49:59.148740 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 59360 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:49:59.153661 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:49:59.174334 systemd-logind[1563]: New session 9 of user core. Apr 16 04:49:59.211811 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 16 04:49:59.251815 containerd[1593]: time="2026-04-16T04:49:59.251084349Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e\"" Apr 16 04:49:59.257779 containerd[1593]: time="2026-04-16T04:49:59.257741751Z" level=info msg="StartContainer for \"5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e\"" Apr 16 04:49:59.469387 systemd[1]: run-containerd-runc-k8s.io-5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e-runc.23xI3E.mount: Deactivated successfully. Apr 16 04:49:59.816424 containerd[1593]: time="2026-04-16T04:49:59.812871139Z" level=info msg="StartContainer for \"5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e\" returns successfully" Apr 16 04:50:00.052554 containerd[1593]: time="2026-04-16T04:50:00.051064737Z" level=info msg="shim disconnected" id=5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e namespace=k8s.io Apr 16 04:50:00.052554 containerd[1593]: time="2026-04-16T04:50:00.051193481Z" level=warning msg="cleaning up after shim disconnected" id=5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e namespace=k8s.io Apr 16 04:50:00.052554 containerd[1593]: time="2026-04-16T04:50:00.051201127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:50:00.213572 kubelet[3602]: E0416 04:50:00.213173 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:50:00.378563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e-rootfs.mount: Deactivated successfully. 
Apr 16 04:50:01.350608 sshd[4224]: pam_unix(sshd:session): session closed for user core Apr 16 04:50:01.672863 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:59360.service: Deactivated successfully. Apr 16 04:50:01.754036 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 04:50:01.798109 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Apr 16 04:50:01.829339 systemd-logind[1563]: Removed session 9. Apr 16 04:50:01.842407 kubelet[3602]: E0416 04:50:01.842011 3602 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.432s" Apr 16 04:50:01.845865 kubelet[3602]: E0416 04:50:01.845842 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:50:01.851872 containerd[1593]: time="2026-04-16T04:50:01.851832632Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 04:50:02.216099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804305706.mount: Deactivated successfully. 
Apr 16 04:50:02.253585 containerd[1593]: time="2026-04-16T04:50:02.215423839Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d\""
Apr 16 04:50:02.499919 containerd[1593]: time="2026-04-16T04:50:02.495609987Z" level=info msg="StartContainer for \"e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d\""
Apr 16 04:50:03.046128 containerd[1593]: time="2026-04-16T04:50:03.045934687Z" level=info msg="StartContainer for \"e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d\" returns successfully"
Apr 16 04:50:03.169887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d-rootfs.mount: Deactivated successfully.
Apr 16 04:50:03.269098 containerd[1593]: time="2026-04-16T04:50:03.266184793Z" level=info msg="shim disconnected" id=e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d namespace=k8s.io
Apr 16 04:50:03.269098 containerd[1593]: time="2026-04-16T04:50:03.268476721Z" level=warning msg="cleaning up after shim disconnected" id=e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d namespace=k8s.io
Apr 16 04:50:03.269098 containerd[1593]: time="2026-04-16T04:50:03.268495376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:50:03.890241 containerd[1593]: time="2026-04-16T04:50:03.889242928Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 16 04:50:03.890241 containerd[1593]: time="2026-04-16T04:50:03.889782894Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:50:03.903406 containerd[1593]: time="2026-04-16T04:50:03.903198835Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:50:03.923882 containerd[1593]: time="2026-04-16T04:50:03.923311469Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 12.364043428s"
Apr 16 04:50:03.923882 containerd[1593]: time="2026-04-16T04:50:03.923473847Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 16 04:50:04.037058 kubelet[3602]: E0416 04:50:04.036565 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:04.042095 containerd[1593]: time="2026-04-16T04:50:04.041034299Z" level=info msg="CreateContainer within sandbox \"ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 16 04:50:04.095098 containerd[1593]: time="2026-04-16T04:50:04.091302656Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 16 04:50:04.272319 containerd[1593]: time="2026-04-16T04:50:04.272140161Z" level=info msg="CreateContainer within sandbox \"ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\""
Apr 16 04:50:04.342985 containerd[1593]: time="2026-04-16T04:50:04.342570704Z" level=info msg="StartContainer for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\""
Apr 16 04:50:05.071836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211474739.mount: Deactivated successfully.
Apr 16 04:50:05.368227 containerd[1593]: time="2026-04-16T04:50:05.366636602Z" level=info msg="CreateContainer within sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\""
Apr 16 04:50:05.435026 kubelet[3602]: E0416 04:50:05.433158 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:05.467072 containerd[1593]: time="2026-04-16T04:50:05.465561510Z" level=info msg="StartContainer for \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\""
Apr 16 04:50:05.879588 containerd[1593]: time="2026-04-16T04:50:05.878362264Z" level=info msg="StartContainer for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" returns successfully"
Apr 16 04:50:05.974852 containerd[1593]: time="2026-04-16T04:50:05.974212775Z" level=info msg="StartContainer for \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\" returns successfully"
Apr 16 04:50:06.185295 systemd[1]: run-containerd-runc-k8s.io-0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640-runc.2iNT6X.mount: Deactivated successfully.
Apr 16 04:50:06.452875 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:32832.service - OpenSSH per-connection server daemon (10.0.0.1:32832).
Apr 16 04:50:06.715696 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 32832 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:06.714917 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:06.733419 kubelet[3602]: I0416 04:50:06.716136 3602 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 16 04:50:06.739628 systemd-logind[1563]: New session 10 of user core.
Apr 16 04:50:06.775128 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 16 04:50:06.788387 kubelet[3602]: E0416 04:50:06.784519 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:07.386715 kubelet[3602]: E0416 04:50:07.386138 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:08.444978 kubelet[3602]: I0416 04:50:08.442938 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tbrgm" podStartSLOduration=6.121807244 podStartE2EDuration="31.353179231s" podCreationTimestamp="2026-04-16 04:49:37 +0000 UTC" firstStartedPulling="2026-04-16 04:49:38.764042008 +0000 UTC m=+50.512249960" lastFinishedPulling="2026-04-16 04:50:03.995413987 +0000 UTC m=+75.743621947" observedRunningTime="2026-04-16 04:50:07.053401806 +0000 UTC m=+78.801609765" watchObservedRunningTime="2026-04-16 04:50:08.353179231 +0000 UTC m=+80.101387182"
Apr 16 04:50:08.860261 kubelet[3602]: E0416 04:50:08.857429 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:09.666026 kubelet[3602]: E0416 04:50:09.658590 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:09.791103 kubelet[3602]: E0416 04:50:09.784856 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:10.257752 kubelet[3602]: E0416 04:50:10.256809 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:10.630312 sshd[4457]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:10.706829 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:32842.service - OpenSSH per-connection server daemon (10.0.0.1:32842).
Apr 16 04:50:10.720188 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:32832.service: Deactivated successfully.
Apr 16 04:50:10.775502 systemd[1]: session-10.scope: Deactivated successfully.
Apr 16 04:50:10.778574 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit.
Apr 16 04:50:10.780353 systemd-logind[1563]: Removed session 10.
Apr 16 04:50:11.001606 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 32842 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:11.052754 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:11.327759 systemd-logind[1563]: New session 11 of user core.
Apr 16 04:50:11.351818 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 16 04:50:14.636918 sshd[4526]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:14.723235 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:32848.service - OpenSSH per-connection server daemon (10.0.0.1:32848).
Apr 16 04:50:14.740897 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:32842.service: Deactivated successfully.
Apr 16 04:50:14.779106 systemd[1]: session-11.scope: Deactivated successfully.
Apr 16 04:50:14.814208 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit.
Apr 16 04:50:14.990158 systemd-logind[1563]: Removed session 11.
Apr 16 04:50:15.270334 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 32848 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:15.317058 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:15.322715 systemd-logind[1563]: New session 12 of user core.
Apr 16 04:50:15.346018 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 16 04:50:16.024252 sshd[4540]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:16.066269 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:32848.service: Deactivated successfully.
Apr 16 04:50:16.105404 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 04:50:16.117723 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit.
Apr 16 04:50:16.121280 systemd-logind[1563]: Removed session 12.
Apr 16 04:50:20.351820 systemd-networkd[1242]: cilium_host: Link UP
Apr 16 04:50:20.398339 systemd-networkd[1242]: cilium_net: Link UP
Apr 16 04:50:20.398914 systemd-networkd[1242]: cilium_net: Gained carrier
Apr 16 04:50:20.399021 systemd-networkd[1242]: cilium_host: Gained carrier
Apr 16 04:50:20.765741 systemd-networkd[1242]: cilium_net: Gained IPv6LL
Apr 16 04:50:21.068707 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:47724.service - OpenSSH per-connection server daemon (10.0.0.1:47724).
Apr 16 04:50:21.473116 systemd-networkd[1242]: cilium_host: Gained IPv6LL
Apr 16 04:50:21.621790 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 47724 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:21.635801 systemd-networkd[1242]: cilium_vxlan: Link UP
Apr 16 04:50:21.635807 systemd-networkd[1242]: cilium_vxlan: Gained carrier
Apr 16 04:50:21.678292 sshd[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:21.783002 systemd-logind[1563]: New session 13 of user core.
Apr 16 04:50:21.820041 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 04:50:23.109247 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL
Apr 16 04:50:24.687810 sshd[4614]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:24.726868 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:47724.service: Deactivated successfully.
Apr 16 04:50:24.740999 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 04:50:24.750080 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit.
Apr 16 04:50:24.756155 systemd-logind[1563]: Removed session 13.
Apr 16 04:50:25.666717 kernel: NET: Registered PF_ALG protocol family
Apr 16 04:50:28.434195 kubelet[3602]: E0416 04:50:28.432774 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:29.831804 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:50500.service - OpenSSH per-connection server daemon (10.0.0.1:50500).
Apr 16 04:50:30.203618 sshd[4707]: Accepted publickey for core from 10.0.0.1 port 50500 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:30.351588 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:30.557532 systemd-logind[1563]: New session 14 of user core.
Apr 16 04:50:30.571044 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 04:50:33.206164 sshd[4707]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:33.353870 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:50500.service: Deactivated successfully.
Apr 16 04:50:33.552178 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 04:50:33.579191 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Apr 16 04:50:33.782240 systemd-logind[1563]: Removed session 14.
Apr 16 04:50:39.319902 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:51168.service - OpenSSH per-connection server daemon (10.0.0.1:51168).
Apr 16 04:50:40.486347 systemd-networkd[1242]: lxc_health: Link UP
Apr 16 04:50:40.806964 systemd-networkd[1242]: lxc_health: Gained carrier
Apr 16 04:50:41.061955 kubelet[3602]: E0416 04:50:41.057363 3602 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.478s"
Apr 16 04:50:41.156230 kubelet[3602]: E0416 04:50:41.144829 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:41.350550 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 51168 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:41.345292 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:42.147614 systemd-logind[1563]: New session 15 of user core.
Apr 16 04:50:42.150856 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 04:50:42.651291 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Apr 16 04:50:42.989991 kubelet[3602]: E0416 04:50:42.949840 3602 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.834s"
Apr 16 04:50:44.122349 kubelet[3602]: E0416 04:50:44.114242 3602 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.164s"
Apr 16 04:50:45.649584 kubelet[3602]: I0416 04:50:45.641127 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vgzf5" podStartSLOduration=55.634667925 podStartE2EDuration="1m8.641111967s" podCreationTimestamp="2026-04-16 04:49:37 +0000 UTC" firstStartedPulling="2026-04-16 04:49:38.551756714 +0000 UTC m=+50.299964673" lastFinishedPulling="2026-04-16 04:49:51.558200757 +0000 UTC m=+63.306408715" observedRunningTime="2026-04-16 04:50:09.657918255 +0000 UTC m=+81.406126213" watchObservedRunningTime="2026-04-16 04:50:45.641111967 +0000 UTC m=+117.389319918"
Apr 16 04:50:46.503331 kubelet[3602]: E0416 04:50:46.503204 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:50:47.850286 kubelet[3602]: E0416 04:50:47.850229 3602 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.076s"
Apr 16 04:50:48.046378 sshd[4927]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:48.138912 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:59064.service - OpenSSH per-connection server daemon (10.0.0.1:59064).
Apr 16 04:50:48.172926 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:51168.service: Deactivated successfully.
Apr 16 04:50:48.384045 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 04:50:48.442325 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Apr 16 04:50:48.481640 systemd-logind[1563]: Removed session 15.
Apr 16 04:50:48.779853 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 59064 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:48.779757 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:48.819104 systemd-logind[1563]: New session 16 of user core.
Apr 16 04:50:48.933956 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 04:50:50.784385 sshd[4974]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:50.806112 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:59080.service - OpenSSH per-connection server daemon (10.0.0.1:59080).
Apr 16 04:50:50.807115 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:59064.service: Deactivated successfully.
Apr 16 04:50:50.818056 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 04:50:50.821856 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
Apr 16 04:50:50.844551 systemd-logind[1563]: Removed session 16.
Apr 16 04:50:50.910259 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 59080 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:50.912690 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:51.018279 systemd-logind[1563]: New session 17 of user core.
Apr 16 04:50:51.033097 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 04:50:53.550997 sshd[4995]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:53.591899 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:59094.service - OpenSSH per-connection server daemon (10.0.0.1:59094).
Apr 16 04:50:53.609918 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:59080.service: Deactivated successfully.
Apr 16 04:50:53.630660 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 04:50:53.656698 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
Apr 16 04:50:53.672407 systemd-logind[1563]: Removed session 17.
Apr 16 04:50:53.792639 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 59094 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:53.822207 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:53.852517 systemd-logind[1563]: New session 18 of user core.
Apr 16 04:50:53.867476 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 04:50:56.157526 sshd[5024]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:56.185082 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:60498.service - OpenSSH per-connection server daemon (10.0.0.1:60498).
Apr 16 04:50:56.193046 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:59094.service: Deactivated successfully.
Apr 16 04:50:56.217595 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 04:50:56.219066 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
Apr 16 04:50:56.242125 systemd-logind[1563]: Removed session 18.
Apr 16 04:50:56.295578 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:50:56.297919 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:50:56.310945 systemd-logind[1563]: New session 19 of user core.
Apr 16 04:50:56.330352 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 04:50:56.887201 sshd[5040]: pam_unix(sshd:session): session closed for user core
Apr 16 04:50:56.894883 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
Apr 16 04:50:56.895698 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:60498.service: Deactivated successfully.
Apr 16 04:50:56.899580 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 04:50:56.901717 systemd-logind[1563]: Removed session 19.
Apr 16 04:51:01.997984 systemd[1]: Started sshd@19-10.0.0.5:22-10.0.0.1:60514.service - OpenSSH per-connection server daemon (10.0.0.1:60514).
Apr 16 04:51:02.107485 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 60514 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:51:02.113076 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:51:02.140681 systemd-logind[1563]: New session 20 of user core.
Apr 16 04:51:02.153425 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 04:51:02.382075 kubelet[3602]: E0416 04:51:02.374287 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:51:02.454040 sshd[5064]: pam_unix(sshd:session): session closed for user core
Apr 16 04:51:02.472864 systemd[1]: sshd@19-10.0.0.5:22-10.0.0.1:60514.service: Deactivated successfully.
Apr 16 04:51:02.477721 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
Apr 16 04:51:02.481052 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 04:51:02.485085 systemd-logind[1563]: Removed session 20.
Apr 16 04:51:07.503737 systemd[1]: Started sshd@20-10.0.0.5:22-10.0.0.1:59830.service - OpenSSH per-connection server daemon (10.0.0.1:59830).
Apr 16 04:51:07.556726 sshd[5081]: Accepted publickey for core from 10.0.0.1 port 59830 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:51:07.558209 sshd[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:51:07.573629 systemd-logind[1563]: New session 21 of user core.
Apr 16 04:51:07.582822 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 04:51:08.454012 sshd[5081]: pam_unix(sshd:session): session closed for user core
Apr 16 04:51:08.488129 systemd[1]: Started sshd@21-10.0.0.5:22-10.0.0.1:59846.service - OpenSSH per-connection server daemon (10.0.0.1:59846).
Apr 16 04:51:08.499968 systemd[1]: sshd@20-10.0.0.5:22-10.0.0.1:59830.service: Deactivated successfully.
Apr 16 04:51:08.547582 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 04:51:08.550828 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit.
Apr 16 04:51:08.552285 systemd-logind[1563]: Removed session 21.
Apr 16 04:51:08.710717 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 59846 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:51:08.712187 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:51:08.759144 systemd-logind[1563]: New session 22 of user core.
Apr 16 04:51:08.788590 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 04:51:12.666144 containerd[1593]: time="2026-04-16T04:51:12.658941734Z" level=info msg="StopContainer for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" with timeout 30 (s)"
Apr 16 04:51:12.679215 containerd[1593]: time="2026-04-16T04:51:12.675349899Z" level=info msg="Stop container \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" with signal terminated"
Apr 16 04:51:13.279049 containerd[1593]: time="2026-04-16T04:51:13.278919232Z" level=info msg="StopContainer for \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\" with timeout 2 (s)"
Apr 16 04:51:13.279289 containerd[1593]: time="2026-04-16T04:51:13.279088070Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 04:51:13.291277 containerd[1593]: time="2026-04-16T04:51:13.289063945Z" level=info msg="Stop container \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\" with signal terminated"
Apr 16 04:51:13.489006 systemd-networkd[1242]: lxc_health: Link DOWN
Apr 16 04:51:13.489014 systemd-networkd[1242]: lxc_health: Lost carrier
Apr 16 04:51:13.587905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734-rootfs.mount: Deactivated successfully.
Apr 16 04:51:13.700519 containerd[1593]: time="2026-04-16T04:51:13.699022013Z" level=info msg="shim disconnected" id=a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734 namespace=k8s.io
Apr 16 04:51:13.700519 containerd[1593]: time="2026-04-16T04:51:13.699456727Z" level=warning msg="cleaning up after shim disconnected" id=a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734 namespace=k8s.io
Apr 16 04:51:13.700519 containerd[1593]: time="2026-04-16T04:51:13.699466495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:51:13.742257 sshd[5097]: pam_unix(sshd:session): session closed for user core
Apr 16 04:51:13.769066 systemd[1]: Started sshd@22-10.0.0.5:22-10.0.0.1:59858.service - OpenSSH per-connection server daemon (10.0.0.1:59858).
Apr 16 04:51:13.769884 systemd[1]: sshd@21-10.0.0.5:22-10.0.0.1:59846.service: Deactivated successfully.
Apr 16 04:51:13.775762 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 04:51:13.776073 containerd[1593]: time="2026-04-16T04:51:13.775851108Z" level=warning msg="cleanup warnings time=\"2026-04-16T04:51:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 16 04:51:13.777542 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit.
Apr 16 04:51:13.779433 systemd-logind[1563]: Removed session 22.
Apr 16 04:51:13.790176 containerd[1593]: time="2026-04-16T04:51:13.787411681Z" level=info msg="StopContainer for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" returns successfully"
Apr 16 04:51:13.790176 containerd[1593]: time="2026-04-16T04:51:13.789893204Z" level=info msg="StopPodSandbox for \"ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053\""
Apr 16 04:51:13.790176 containerd[1593]: time="2026-04-16T04:51:13.789944841Z" level=info msg="Container to stop \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 04:51:13.810910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053-shm.mount: Deactivated successfully.
Apr 16 04:51:13.900374 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 59858 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:51:13.908077 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:51:14.006365 systemd-logind[1563]: New session 23 of user core.
Apr 16 04:51:14.012210 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 16 04:51:14.116939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053-rootfs.mount: Deactivated successfully.
Apr 16 04:51:14.119474 containerd[1593]: time="2026-04-16T04:51:14.106428478Z" level=info msg="shim disconnected" id=ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053 namespace=k8s.io
Apr 16 04:51:14.119474 containerd[1593]: time="2026-04-16T04:51:14.117498194Z" level=warning msg="cleaning up after shim disconnected" id=ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053 namespace=k8s.io
Apr 16 04:51:14.119474 containerd[1593]: time="2026-04-16T04:51:14.117618596Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:51:14.170564 containerd[1593]: time="2026-04-16T04:51:14.169568114Z" level=info msg="TearDown network for sandbox \"ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053\" successfully"
Apr 16 04:51:14.170564 containerd[1593]: time="2026-04-16T04:51:14.169886635Z" level=info msg="StopPodSandbox for \"ae21fbf5010da70405ecfee7bae3d7c4971498bcc474e8f11a60245a5af16053\" returns successfully"
Apr 16 04:51:14.318086 kubelet[3602]: I0416 04:51:14.317363 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gznj\" (UniqueName: \"kubernetes.io/projected/a84bdfa6-83f7-4857-9344-aa7e6b030848-kube-api-access-2gznj\") pod \"a84bdfa6-83f7-4857-9344-aa7e6b030848\" (UID: \"a84bdfa6-83f7-4857-9344-aa7e6b030848\") "
Apr 16 04:51:14.319330 kubelet[3602]: I0416 04:51:14.318638 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a84bdfa6-83f7-4857-9344-aa7e6b030848-cilium-config-path\") pod \"a84bdfa6-83f7-4857-9344-aa7e6b030848\" (UID: \"a84bdfa6-83f7-4857-9344-aa7e6b030848\") "
Apr 16 04:51:14.342539 kubelet[3602]: I0416 04:51:14.341285 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84bdfa6-83f7-4857-9344-aa7e6b030848-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a84bdfa6-83f7-4857-9344-aa7e6b030848" (UID: "a84bdfa6-83f7-4857-9344-aa7e6b030848"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 04:51:14.345276 kubelet[3602]: I0416 04:51:14.345183 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84bdfa6-83f7-4857-9344-aa7e6b030848-kube-api-access-2gznj" (OuterVolumeSpecName: "kube-api-access-2gznj") pod "a84bdfa6-83f7-4857-9344-aa7e6b030848" (UID: "a84bdfa6-83f7-4857-9344-aa7e6b030848"). InnerVolumeSpecName "kube-api-access-2gznj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 04:51:14.347901 systemd[1]: var-lib-kubelet-pods-a84bdfa6\x2d83f7\x2d4857\x2d9344\x2daa7e6b030848-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2gznj.mount: Deactivated successfully.
Apr 16 04:51:14.429328 kubelet[3602]: I0416 04:51:14.427981 3602 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gznj\" (UniqueName: \"kubernetes.io/projected/a84bdfa6-83f7-4857-9344-aa7e6b030848-kube-api-access-2gznj\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:14.429328 kubelet[3602]: I0416 04:51:14.428249 3602 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a84bdfa6-83f7-4857-9344-aa7e6b030848-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:14.747929 kubelet[3602]: I0416 04:51:14.747135 3602 scope.go:117] "RemoveContainer" containerID="a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734"
Apr 16 04:51:14.784025 containerd[1593]: time="2026-04-16T04:51:14.783127909Z" level=info msg="RemoveContainer for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\""
Apr 16 04:51:14.856039 containerd[1593]: time="2026-04-16T04:51:14.855425614Z" level=info msg="RemoveContainer for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" returns successfully"
Apr 16 04:51:14.857355 kubelet[3602]: I0416 04:51:14.857089 3602 scope.go:117] "RemoveContainer" containerID="a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734"
Apr 16 04:51:14.868016 containerd[1593]: time="2026-04-16T04:51:14.867474488Z" level=error msg="ContainerStatus for \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\": not found"
Apr 16 04:51:14.875482 kubelet[3602]: E0416 04:51:14.875373 3602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\": not found" containerID="a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734"
Apr 16 04:51:14.876815 kubelet[3602]: I0416 04:51:14.876354 3602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734"} err="failed to get container status \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3e3b2783acdaf79152006590c4fd893eb9afffd6f05e8c15e48948a63663734\": not found"
Apr 16 04:51:15.532540 containerd[1593]: time="2026-04-16T04:51:15.530895780Z" level=info msg="Kill container \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\""
Apr 16 04:51:15.608123 sshd[5167]: pam_unix(sshd:session): session closed for user core
Apr 16 04:51:15.662397 systemd[1]: Started sshd@23-10.0.0.5:22-10.0.0.1:59564.service - OpenSSH per-connection server daemon (10.0.0.1:59564).
Apr 16 04:51:15.712078 systemd[1]: sshd@22-10.0.0.5:22-10.0.0.1:59858.service: Deactivated successfully.
Apr 16 04:51:15.815621 systemd[1]: session-23.scope: Deactivated successfully.
Apr 16 04:51:15.822073 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
Apr 16 04:51:15.877247 systemd-logind[1563]: Removed session 23.
Apr 16 04:51:16.553479 kubelet[3602]: I0416 04:51:16.552521 3602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84bdfa6-83f7-4857-9344-aa7e6b030848" path="/var/lib/kubelet/pods/a84bdfa6-83f7-4857-9344-aa7e6b030848/volumes"
Apr 16 04:51:16.722538 containerd[1593]: time="2026-04-16T04:51:16.721825768Z" level=info msg="shim disconnected" id=0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640 namespace=k8s.io
Apr 16 04:51:16.722538 containerd[1593]: time="2026-04-16T04:51:16.722181168Z" level=warning msg="cleaning up after shim disconnected" id=0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640 namespace=k8s.io
Apr 16 04:51:16.722538 containerd[1593]: time="2026-04-16T04:51:16.722189097Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:51:16.723618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640-rootfs.mount: Deactivated successfully.
Apr 16 04:51:16.823251 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 59564 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:51:16.825357 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:51:16.900316 systemd-logind[1563]: New session 24 of user core.
Apr 16 04:51:16.911515 containerd[1593]: time="2026-04-16T04:51:16.911376420Z" level=info msg="StopContainer for \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\" returns successfully"
Apr 16 04:51:16.926239 containerd[1593]: time="2026-04-16T04:51:16.925874386Z" level=info msg="StopPodSandbox for \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\""
Apr 16 04:51:16.926239 containerd[1593]: time="2026-04-16T04:51:16.926273013Z" level=info msg="Container to stop \"e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 04:51:16.926239 containerd[1593]: time="2026-04-16T04:51:16.926286237Z" level=info msg="Container to stop \"5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 04:51:16.926239 containerd[1593]: time="2026-04-16T04:51:16.926295924Z" level=info msg="Container to stop \"e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 04:51:16.926239 containerd[1593]: time="2026-04-16T04:51:16.926303438Z" level=info msg="Container to stop \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 04:51:16.926239 containerd[1593]: time="2026-04-16T04:51:16.926310626Z" level=info msg="Container to stop \"a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 04:51:16.935497 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 16 04:51:16.950192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c-shm.mount: Deactivated successfully.
Apr 16 04:51:17.077240 sshd[5225]: pam_unix(sshd:session): session closed for user core
Apr 16 04:51:17.140923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c-rootfs.mount: Deactivated successfully.
Apr 16 04:51:17.160107 containerd[1593]: time="2026-04-16T04:51:17.142483808Z" level=info msg="shim disconnected" id=1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c namespace=k8s.io
Apr 16 04:51:17.160107 containerd[1593]: time="2026-04-16T04:51:17.149769986Z" level=warning msg="cleaning up after shim disconnected" id=1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c namespace=k8s.io
Apr 16 04:51:17.160107 containerd[1593]: time="2026-04-16T04:51:17.149908440Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:51:17.302010 systemd[1]: Started sshd@24-10.0.0.5:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566).
Apr 16 04:51:17.303585 systemd[1]: sshd@23-10.0.0.5:22-10.0.0.1:59564.service: Deactivated successfully.
Apr 16 04:51:17.469928 systemd[1]: session-24.scope: Deactivated successfully.
Apr 16 04:51:17.499487 kubelet[3602]: E0416 04:51:17.491475 3602 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:51:17.525977 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
Apr 16 04:51:17.549965 systemd-logind[1563]: Removed session 24.
Apr 16 04:51:17.938671 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ
Apr 16 04:51:17.996762 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:51:18.013012 containerd[1593]: time="2026-04-16T04:51:17.997630198Z" level=info msg="TearDown network for sandbox \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" successfully"
Apr 16 04:51:18.013012 containerd[1593]: time="2026-04-16T04:51:17.997726352Z" level=info msg="StopPodSandbox for \"1d4b2365f5eb6126b5e838486e625056acedc26e4612382b656083cae100759c\" returns successfully"
Apr 16 04:51:18.186021 systemd-logind[1563]: New session 25 of user core.
Apr 16 04:51:18.236250 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 16 04:51:18.427368 kubelet[3602]: E0416 04:51:18.421693 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:51:18.549280 kubelet[3602]: I0416 04:51:18.543150 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwvsc\" (UniqueName: \"kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-kube-api-access-gwvsc\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.752324 kubelet[3602]: I0416 04:51:18.705300 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cni-path\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.752324 kubelet[3602]: I0416 04:51:18.745220 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-lib-modules\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.785934 kubelet[3602]: I0416 04:51:18.784082 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-kernel\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.785934 kubelet[3602]: I0416 04:51:18.784266 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-cgroup\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.941853 kubelet[3602]: I0416 04:51:18.812410 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:18.948630 kubelet[3602]: I0416 04:51:18.947045 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:18.948630 kubelet[3602]: I0416 04:51:18.948308 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:18.971956 kubelet[3602]: I0416 04:51:18.954301 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-etc-cni-netd\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.971956 kubelet[3602]: I0416 04:51:18.955351 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62367a97-162f-47df-afc7-43e025426f94-cilium-config-path\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.971956 kubelet[3602]: I0416 04:51:18.955381 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-hubble-tls\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.971956 kubelet[3602]: I0416 04:51:18.955475 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-xtables-lock\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.971956 kubelet[3602]: I0416 04:51:18.955488 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-net\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.971956 kubelet[3602]: I0416 04:51:18.955502 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-hostproc\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955620 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-run\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955634 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62367a97-162f-47df-afc7-43e025426f94-clustermesh-secrets\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955647 3602 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-bpf-maps\") pod \"62367a97-162f-47df-afc7-43e025426f94\" (UID: \"62367a97-162f-47df-afc7-43e025426f94\") "
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955685 3602 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955693 3602 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955700 3602 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:18.974060 kubelet[3602]: I0416 04:51:18.955994 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:18.974343 kubelet[3602]: I0416 04:51:18.797725 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cni-path" (OuterVolumeSpecName: "cni-path") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:18.980959 kubelet[3602]: I0416 04:51:18.976395 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:18.998064 kubelet[3602]: I0416 04:51:18.997604 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:19.000226 kubelet[3602]: I0416 04:51:19.000201 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:19.115338 kubelet[3602]: I0416 04:51:19.038084 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:19.312538 kubelet[3602]: I0416 04:51:19.299629 3602 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.312538 kubelet[3602]: I0416 04:51:19.299774 3602 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.312538 kubelet[3602]: I0416 04:51:19.299782 3602 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.312538 kubelet[3602]: I0416 04:51:19.299791 3602 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.312538 kubelet[3602]: I0416 04:51:19.299799 3602 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.312538 kubelet[3602]: I0416 04:51:19.300340 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-hostproc" (OuterVolumeSpecName: "hostproc") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 16 04:51:19.316282 systemd[1]: var-lib-kubelet-pods-62367a97\x2d162f\x2d47df\x2dafc7\x2d43e025426f94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgwvsc.mount: Deactivated successfully.
Apr 16 04:51:19.340646 kubelet[3602]: I0416 04:51:19.332205 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62367a97-162f-47df-afc7-43e025426f94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 04:51:19.354334 systemd[1]: var-lib-kubelet-pods-62367a97\x2d162f\x2d47df\x2dafc7\x2d43e025426f94-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 16 04:51:19.356271 kubelet[3602]: I0416 04:51:19.355809 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-kube-api-access-gwvsc" (OuterVolumeSpecName: "kube-api-access-gwvsc") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "kube-api-access-gwvsc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 04:51:19.358862 kubelet[3602]: I0416 04:51:19.358836 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 04:51:19.426595 kubelet[3602]: I0416 04:51:19.404317 3602 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.437644 kubelet[3602]: I0416 04:51:19.430287 3602 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62367a97-162f-47df-afc7-43e025426f94-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.437644 kubelet[3602]: I0416 04:51:19.431758 3602 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwvsc\" (UniqueName: \"kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-kube-api-access-gwvsc\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.437644 kubelet[3602]: I0416 04:51:19.431897 3602 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62367a97-162f-47df-afc7-43e025426f94-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.437644 kubelet[3602]: I0416 04:51:19.431909 3602 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62367a97-162f-47df-afc7-43e025426f94-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:19.450827 kubelet[3602]: I0416 04:51:19.448103 3602 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-16T04:51:19Z","lastTransitionTime":"2026-04-16T04:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 16 04:51:19.581043 kubelet[3602]: I0416 04:51:19.559412 3602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62367a97-162f-47df-afc7-43e025426f94-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "62367a97-162f-47df-afc7-43e025426f94" (UID: "62367a97-162f-47df-afc7-43e025426f94"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 04:51:19.606641 systemd[1]: var-lib-kubelet-pods-62367a97\x2d162f\x2d47df\x2dafc7\x2d43e025426f94-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 16 04:51:19.844992 kubelet[3602]: I0416 04:51:19.844682 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d241ed92-f4f1-4b43-86e2-d83270875316-clustermesh-secrets\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.844992 kubelet[3602]: I0416 04:51:19.844807 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-host-proc-sys-net\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.844992 kubelet[3602]: I0416 04:51:19.844829 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-cilium-run\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.844992 kubelet[3602]: I0416 04:51:19.844842 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-hostproc\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.844992 kubelet[3602]: I0416 04:51:19.844854 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-lib-modules\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.844992 kubelet[3602]: I0416 04:51:19.844866 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-bpf-maps\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845298 kubelet[3602]: I0416 04:51:19.844878 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-xtables-lock\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845298 kubelet[3602]: I0416 04:51:19.844891 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-host-proc-sys-kernel\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845298 kubelet[3602]: I0416 04:51:19.844902 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d241ed92-f4f1-4b43-86e2-d83270875316-hubble-tls\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845298 kubelet[3602]: I0416 04:51:19.844970 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-etc-cni-netd\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845298 kubelet[3602]: I0416 04:51:19.844982 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d241ed92-f4f1-4b43-86e2-d83270875316-cilium-ipsec-secrets\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845298 kubelet[3602]: I0416 04:51:19.844997 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-cilium-cgroup\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845503 kubelet[3602]: I0416 04:51:19.845008 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d241ed92-f4f1-4b43-86e2-d83270875316-cni-path\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845503 kubelet[3602]: I0416 04:51:19.845021 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d241ed92-f4f1-4b43-86e2-d83270875316-cilium-config-path\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845503 kubelet[3602]: I0416 04:51:19.845033 3602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67m7k\" (UniqueName: \"kubernetes.io/projected/d241ed92-f4f1-4b43-86e2-d83270875316-kube-api-access-67m7k\") pod \"cilium-hvp9q\" (UID: \"d241ed92-f4f1-4b43-86e2-d83270875316\") " pod="kube-system/cilium-hvp9q"
Apr 16 04:51:19.845503 kubelet[3602]: I0416 04:51:19.845050 3602 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62367a97-162f-47df-afc7-43e025426f94-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 16 04:51:20.222918 kubelet[3602]: I0416 04:51:20.220605 3602 scope.go:117] "RemoveContainer" containerID="0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640"
Apr 16 04:51:20.254233 containerd[1593]: time="2026-04-16T04:51:20.253908007Z" level=info msg="RemoveContainer for \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\""
Apr 16 04:51:20.292146 kubelet[3602]: E0416 04:51:20.290067 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:51:20.298003 containerd[1593]: time="2026-04-16T04:51:20.297789703Z" level=info msg="RemoveContainer for \"0d527491900e93d4ae2d765c72af9db55ea1aa0052e425ed3adf5d124fc32640\" returns successfully"
Apr 16 04:51:20.299037 containerd[1593]: time="2026-04-16T04:51:20.298855677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvp9q,Uid:d241ed92-f4f1-4b43-86e2-d83270875316,Namespace:kube-system,Attempt:0,}"
Apr 16 04:51:20.311703 kubelet[3602]: I0416 04:51:20.311137 3602 scope.go:117] "RemoveContainer" containerID="e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d"
Apr 16 04:51:20.352790 containerd[1593]: time="2026-04-16T04:51:20.352511584Z" level=info msg="RemoveContainer for \"e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d\""
Apr 16 04:51:20.368306 containerd[1593]: time="2026-04-16T04:51:20.367848148Z" level=info msg="RemoveContainer for \"e4cf83f19d1ecfb28297c4672beafb81296c0b425624ae8186968af0b9c0d84d\" returns successfully"
Apr 16 04:51:20.371894 kubelet[3602]: I0416 04:51:20.371841 3602 scope.go:117] "RemoveContainer" containerID="5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e"
Apr 16 04:51:20.377157 kubelet[3602]: I0416 04:51:20.376818 3602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62367a97-162f-47df-afc7-43e025426f94" path="/var/lib/kubelet/pods/62367a97-162f-47df-afc7-43e025426f94/volumes"
Apr 16 04:51:20.410779 containerd[1593]: time="2026-04-16T04:51:20.410505946Z" level=info msg="RemoveContainer for \"5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e\""
Apr 16 04:51:20.419286 containerd[1593]: time="2026-04-16T04:51:20.419049164Z" level=info msg="RemoveContainer for \"5be376ae14e71390d0a232f486b4b103c9f97c9c9a8e9a92d0c44d62820a247e\" returns successfully"
Apr 16 04:51:20.423623 kubelet[3602]: I0416 04:51:20.423091 3602 scope.go:117] "RemoveContainer" containerID="e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13"
Apr 16 04:51:20.429459 containerd[1593]: time="2026-04-16T04:51:20.428792080Z" level=info msg="RemoveContainer for \"e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13\""
Apr 16 04:51:20.437183 containerd[1593]: time="2026-04-16T04:51:20.432334258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:51:20.442025 containerd[1593]: time="2026-04-16T04:51:20.441281965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:51:20.442025 containerd[1593]: time="2026-04-16T04:51:20.441593263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:51:20.443599 containerd[1593]: time="2026-04-16T04:51:20.443547395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:51:20.449428 containerd[1593]: time="2026-04-16T04:51:20.448646900Z" level=info msg="RemoveContainer for \"e8ff972282e27832aa9eb7b4994313e982687434aabefca8a247f5b3aa4adf13\" returns successfully"
Apr 16 04:51:20.453195 kubelet[3602]: I0416 04:51:20.453160 3602 scope.go:117] "RemoveContainer" containerID="a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a"
Apr 16 04:51:20.469272 containerd[1593]: time="2026-04-16T04:51:20.469046507Z" level=info msg="RemoveContainer for \"a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a\""
Apr 16 04:51:20.478538 containerd[1593]: time="2026-04-16T04:51:20.477427806Z" level=info msg="RemoveContainer for \"a38330f4f88b291005ecf5655b3fa6e1fd8b6095210006f42220f437e5010d4a\" returns successfully"
Apr 16 04:51:20.599211 containerd[1593]: time="2026-04-16T04:51:20.598851563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvp9q,Uid:d241ed92-f4f1-4b43-86e2-d83270875316,Namespace:kube-system,Attempt:0,} returns sandbox id \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\""
Apr 16 04:51:20.602634 kubelet[3602]: E0416 04:51:20.602604 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:51:20.628359 containerd[1593]: time="2026-04-16T04:51:20.628123146Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 16 04:51:20.693453 containerd[1593]: time="2026-04-16T04:51:20.692654733Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"389bbb3f39613f77d60c3fb11d769fbf4a058c5ed39e3902a9015f3809fcd297\""
Apr 16 04:51:20.723806 containerd[1593]: time="2026-04-16T04:51:20.719198594Z" level=info msg="StartContainer for \"389bbb3f39613f77d60c3fb11d769fbf4a058c5ed39e3902a9015f3809fcd297\""
Apr 16 04:51:21.383922 containerd[1593]: time="2026-04-16T04:51:21.382418020Z" level=info msg="StartContainer for \"389bbb3f39613f77d60c3fb11d769fbf4a058c5ed39e3902a9015f3809fcd297\" returns successfully"
Apr 16 04:51:21.466411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-389bbb3f39613f77d60c3fb11d769fbf4a058c5ed39e3902a9015f3809fcd297-rootfs.mount: Deactivated successfully.
Apr 16 04:51:21.490059 containerd[1593]: time="2026-04-16T04:51:21.489425719Z" level=info msg="shim disconnected" id=389bbb3f39613f77d60c3fb11d769fbf4a058c5ed39e3902a9015f3809fcd297 namespace=k8s.io
Apr 16 04:51:21.490059 containerd[1593]: time="2026-04-16T04:51:21.489937752Z" level=warning msg="cleaning up after shim disconnected" id=389bbb3f39613f77d60c3fb11d769fbf4a058c5ed39e3902a9015f3809fcd297 namespace=k8s.io
Apr 16 04:51:21.490059 containerd[1593]: time="2026-04-16T04:51:21.489949073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:51:22.353371 kubelet[3602]: E0416 04:51:22.353262 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:51:22.380959 containerd[1593]: time="2026-04-16T04:51:22.380870226Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 16 04:51:22.624069 containerd[1593]: time="2026-04-16T04:51:22.620847347Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bba4d0420ecab0bdf3dfa23a17e903c78636192271be5b9d7f537b0dc58f61d1\""
Apr 16 04:51:22.626130 containerd[1593]: time="2026-04-16T04:51:22.626101495Z" level=info msg="StartContainer for \"bba4d0420ecab0bdf3dfa23a17e903c78636192271be5b9d7f537b0dc58f61d1\""
Apr 16 04:51:22.683925 kubelet[3602]: E0416 04:51:22.682602 3602 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:51:22.883607 containerd[1593]: time="2026-04-16T04:51:22.875674751Z" level=info msg="StartContainer for \"bba4d0420ecab0bdf3dfa23a17e903c78636192271be5b9d7f537b0dc58f61d1\" returns successfully"
Apr 16 04:51:22.999778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bba4d0420ecab0bdf3dfa23a17e903c78636192271be5b9d7f537b0dc58f61d1-rootfs.mount: Deactivated successfully.
Apr 16 04:51:23.032386 containerd[1593]: time="2026-04-16T04:51:23.031710813Z" level=info msg="shim disconnected" id=bba4d0420ecab0bdf3dfa23a17e903c78636192271be5b9d7f537b0dc58f61d1 namespace=k8s.io Apr 16 04:51:23.032386 containerd[1593]: time="2026-04-16T04:51:23.032341368Z" level=warning msg="cleaning up after shim disconnected" id=bba4d0420ecab0bdf3dfa23a17e903c78636192271be5b9d7f537b0dc58f61d1 namespace=k8s.io Apr 16 04:51:23.032386 containerd[1593]: time="2026-04-16T04:51:23.032358838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:51:23.430277 kubelet[3602]: E0416 04:51:23.430166 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:23.486459 containerd[1593]: time="2026-04-16T04:51:23.486342004Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 04:51:23.549626 containerd[1593]: time="2026-04-16T04:51:23.549564242Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a509854a88525534a163c7831bdb9890e1212a804d944f7ca536a481f7632f45\"" Apr 16 04:51:23.551828 containerd[1593]: time="2026-04-16T04:51:23.551719755Z" level=info msg="StartContainer for \"a509854a88525534a163c7831bdb9890e1212a804d944f7ca536a481f7632f45\"" Apr 16 04:51:23.953616 containerd[1593]: time="2026-04-16T04:51:23.943429128Z" level=info msg="StartContainer for \"a509854a88525534a163c7831bdb9890e1212a804d944f7ca536a481f7632f45\" returns successfully" Apr 16 04:51:24.218158 containerd[1593]: time="2026-04-16T04:51:24.210318597Z" level=info msg="shim disconnected" id=a509854a88525534a163c7831bdb9890e1212a804d944f7ca536a481f7632f45 namespace=k8s.io Apr 16 04:51:24.218158 
containerd[1593]: time="2026-04-16T04:51:24.211354608Z" level=warning msg="cleaning up after shim disconnected" id=a509854a88525534a163c7831bdb9890e1212a804d944f7ca536a481f7632f45 namespace=k8s.io Apr 16 04:51:24.218158 containerd[1593]: time="2026-04-16T04:51:24.211368037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:51:24.213542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a509854a88525534a163c7831bdb9890e1212a804d944f7ca536a481f7632f45-rootfs.mount: Deactivated successfully. Apr 16 04:51:24.555379 kubelet[3602]: E0416 04:51:24.554605 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:25.593698 kubelet[3602]: E0416 04:51:25.593500 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:25.716881 containerd[1593]: time="2026-04-16T04:51:25.716543863Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 04:51:25.847959 containerd[1593]: time="2026-04-16T04:51:25.843270674Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"826bfff016a1d16bae7a7ac36e6bc7b5997f51f05d3c99b57fa05ce6a0b8ffbd\"" Apr 16 04:51:25.892493 containerd[1593]: time="2026-04-16T04:51:25.890699325Z" level=info msg="StartContainer for \"826bfff016a1d16bae7a7ac36e6bc7b5997f51f05d3c99b57fa05ce6a0b8ffbd\"" Apr 16 04:51:26.851118 containerd[1593]: time="2026-04-16T04:51:26.850881743Z" level=info msg="StartContainer for \"826bfff016a1d16bae7a7ac36e6bc7b5997f51f05d3c99b57fa05ce6a0b8ffbd\" returns successfully" 
Apr 16 04:51:26.981111 containerd[1593]: time="2026-04-16T04:51:26.980857562Z" level=info msg="shim disconnected" id=826bfff016a1d16bae7a7ac36e6bc7b5997f51f05d3c99b57fa05ce6a0b8ffbd namespace=k8s.io Apr 16 04:51:26.981974 containerd[1593]: time="2026-04-16T04:51:26.981151763Z" level=warning msg="cleaning up after shim disconnected" id=826bfff016a1d16bae7a7ac36e6bc7b5997f51f05d3c99b57fa05ce6a0b8ffbd namespace=k8s.io Apr 16 04:51:26.981974 containerd[1593]: time="2026-04-16T04:51:26.981161205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:51:26.983177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-826bfff016a1d16bae7a7ac36e6bc7b5997f51f05d3c99b57fa05ce6a0b8ffbd-rootfs.mount: Deactivated successfully. Apr 16 04:51:27.742292 kubelet[3602]: E0416 04:51:27.741720 3602 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:51:28.056230 kubelet[3602]: E0416 04:51:28.051055 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:28.152712 containerd[1593]: time="2026-04-16T04:51:28.150227545Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 04:51:28.379310 containerd[1593]: time="2026-04-16T04:51:28.377522251Z" level=info msg="CreateContainer within sandbox \"f295bbf54734292018f5e6a99940d896a1465348b387605c41d6d0e8f1ee6df7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2b5bb930acbc6a067a906f3177f101e365f558298f5609d7f4d953073654c35d\"" Apr 16 04:51:28.419767 containerd[1593]: time="2026-04-16T04:51:28.419286075Z" level=info msg="StartContainer for 
\"2b5bb930acbc6a067a906f3177f101e365f558298f5609d7f4d953073654c35d\"" Apr 16 04:51:28.801377 containerd[1593]: time="2026-04-16T04:51:28.801191223Z" level=info msg="StartContainer for \"2b5bb930acbc6a067a906f3177f101e365f558298f5609d7f4d953073654c35d\" returns successfully" Apr 16 04:51:30.476954 kubelet[3602]: E0416 04:51:30.473207 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:30.650505 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 16 04:51:30.783016 kubelet[3602]: I0416 04:51:30.780758 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hvp9q" podStartSLOduration=15.780736923 podStartE2EDuration="15.780736923s" podCreationTimestamp="2026-04-16 04:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:51:30.762811146 +0000 UTC m=+162.511019113" watchObservedRunningTime="2026-04-16 04:51:30.780736923 +0000 UTC m=+162.528944873" Apr 16 04:51:32.302815 kubelet[3602]: E0416 04:51:32.301062 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:37.386555 kubelet[3602]: E0416 04:51:37.386087 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:39.907138 systemd-networkd[1242]: lxc_health: Link UP Apr 16 04:51:39.916664 systemd-networkd[1242]: lxc_health: Gained carrier Apr 16 04:51:40.471023 kubelet[3602]: E0416 04:51:40.470995 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 16 04:51:40.641385 kubelet[3602]: E0416 04:51:40.641184 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:41.083730 kubelet[3602]: E0416 04:51:41.083622 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:41.763199 systemd-networkd[1242]: lxc_health: Gained IPv6LL Apr 16 04:51:41.942926 kubelet[3602]: E0416 04:51:41.941996 3602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:51:46.242005 systemd[1]: run-containerd-runc-k8s.io-2b5bb930acbc6a067a906f3177f101e365f558298f5609d7f4d953073654c35d-runc.3yf5Nv.mount: Deactivated successfully. Apr 16 04:51:47.368744 sshd[5280]: pam_unix(sshd:session): session closed for user core Apr 16 04:51:47.566720 systemd[1]: sshd@24-10.0.0.5:22-10.0.0.1:59566.service: Deactivated successfully. Apr 16 04:51:47.570396 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 04:51:47.572080 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Apr 16 04:51:47.573776 systemd-logind[1563]: Removed session 25.