Apr 13 23:03:50.483705 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 23:03:50.483752 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:03:50.483765 kernel: BIOS-provided physical RAM map:
Apr 13 23:03:50.483772 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 13 23:03:50.483779 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 13 23:03:50.483786 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 23:03:50.483794 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 13 23:03:50.483800 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 13 23:03:50.483806 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 23:03:50.483814 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 23:03:50.483821 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 23:03:50.483828 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 23:03:50.483835 kernel: NX (Execute Disable) protection: active
Apr 13 23:03:50.483842 kernel: APIC: Static calls initialized
Apr 13 23:03:50.483851 kernel: SMBIOS 2.8 present.
Apr 13 23:03:50.483888 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 13 23:03:50.483897 kernel: Hypervisor detected: KVM
Apr 13 23:03:50.483905 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 23:03:50.483912 kernel: kvm-clock: using sched offset of 7435730513 cycles
Apr 13 23:03:50.483921 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 23:03:50.483929 kernel: tsc: Detected 2793.438 MHz processor
Apr 13 23:03:50.483937 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 23:03:50.483945 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 23:03:50.483953 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 13 23:03:50.483964 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 23:03:50.483973 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 23:03:50.483981 kernel: Using GB pages for direct mapping
Apr 13 23:03:50.483989 kernel: ACPI: Early table checksum verification disabled
Apr 13 23:03:50.483996 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 13 23:03:50.484004 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484012 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484020 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484046 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 13 23:03:50.484067 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484076 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484083 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484091 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 23:03:50.484099 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 13 23:03:50.484107 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 13 23:03:50.484115 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 13 23:03:50.484128 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 13 23:03:50.484137 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 13 23:03:50.484145 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 13 23:03:50.484153 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 13 23:03:50.484161 kernel: No NUMA configuration found
Apr 13 23:03:50.484170 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 13 23:03:50.484178 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 13 23:03:50.484188 kernel: Zone ranges:
Apr 13 23:03:50.484197 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 23:03:50.484205 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 13 23:03:50.484214 kernel: Normal empty
Apr 13 23:03:50.484222 kernel: Movable zone start for each node
Apr 13 23:03:50.484230 kernel: Early memory node ranges
Apr 13 23:03:50.484239 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 23:03:50.484247 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 13 23:03:50.484256 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 13 23:03:50.484264 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 23:03:50.484275 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 23:03:50.484284 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 13 23:03:50.484292 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 23:03:50.484300 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 23:03:50.484309 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 23:03:50.484318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 23:03:50.484326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 23:03:50.484334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 23:03:50.484343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 23:03:50.484353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 23:03:50.484361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 23:03:50.484370 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 23:03:50.484378 kernel: TSC deadline timer available
Apr 13 23:03:50.484387 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 13 23:03:50.484395 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 23:03:50.484403 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 23:03:50.484411 kernel: kvm-guest: setup PV sched yield
Apr 13 23:03:50.484420 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 23:03:50.484430 kernel: Booting paravirtualized kernel on KVM
Apr 13 23:03:50.484438 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 23:03:50.484447 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 13 23:03:50.484455 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 13 23:03:50.484463 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 13 23:03:50.484470 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 13 23:03:50.484478 kernel: kvm-guest: PV spinlocks enabled
Apr 13 23:03:50.484486 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 23:03:50.484496 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:03:50.484507 kernel: random: crng init done
Apr 13 23:03:50.484516 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 23:03:50.484524 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 23:03:50.484533 kernel: Fallback order for Node 0: 0
Apr 13 23:03:50.484540 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 13 23:03:50.484548 kernel: Policy zone: DMA32
Apr 13 23:03:50.484556 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 23:03:50.484564 kernel: Memory: 2433644K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137904K reserved, 0K cma-reserved)
Apr 13 23:03:50.484574 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 13 23:03:50.484581 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 23:03:50.484589 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 23:03:50.484597 kernel: Dynamic Preempt: voluntary
Apr 13 23:03:50.484605 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 23:03:50.484613 kernel: rcu: RCU event tracing is enabled.
Apr 13 23:03:50.484621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 13 23:03:50.484629 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 23:03:50.484637 kernel: Rude variant of Tasks RCU enabled.
Apr 13 23:03:50.484647 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 23:03:50.484656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 23:03:50.484664 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 13 23:03:50.484673 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 13 23:03:50.484681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 23:03:50.484689 kernel: Console: colour VGA+ 80x25
Apr 13 23:03:50.484697 kernel: printk: console [ttyS0] enabled
Apr 13 23:03:50.484705 kernel: ACPI: Core revision 20230628
Apr 13 23:03:50.484713 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 23:03:50.484723 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 23:03:50.484731 kernel: x2apic enabled
Apr 13 23:03:50.484739 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 23:03:50.484747 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 23:03:50.484755 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 23:03:50.484763 kernel: kvm-guest: setup PV IPIs
Apr 13 23:03:50.484770 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 23:03:50.484778 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 13 23:03:50.484796 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 13 23:03:50.484805 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 23:03:50.484814 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 13 23:03:50.484823 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 13 23:03:50.484834 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 23:03:50.484842 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 23:03:50.484851 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 23:03:50.484860 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 13 23:03:50.485083 kernel: RETBleed: Vulnerable
Apr 13 23:03:50.485094 kernel: Speculative Store Bypass: Vulnerable
Apr 13 23:03:50.485102 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 23:03:50.485112 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 13 23:03:50.485121 kernel: active return thunk: its_return_thunk
Apr 13 23:03:50.485130 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 23:03:50.485139 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 23:03:50.485148 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 23:03:50.485157 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 23:03:50.485169 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 23:03:50.485179 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 23:03:50.485190 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 23:03:50.485199 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 23:03:50.485209 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 13 23:03:50.485218 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 13 23:03:50.485228 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 13 23:03:50.485238 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 13 23:03:50.485247 kernel: Freeing SMP alternatives memory: 32K
Apr 13 23:03:50.485258 kernel: pid_max: default: 32768 minimum: 301
Apr 13 23:03:50.485267 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 23:03:50.485276 kernel: landlock: Up and running.
Apr 13 23:03:50.485285 kernel: SELinux: Initializing.
Apr 13 23:03:50.485293 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 23:03:50.485302 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 23:03:50.485311 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 13 23:03:50.485320 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 13 23:03:50.485329 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 13 23:03:50.485339 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 13 23:03:50.485348 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 13 23:03:50.485357 kernel: signal: max sigframe size: 3632
Apr 13 23:03:50.485365 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 23:03:50.485374 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 23:03:50.485383 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 23:03:50.485393 kernel: smp: Bringing up secondary CPUs ...
Apr 13 23:03:50.485402 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 23:03:50.485411 kernel: .... node #0, CPUs: #1 #2 #3
Apr 13 23:03:50.485422 kernel: smp: Brought up 1 node, 4 CPUs
Apr 13 23:03:50.485430 kernel: smpboot: Max logical packages: 1
Apr 13 23:03:50.485439 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 13 23:03:50.485447 kernel: devtmpfs: initialized
Apr 13 23:03:50.485455 kernel: x86/mm: Memory block size: 128MB
Apr 13 23:03:50.485463 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 23:03:50.485472 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 13 23:03:50.485480 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 23:03:50.485488 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 23:03:50.485498 kernel: audit: initializing netlink subsys (disabled)
Apr 13 23:03:50.485507 kernel: audit: type=2000 audit(1776121427.158:1): state=initialized audit_enabled=0 res=1
Apr 13 23:03:50.485515 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 23:03:50.485524 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 23:03:50.485532 kernel: cpuidle: using governor menu
Apr 13 23:03:50.485541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 23:03:50.485551 kernel: dca service started, version 1.12.1
Apr 13 23:03:50.485560 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 23:03:50.485569 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 23:03:50.485580 kernel: PCI: Using configuration type 1 for base access
Apr 13 23:03:50.485589 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 23:03:50.485598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 23:03:50.485607 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 23:03:50.485615 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 23:03:50.485624 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 23:03:50.485632 kernel: ACPI: Added _OSI(Module Device)
Apr 13 23:03:50.485640 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 23:03:50.485648 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 23:03:50.485658 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 23:03:50.485667 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 23:03:50.485676 kernel: ACPI: Interpreter enabled
Apr 13 23:03:50.485685 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 23:03:50.485693 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 23:03:50.485701 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 23:03:50.485709 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 23:03:50.485717 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 23:03:50.485725 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 23:03:50.485963 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 23:03:50.486116 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 23:03:50.486204 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 23:03:50.486215 kernel: PCI host bridge to bus 0000:00
Apr 13 23:03:50.486298 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 23:03:50.486365 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 23:03:50.486438 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 23:03:50.486503 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 13 23:03:50.486567 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 23:03:50.486634 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 13 23:03:50.486701 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 23:03:50.486795 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 23:03:50.487131 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 23:03:50.487225 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 13 23:03:50.487303 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 13 23:03:50.487374 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 13 23:03:50.487452 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 23:03:50.487543 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 13 23:03:50.487618 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 13 23:03:50.487703 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 13 23:03:50.487783 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 13 23:03:50.488080 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 13 23:03:50.488178 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 13 23:03:50.488258 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 13 23:03:50.488337 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 13 23:03:50.488423 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 13 23:03:50.488555 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 13 23:03:50.488635 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 13 23:03:50.488715 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 13 23:03:50.488791 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 13 23:03:50.489045 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 23:03:50.489129 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 23:03:50.489212 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 23:03:50.489293 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 13 23:03:50.489365 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 13 23:03:50.489448 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 23:03:50.489520 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 23:03:50.489531 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 23:03:50.489540 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 23:03:50.489548 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 23:03:50.489557 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 23:03:50.489568 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 23:03:50.489576 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 23:03:50.489585 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 23:03:50.489593 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 23:03:50.489601 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 23:03:50.489610 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 23:03:50.489618 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 23:03:50.489626 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 23:03:50.489634 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 23:03:50.489644 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 23:03:50.489653 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 23:03:50.489662 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 23:03:50.489671 kernel: iommu: Default domain type: Translated
Apr 13 23:03:50.489680 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 23:03:50.489690 kernel: PCI: Using ACPI for IRQ routing
Apr 13 23:03:50.489698 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 23:03:50.489707 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 13 23:03:50.489715 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 13 23:03:50.489798 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 23:03:50.489905 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 23:03:50.489979 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 23:03:50.489990 kernel: vgaarb: loaded
Apr 13 23:03:50.489998 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 23:03:50.490007 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 23:03:50.490015 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 23:03:50.490584 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 23:03:50.490646 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 23:03:50.490655 kernel: pnp: PnP ACPI init
Apr 13 23:03:50.490766 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 23:03:50.490779 kernel: pnp: PnP ACPI: found 6 devices
Apr 13 23:03:50.490789 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 23:03:50.490798 kernel: NET: Registered PF_INET protocol family
Apr 13 23:03:50.490807 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 23:03:50.490815 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 23:03:50.491315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 23:03:50.491332 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 23:03:50.491341 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 23:03:50.491350 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 23:03:50.491359 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 23:03:50.491369 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 23:03:50.491378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 23:03:50.491388 kernel: NET: Registered PF_XDP protocol family
Apr 13 23:03:50.491491 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 23:03:50.491574 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 23:03:50.491650 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 23:03:50.491720 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 13 23:03:50.491790 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 23:03:50.491853 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 13 23:03:50.491896 kernel: PCI: CLS 0 bytes, default 64
Apr 13 23:03:50.491905 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 23:03:50.491915 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 13 23:03:50.491927 kernel: Initialise system trusted keyrings
Apr 13 23:03:50.491935 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 23:03:50.491944 kernel: Key type asymmetric registered
Apr 13 23:03:50.491952 kernel: Asymmetric key parser 'x509' registered
Apr 13 23:03:50.491960 kernel: hrtimer: interrupt took 12132566 ns
Apr 13 23:03:50.491969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 23:03:50.491978 kernel: io scheduler mq-deadline registered
Apr 13 23:03:50.491986 kernel: io scheduler kyber registered
Apr 13 23:03:50.491995 kernel: io scheduler bfq registered
Apr 13 23:03:50.492005 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 23:03:50.492014 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 23:03:50.492042 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 13 23:03:50.492051 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 13 23:03:50.492060 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 23:03:50.492069 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 23:03:50.492078 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 23:03:50.492086 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 23:03:50.492095 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 23:03:50.492578 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 13 23:03:50.492599 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 23:03:50.492674 kernel: rtc_cmos 00:04: registered as rtc0
Apr 13 23:03:50.492747 kernel: rtc_cmos 00:04: setting system clock to 2026-04-13T23:03:49 UTC (1776121429)
Apr 13 23:03:50.492817 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 13 23:03:50.492829 kernel: intel_pstate: CPU model not supported
Apr 13 23:03:50.492839 kernel: NET: Registered PF_INET6 protocol family
Apr 13 23:03:50.492848 kernel: Segment Routing with IPv6
Apr 13 23:03:50.492899 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 23:03:50.492909 kernel: NET: Registered PF_PACKET protocol family
Apr 13 23:03:50.492918 kernel: Key type dns_resolver registered
Apr 13 23:03:50.492926 kernel: IPI shorthand broadcast: enabled
Apr 13 23:03:50.492934 kernel: sched_clock: Marking stable (1639017069, 543433002)->(2642148711, -459698640)
Apr 13 23:03:50.492943 kernel: registered taskstats version 1
Apr 13 23:03:50.492951 kernel: Loading compiled-in X.509 certificates
Apr 13 23:03:50.492959 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 23:03:50.492968 kernel: Key type .fscrypt registered
Apr 13 23:03:50.492978 kernel: Key type fscrypt-provisioning registered
Apr 13 23:03:50.492986 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 23:03:50.492995 kernel: ima: Allocated hash algorithm: sha1
Apr 13 23:03:50.493004 kernel: ima: No architecture policies found
Apr 13 23:03:50.493013 kernel: clk: Disabling unused clocks
Apr 13 23:03:50.493042 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 23:03:50.493051 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 23:03:50.493060 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 23:03:50.493069 kernel: Run /init as init process
Apr 13 23:03:50.493080 kernel: with arguments:
Apr 13 23:03:50.493090 kernel: /init
Apr 13 23:03:50.493098 kernel: with environment:
Apr 13 23:03:50.493107 kernel: HOME=/
Apr 13 23:03:50.493116 kernel: TERM=linux
Apr 13 23:03:50.493129 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:03:50.493141 systemd[1]: Detected virtualization kvm.
Apr 13 23:03:50.493153 systemd[1]: Detected architecture x86-64.
Apr 13 23:03:50.493164 systemd[1]: Running in initrd.
Apr 13 23:03:50.493174 systemd[1]: No hostname configured, using default hostname.
Apr 13 23:03:50.493183 systemd[1]: Hostname set to .
Apr 13 23:03:50.493193 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:03:50.493203 systemd[1]: Queued start job for default target initrd.target.
Apr 13 23:03:50.493213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:03:50.493222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 23:03:50.493233 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 23:03:50.493245 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 23:03:50.493257 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 23:03:50.493278 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 23:03:50.493293 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 23:03:50.493304 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 23:03:50.493316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 23:03:50.493327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 23:03:50.493338 systemd[1]: Reached target paths.target - Path Units. Apr 13 23:03:50.493349 systemd[1]: Reached target slices.target - Slice Units. Apr 13 23:03:50.493359 systemd[1]: Reached target swap.target - Swaps. Apr 13 23:03:50.493369 systemd[1]: Reached target timers.target - Timer Units. Apr 13 23:03:50.493379 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 23:03:50.493389 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 23:03:50.493401 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 23:03:50.493411 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 23:03:50.493422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 13 23:03:50.493433 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:03:50.493443 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:03:50.493453 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:03:50.493463 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 23:03:50.493473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:03:50.493483 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 23:03:50.493495 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 23:03:50.493504 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:03:50.493514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:03:50.493524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:03:50.493557 systemd-journald[193]: Collecting audit messages is disabled.
Apr 13 23:03:50.493584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 23:03:50.493594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:03:50.493604 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 23:03:50.493619 systemd-journald[193]: Journal started
Apr 13 23:03:50.493643 systemd-journald[193]: Runtime Journal (/run/log/journal/cb8a9dffd941451eae5a291802873d9e) is 6.0M, max 48.4M, 42.3M free.
Apr 13 23:03:50.499819 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:03:50.503353 systemd-modules-load[194]: Inserted module 'overlay'
Apr 13 23:03:50.673336 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 23:03:50.673380 kernel: Bridge firewalling registered
Apr 13 23:03:50.513548 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:03:50.554392 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 13 23:03:50.688671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:03:50.690018 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:03:50.699622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:03:50.699933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:03:50.707639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:03:50.713088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:03:50.717698 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:03:50.739654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:03:50.744822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:03:50.748563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:03:50.753342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:03:50.779126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 23:03:50.785220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:03:50.809737 dracut-cmdline[229]: dracut-dracut-053
Apr 13 23:03:50.814138 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:03:50.877499 systemd-resolved[232]: Positive Trust Anchors:
Apr 13 23:03:50.878421 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:03:50.878466 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:03:50.882779 systemd-resolved[232]: Defaulting to hostname 'linux'.
Apr 13 23:03:50.884999 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:03:50.888577 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:03:51.019730 kernel: SCSI subsystem initialized
Apr 13 23:03:51.069357 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 23:03:51.095124 kernel: iscsi: registered transport (tcp)
Apr 13 23:03:51.123214 kernel: iscsi: registered transport (qla4xxx)
Apr 13 23:03:51.123389 kernel: QLogic iSCSI HBA Driver
Apr 13 23:03:51.218771 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
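[Editor's note] The dracut entry above echoes the kernel command line as a flat string of space-separated `key=value` tokens, with bare flags and even duplicated keys (`rootflags=rw` appears twice). A minimal parsing sketch, under the simplifying assumption that the last occurrence of a duplicated key wins and that no values contain embedded spaces:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into a dict; bare flags map to True.

    Simplification: quoted values with spaces are not handled, and a
    repeated key is overwritten by its last occurrence.
    """
    params = {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            params[token] = True  # e.g. "quiet", "splash"
    return params

# Abbreviated version of the command line logged above.
params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
    "flatcar.first_boot=detected"
)
params["root"]     # -> 'LABEL=ROOT'
params["console"]  # -> 'ttyS0,115200'
```

Real consumers (dracut, systemd) apply per-parameter semantics for duplicates, so this is only a rough model of what the hook sees.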
Apr 13 23:03:51.272621 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 23:03:51.340270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 23:03:51.340422 kernel: device-mapper: uevent: version 1.0.3
Apr 13 23:03:51.340438 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 23:03:51.483210 kernel: raid6: avx512x4 gen() 29273 MB/s
Apr 13 23:03:51.501134 kernel: raid6: avx512x2 gen() 28437 MB/s
Apr 13 23:03:51.520257 kernel: raid6: avx512x1 gen() 24380 MB/s
Apr 13 23:03:51.537837 kernel: raid6: avx2x4 gen() 9799 MB/s
Apr 13 23:03:51.557337 kernel: raid6: avx2x2 gen() 20616 MB/s
Apr 13 23:03:51.574484 kernel: raid6: avx2x1 gen() 8793 MB/s
Apr 13 23:03:51.574659 kernel: raid6: using algorithm avx512x4 gen() 29273 MB/s
Apr 13 23:03:51.596662 kernel: raid6: .... xor() 7786 MB/s, rmw enabled
Apr 13 23:03:51.596858 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 23:03:51.693274 kernel: xor: automatically using best checksumming function avx
Apr 13 23:03:51.964524 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 23:03:52.023590 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:03:52.078543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:03:52.119631 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Apr 13 23:03:52.131582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:03:52.147300 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 23:03:52.192309 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation
Apr 13 23:03:52.276652 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:03:52.294947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
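[Editor's note] The raid6 entries above are the kernel benchmarking every available parity-generation routine and keeping the fastest one; the recovery path is timed separately, which is why it settles on avx512x2 while generation uses avx512x4. The selection itself is just an argmax over the measured throughputs:

```python
# Throughput figures copied from the raid6 benchmark lines above (MB/s).
gen_results = {
    "avx512x4": 29273,
    "avx512x2": 28437,
    "avx512x1": 24380,
    "avx2x4": 9799,
    "avx2x2": 20616,
    "avx2x1": 8793,
}

# The kernel keeps whichever gen() routine benchmarked fastest.
best = max(gen_results, key=gen_results.get)
assert best == "avx512x4"  # matches "raid6: using algorithm avx512x4 gen() 29273 MB/s"
```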
Apr 13 23:03:52.360505 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:03:52.389678 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 23:03:52.406614 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:03:52.412745 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:03:52.418832 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:03:52.462704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:03:52.471918 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 13 23:03:52.477262 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 23:03:52.481921 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 23:03:52.495558 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:03:52.509129 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 13 23:03:52.509313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 23:03:52.509337 kernel: GPT:9289727 != 19775487
Apr 13 23:03:52.509349 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 23:03:52.509359 kernel: GPT:9289727 != 19775487
Apr 13 23:03:52.509370 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 23:03:52.509381 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:03:52.502660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:03:52.502731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:03:52.516201 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:03:52.517087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:03:52.517172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
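[Editor's note] The GPT warnings above are the usual symptom of a disk image having been written to a larger virtual disk: the backup GPT header still sits where the old end of the image was instead of in the last sector of the grown disk. The expected location follows directly from the geometry reported by virtio_blk:

```python
# Figures taken from the virtio_blk and GPT kernel lines above.
disk_blocks = 19775488           # "[vda] 19775488 512-byte logical blocks"
backup_header_found = 9289727    # where the copied image left the backup header

# GPT keeps the backup header in the very last addressable sector (LBA size-1).
backup_header_expected = disk_blocks - 1

assert backup_header_expected == 19775487  # matches "GPT:9289727 != 19775487"
assert backup_header_found != backup_header_expected
```

On Flatcar this warning is typically harmless: the first-boot machinery relocates the backup header and grows the ROOT partition itself, so the kernel's "Use GNU Parted" hint can usually be ignored.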
Apr 13 23:03:52.531271 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:03:52.550596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:03:52.564991 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 23:03:52.567901 kernel: AES CTR mode by8 optimization enabled
Apr 13 23:03:52.567959 kernel: libata version 3.00 loaded.
Apr 13 23:03:52.579247 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (476)
Apr 13 23:03:52.581954 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 23:03:52.582208 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 23:03:52.582225 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463)
Apr 13 23:03:52.585014 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 23:03:52.585274 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 23:03:52.588786 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 23:03:52.771788 kernel: scsi host0: ahci
Apr 13 23:03:52.774184 kernel: scsi host1: ahci
Apr 13 23:03:52.774314 kernel: scsi host2: ahci
Apr 13 23:03:52.774433 kernel: scsi host3: ahci
Apr 13 23:03:52.774587 kernel: scsi host4: ahci
Apr 13 23:03:52.774681 kernel: scsi host5: ahci
Apr 13 23:03:52.774774 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 13 23:03:52.774786 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 13 23:03:52.774798 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 13 23:03:52.774809 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 13 23:03:52.774825 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 13 23:03:52.774837 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 13 23:03:52.778429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:03:52.795754 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 23:03:52.819913 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 23:03:52.856535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 23:03:52.871982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:03:52.884242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 23:03:52.889748 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:03:52.904262 disk-uuid[560]: Primary Header is updated.
Apr 13 23:03:52.904262 disk-uuid[560]: Secondary Entries is updated.
Apr 13 23:03:52.904262 disk-uuid[560]: Secondary Header is updated.
Apr 13 23:03:52.907904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:03:52.924140 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 13 23:03:52.924249 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 13 23:03:52.924262 kernel: ata3.00: applying bridge limits
Apr 13 23:03:52.924272 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 23:03:52.920988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:03:52.945363 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 23:03:52.948013 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 23:03:52.948243 kernel: ata3.00: configured for UDMA/100
Apr 13 23:03:52.949985 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 23:03:52.951937 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 23:03:52.963203 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Apr 13 23:03:53.054210 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 13 23:03:53.054515 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 23:03:53.073909 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 13 23:03:53.984421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:03:53.985531 disk-uuid[565]: The operation has completed successfully.
Apr 13 23:03:54.043441 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 23:03:54.043950 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 23:03:54.115301 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 23:03:54.172307 sh[599]: Success
Apr 13 23:03:54.205732 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 23:03:54.257037 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 23:03:54.276621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 23:03:54.285387 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 23:03:54.375493 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 23:03:54.375654 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:03:54.375671 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 23:03:54.377488 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 23:03:54.378967 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 23:03:54.418712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 23:03:54.421743 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 23:03:54.443629 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 23:03:54.449619 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 23:03:54.487256 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:03:54.487736 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:03:54.487753 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:03:54.498913 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:03:54.512547 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 23:03:54.516719 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:03:54.581414 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 23:03:54.597261 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 23:03:54.844931 ignition[700]: Ignition 2.19.0
Apr 13 23:03:54.844951 ignition[700]: Stage: fetch-offline
Apr 13 23:03:54.845737 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:03:54.845105 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:03:54.845116 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:03:54.845318 ignition[700]: parsed url from cmdline: ""
Apr 13 23:03:54.845321 ignition[700]: no config URL provided
Apr 13 23:03:54.845327 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 23:03:54.862439 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:03:54.845335 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Apr 13 23:03:54.856373 ignition[700]: op(1): [started] loading QEMU firmware config module
Apr 13 23:03:54.856454 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 13 23:03:54.899648 ignition[700]: op(1): [finished] loading QEMU firmware config module
Apr 13 23:03:55.021739 systemd-networkd[786]: lo: Link UP
Apr 13 23:03:55.021761 systemd-networkd[786]: lo: Gained carrier
Apr 13 23:03:55.025487 systemd-networkd[786]: Enumeration completed
Apr 13 23:03:55.026458 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:03:55.027916 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:03:55.027964 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:03:55.032854 systemd-networkd[786]: eth0: Link UP
Apr 13 23:03:55.033119 systemd-networkd[786]: eth0: Gained carrier
Apr 13 23:03:55.033565 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:03:55.039180 systemd[1]: Reached target network.target - Network.
Apr 13 23:03:55.097941 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:03:55.214951 ignition[700]: parsing config with SHA512: fc2e28e265baba13045d6d385759719e8064b2078555953036ceba9f23cfd82a871881ccc8c24024877de569da7bebb4a140381115457c27f8bb3fed023ce00d
Apr 13 23:03:55.234583 unknown[700]: fetched base config from "system"
Apr 13 23:03:55.234639 unknown[700]: fetched user config from "qemu"
Apr 13 23:03:55.240170 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.132
Apr 13 23:03:55.243819 ignition[700]: fetch-offline: fetch-offline passed
Apr 13 23:03:55.240184 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Apr 13 23:03:55.243964 ignition[700]: Ignition finished successfully
Apr 13 23:03:55.248602 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:03:55.252514 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 13 23:03:55.273291 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 23:03:55.436404 ignition[792]: Ignition 2.19.0
Apr 13 23:03:55.436430 ignition[792]: Stage: kargs
Apr 13 23:03:55.436727 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:03:55.436738 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:03:55.439165 ignition[792]: kargs: kargs passed
Apr 13 23:03:55.439557 ignition[792]: Ignition finished successfully
Apr 13 23:03:55.450838 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 23:03:55.481487 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
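[Editor's note] The eth0 entries above show systemd-networkd matching the interface against the stock catch-all unit and then acquiring 10.0.0.132/16 over DHCP. The shipped `/usr/lib/systemd/network/zz-default.network` is, to a first approximation, a match-everything DHCP unit along these lines (contents paraphrased from its behavior in this log, not copied from the image):

```ini
[Match]
# Claims any interface not matched by a more specific, earlier-sorted .network unit.
Name=*

[Network]
DHCP=yes
```

The "potentially unpredictable interface name" warning only notes that matching on `eth0`-style kernel names can be unstable; a catch-all `Name=*` match is unaffected by it.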
Apr 13 23:03:55.670307 ignition[799]: Ignition 2.19.0
Apr 13 23:03:55.671581 ignition[799]: Stage: disks
Apr 13 23:03:55.673512 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:03:55.674646 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:03:55.681153 ignition[799]: disks: disks passed
Apr 13 23:03:55.681779 ignition[799]: Ignition finished successfully
Apr 13 23:03:55.687379 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 23:03:55.693779 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 23:03:55.700274 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:03:55.706731 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:03:55.708028 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:03:55.708994 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:03:55.768629 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 23:03:55.802103 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 23:03:55.812480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 23:03:55.885544 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 23:03:56.163539 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 23:03:56.168081 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:03:56.175646 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 23:03:56.186405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:03:56.190811 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 23:03:56.192187 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 23:03:56.192230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 23:03:56.192274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:03:56.213684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 23:03:56.219721 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 23:03:56.242235 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Apr 13 23:03:56.249995 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:03:56.250306 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:03:56.250320 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:03:56.289086 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:03:56.292097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:03:56.411788 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 23:03:56.455613 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 13 23:03:56.471967 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 23:03:56.487358 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 23:03:56.935658 systemd-networkd[786]: eth0: Gained IPv6LL
Apr 13 23:03:57.002990 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 23:03:57.031331 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 23:03:57.040096 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 23:03:57.061813 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 23:03:57.065037 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:03:57.342473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 23:03:57.443559 ignition[932]: INFO : Ignition 2.19.0
Apr 13 23:03:57.443559 ignition[932]: INFO : Stage: mount
Apr 13 23:03:57.447800 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:03:57.447800 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:03:57.447800 ignition[932]: INFO : mount: mount passed
Apr 13 23:03:57.453735 ignition[932]: INFO : Ignition finished successfully
Apr 13 23:03:57.453815 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 23:03:57.476481 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 23:03:57.592540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:03:57.616795 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Apr 13 23:03:57.620645 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:03:57.620787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:03:57.620800 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:03:57.630261 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:03:57.638937 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:03:57.806632 ignition[963]: INFO : Ignition 2.19.0
Apr 13 23:03:57.809421 ignition[963]: INFO : Stage: files
Apr 13 23:03:57.812790 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:03:57.812790 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:03:57.817402 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 23:03:57.821277 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 23:03:57.825405 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 23:03:57.835022 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 23:03:57.843089 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 23:03:57.849439 unknown[963]: wrote ssh authorized keys file for user: core
Apr 13 23:03:57.853374 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 23:03:57.862190 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:03:57.867176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 23:03:58.050423 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 23:03:58.407375 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:03:58.407375 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 23:03:58.419432 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 23:03:58.422956 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:03:58.426294 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:03:58.426294 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:03:58.435552 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:03:58.492969 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 13 23:03:58.845685 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 23:04:05.774455 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 23:04:05.780636 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 23:04:05.798323 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:04:05.971712 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:04:06.043123 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:04:06.047730 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:04:06.047730 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 23:04:06.047730 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 23:04:06.060615 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:04:06.060615 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:04:06.060615 ignition[963]: INFO : files: files passed
Apr 13 23:04:06.060615 ignition[963]: INFO : Ignition finished successfully
Apr 13 23:04:06.053475 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 23:04:06.076233 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 23:04:06.083943 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 23:04:06.092292 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 23:04:06.092444 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 23:04:06.138011 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 13 23:04:06.147986 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:04:06.147986 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:04:06.155665 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 23:04:06.165134 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:04:06.170462 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
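[Editor's note] The files stage above (user "core" with SSH keys, /etc/flatcar/update.conf, an enabled prepare-helm.service, a disabled coreos-metadata.service, downloaded Helm and Kubernetes sysext payloads) is the kind of run produced by a provisioning config fetched over QEMU's fw_cfg. A hypothetical Butane sketch that would transpile to Ignition JSON of roughly this shape; all values, including the placeholder key, are illustrative and not recovered from this log:

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder core@example   # illustrative key
storage:
  files:
    - path: /etc/flatcar/update.conf
      contents:
        inline: |
          REBOOT_STRATEGY=off      # illustrative contents
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
    - name: coreos-metadata.service
      enabled: false
```

Running `butane config.bu > config.ign` would produce the Ignition JSON whose SHA512 the fetch-offline stage logged earlier.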
Apr 13 23:04:06.189672 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 23:04:06.398841 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 23:04:06.400578 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 23:04:06.415245 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 23:04:06.421595 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 23:04:06.427379 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 23:04:06.442524 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 23:04:06.609263 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:04:06.700781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 23:04:06.803846 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:04:06.808406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:04:06.819336 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 23:04:06.821720 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 23:04:06.881697 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 23:04:06.897691 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 23:04:06.906477 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 23:04:06.909649 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 23:04:06.916237 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:04:06.932627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 23:04:06.934897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 23:04:06.942447 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:04:06.947763 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 23:04:06.954045 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 23:04:06.957576 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 23:04:06.962626 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 23:04:06.962846 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:04:06.978366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:04:06.990952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:04:07.000711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 23:04:07.003796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:04:07.015716 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 23:04:07.018739 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:04:07.083799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 23:04:07.089984 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:04:07.103613 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 23:04:07.110081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 23:04:07.116413 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:04:07.120930 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 23:04:07.128772 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 23:04:07.132710 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 23:04:07.132809 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:04:07.136000 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 23:04:07.136090 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:04:07.138083 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 23:04:07.138235 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 23:04:07.146142 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 23:04:07.146308 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 23:04:07.183015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 23:04:07.217081 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 23:04:07.223764 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 23:04:07.251783 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:04:07.260940 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 23:04:07.265353 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:04:07.279431 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 23:04:07.279526 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 23:04:07.296379 ignition[1016]: INFO : Ignition 2.19.0
Apr 13 23:04:07.301476 ignition[1016]: INFO : Stage: umount
Apr 13 23:04:07.303900 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:04:07.303900 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:04:07.308330 ignition[1016]: INFO : umount: umount passed
Apr 13 23:04:07.308330 ignition[1016]: INFO : Ignition finished successfully
Apr 13 23:04:07.313620 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 23:04:07.313741 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 23:04:07.324413 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 23:04:07.326341 systemd[1]: Stopped target network.target - Network.
Apr 13 23:04:07.333189 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 23:04:07.336487 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 23:04:07.339742 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 23:04:07.339808 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 23:04:07.353464 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 23:04:07.355493 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 23:04:07.364763 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 23:04:07.365235 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 23:04:07.371327 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 23:04:07.374307 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 23:04:07.389654 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 23:04:07.390556 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 23:04:07.393332 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 23:04:07.393425 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 23:04:07.411191 systemd-networkd[786]: eth0: DHCPv6 lease lost
Apr 13 23:04:07.421839 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 23:04:07.423731 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 23:04:07.465983 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 23:04:07.466131 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:04:07.473828 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 23:04:07.474277 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 23:04:07.492390 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 23:04:07.498553 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 23:04:07.500051 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:04:07.510439 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 23:04:07.516592 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:04:07.522588 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 23:04:07.592077 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:04:07.606810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 23:04:07.609590 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:04:07.628076 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:04:07.648760 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 23:04:07.652746 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:04:07.664759 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 23:04:07.665895 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 23:04:07.671945 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 23:04:07.672633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:04:07.688859 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 23:04:07.690765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:04:07.697634 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 23:04:07.697822 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:04:07.705734 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 23:04:07.705933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:04:07.718706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:04:07.723280 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:04:07.807515 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 23:04:07.816333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 23:04:07.816444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:04:07.827679 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 23:04:07.827958 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:04:07.836422 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 23:04:07.836489 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:04:07.857755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:04:07.858545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:04:07.873806 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 23:04:07.873959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 23:04:07.900207 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 23:04:07.917589 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 23:04:08.063527 systemd[1]: Switching root.
Apr 13 23:04:08.241730 systemd-journald[193]: Journal stopped
Apr 13 23:04:22.335670 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 13 23:04:22.335765 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 23:04:22.335787 kernel: SELinux: policy capability open_perms=1
Apr 13 23:04:22.335801 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 23:04:22.335816 kernel: SELinux: policy capability always_check_network=0
Apr 13 23:04:22.335829 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 23:04:22.335843 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 23:04:22.335857 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 23:04:22.335909 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 23:04:22.335924 kernel: audit: type=1403 audit(1776121448.644:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 23:04:22.335944 systemd[1]: Successfully loaded SELinux policy in 146.191ms.
Apr 13 23:04:22.335972 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 204.496ms.
Apr 13 23:04:22.335988 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:04:22.336003 systemd[1]: Detected virtualization kvm.
Apr 13 23:04:22.336017 systemd[1]: Detected architecture x86-64.
Apr 13 23:04:22.336032 systemd[1]: Detected first boot.
Apr 13 23:04:22.336046 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:04:22.336061 zram_generator::config[1059]: No configuration found.
Apr 13 23:04:22.336109 systemd[1]: Populated /etc with preset unit settings.
Apr 13 23:04:22.336132 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 23:04:22.336147 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 23:04:22.342678 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 23:04:22.343079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 23:04:22.343104 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 23:04:22.343118 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 23:04:22.343131 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 23:04:22.343143 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 23:04:22.350513 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 23:04:22.351359 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 23:04:22.351394 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 23:04:22.351410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:04:22.351426 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:04:22.351439 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 23:04:22.351451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 23:04:22.351465 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 23:04:22.351478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:04:22.351497 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 23:04:22.351512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:04:22.351526 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 23:04:22.351646 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 23:04:22.351662 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:04:22.351677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 23:04:22.351690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:04:22.351704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:04:22.351724 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:04:22.351740 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:04:22.351755 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 23:04:22.351769 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 23:04:22.351783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:04:22.351796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:04:22.351810 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:04:22.351830 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 23:04:22.351844 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 23:04:22.351894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 23:04:22.351912 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 23:04:22.351929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:04:22.351943 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 23:04:22.351957 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 23:04:22.351970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 23:04:22.351984 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 23:04:22.352018 systemd[1]: Reached target machines.target - Containers.
Apr 13 23:04:22.352032 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 23:04:22.352049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:04:22.352064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:04:22.352077 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 23:04:22.352091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:04:22.352104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:04:22.352118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:04:22.352132 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 23:04:22.352146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:04:22.352181 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 23:04:22.352196 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 23:04:22.352211 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 23:04:22.352224 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 23:04:22.352238 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 23:04:22.352251 kernel: loop: module loaded
Apr 13 23:04:22.352265 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:04:22.352279 kernel: fuse: init (API version 7.39)
Apr 13 23:04:22.352400 systemd-journald[1143]: Collecting audit messages is disabled.
Apr 13 23:04:22.352435 systemd-journald[1143]: Journal started
Apr 13 23:04:22.352581 systemd-journald[1143]: Runtime Journal (/run/log/journal/cb8a9dffd941451eae5a291802873d9e) is 6.0M, max 48.4M, 42.3M free.
Apr 13 23:04:22.378400 kernel: ACPI: bus type drm_connector registered
Apr 13 23:04:19.574315 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 23:04:19.879260 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 13 23:04:19.885691 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 23:04:19.921825 systemd[1]: systemd-journald.service: Consumed 1.024s CPU time.
Apr 13 23:04:22.393939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:04:22.570719 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 23:04:22.680269 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 23:04:22.739815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:04:22.759251 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 23:04:22.763578 systemd[1]: Stopped verity-setup.service.
Apr 13 23:04:22.790234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:04:22.862144 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:04:22.882631 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 23:04:22.895942 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 23:04:22.902087 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 23:04:22.908646 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 23:04:22.924414 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 23:04:23.001584 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 23:04:23.012081 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 23:04:23.036579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:04:23.073609 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 23:04:23.075473 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 23:04:23.084058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:04:23.089198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:04:23.096297 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:04:23.098577 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:04:23.109048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:04:23.117629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:04:23.134721 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 23:04:23.139131 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 23:04:23.162647 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:04:23.166514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:04:23.173789 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:04:23.190146 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 23:04:23.234749 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 23:04:23.432373 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 23:04:23.479797 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 23:04:23.550834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 23:04:23.580636 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 23:04:23.584392 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:04:23.608720 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 23:04:23.714520 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 23:04:23.751713 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 23:04:23.762036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:04:23.796010 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 23:04:23.841324 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 23:04:23.854546 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:04:23.866612 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 23:04:23.881901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:04:23.916116 systemd-journald[1143]: Time spent on flushing to /var/log/journal/cb8a9dffd941451eae5a291802873d9e is 153.802ms for 952 entries.
Apr 13 23:04:23.916116 systemd-journald[1143]: System Journal (/var/log/journal/cb8a9dffd941451eae5a291802873d9e) is 8.0M, max 195.6M, 187.6M free.
Apr 13 23:04:24.351745 systemd-journald[1143]: Received client request to flush runtime journal.
Apr 13 23:04:24.358569 kernel: loop0: detected capacity change from 0 to 140768
Apr 13 23:04:23.928653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:04:23.948858 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 23:04:23.998849 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:04:24.129108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:04:24.150342 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 23:04:24.173414 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 23:04:24.190389 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 23:04:24.199090 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 23:04:24.224892 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 23:04:24.343641 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 23:04:24.381478 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 23:04:24.386269 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 23:04:24.404434 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Apr 13 23:04:24.404457 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Apr 13 23:04:24.410919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:04:24.610791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 23:04:24.561973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:04:24.654200 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 23:04:24.675373 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 23:04:24.688033 kernel: loop1: detected capacity change from 0 to 219192
Apr 13 23:04:25.718155 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 23:04:25.788383 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:04:25.838790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 23:04:25.842142 kernel: loop2: detected capacity change from 0 to 142488
Apr 13 23:04:25.856077 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 23:04:26.096079 kernel: loop3: detected capacity change from 0 to 140768
Apr 13 23:04:26.301391 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Apr 13 23:04:26.319117 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Apr 13 23:04:26.445721 kernel: loop4: detected capacity change from 0 to 219192
Apr 13 23:04:26.580597 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:04:26.747303 kernel: loop5: detected capacity change from 0 to 142488
Apr 13 23:04:27.044506 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 13 23:04:27.080065 (sd-merge)[1199]: Merged extensions into '/usr'.
Apr 13 23:04:27.112511 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 23:04:27.112695 systemd[1]: Reloading...
Apr 13 23:04:28.016147 zram_generator::config[1226]: No configuration found.
Apr 13 23:04:29.597047 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 23:04:30.049754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:04:30.856778 systemd[1]: Reloading finished in 3738 ms.
Apr 13 23:04:31.011181 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 23:04:31.123680 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 23:04:31.279964 systemd[1]: Starting ensure-sysext.service...
Apr 13 23:04:31.344183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:04:31.415666 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Apr 13 23:04:31.420301 systemd[1]: Reloading...
Apr 13 23:04:31.629515 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 23:04:31.631142 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 23:04:31.642097 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 23:04:31.655089 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 13 23:04:31.657806 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 13 23:04:31.722093 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:04:31.783762 systemd-tmpfiles[1264]: Skipping /boot
Apr 13 23:04:31.845774 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:04:31.853179 systemd-tmpfiles[1264]: Skipping /boot
Apr 13 23:04:32.335280 zram_generator::config[1291]: No configuration found.
Apr 13 23:04:33.289840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:04:34.596797 systemd[1]: Reloading finished in 3172 ms.
Apr 13 23:04:34.806803 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:04:34.868181 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 23:04:35.006170 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 23:04:35.140368 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 23:04:35.298269 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:04:35.419155 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 23:04:35.648688 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 23:04:35.694639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 23:04:35.748463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:04:35.753337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:04:35.780164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:04:35.897045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:04:35.897387 augenrules[1352]: No rules
Apr 13 23:04:35.964062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:04:35.968909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:04:35.969406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:04:35.979797 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 23:04:36.014644 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 23:04:36.049480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 23:04:36.089714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:04:36.094173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:04:36.120413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:04:36.193772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:04:36.206799 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:04:36.221954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:04:36.414457 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 23:04:37.358968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:04:37.359411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:04:37.421267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:04:37.431543 systemd-resolved[1339]: Positive Trust Anchors:
Apr 13 23:04:37.433179 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:04:37.433309 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:04:37.481390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:04:37.517006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:04:37.527982 systemd-resolved[1339]: Defaulting to hostname 'linux'.
Apr 13 23:04:37.552505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:04:37.558398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:04:37.565648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 23:04:37.593650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:04:37.763926 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:04:37.841323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:04:37.841523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:04:37.850990 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:04:37.851181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:04:37.888270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:04:37.890353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:04:37.910803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:04:37.918938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:04:38.022116 systemd[1]: Finished ensure-sysext.service.
Apr 13 23:04:38.047431 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:04:38.067442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:04:38.089281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:04:38.187517 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 23:04:38.656733 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 23:04:38.667483 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 23:04:56.735006 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 23:04:56.949816 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:04:57.017766 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 23:04:57.303484 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 23:04:57.380446 systemd-udevd[1382]: Using default interface naming scheme 'v255'.
Apr 13 23:04:58.909277 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:04:58.985303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:04:59.344766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 23:04:59.374834 systemd-networkd[1395]: lo: Link UP
Apr 13 23:04:59.374846 systemd-networkd[1395]: lo: Gained carrier
Apr 13 23:04:59.414617 systemd-networkd[1395]: Enumeration completed
Apr 13 23:04:59.417179 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:04:59.488506 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:04:59.488510 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:04:59.491219 systemd-networkd[1395]: eth0: Link UP
Apr 13 23:04:59.491225 systemd-networkd[1395]: eth0: Gained carrier
Apr 13 23:04:59.491238 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:04:59.493693 systemd[1]: Reached target network.target - Network.
Apr 13 23:04:59.548294 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:04:59.559622 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 23:04:59.568263 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
Apr 13 23:04:59.582478 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 13 23:04:59.583260 systemd-timesyncd[1379]: Initial clock synchronization to Mon 2026-04-13 23:04:59.670081 UTC.
Apr 13 23:04:59.738947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1400)
Apr 13 23:04:59.803440 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:05:00.202999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 23:05:00.218692 kernel: ACPI: button: Power Button [PWRF]
Apr 13 23:05:00.305938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:05:00.317838 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 23:05:00.317925 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 23:05:00.320086 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 23:05:00.321285 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 23:05:00.351964 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 23:05:00.586010 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 23:05:00.814216 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 23:05:00.816198 systemd-networkd[1395]: eth0: Gained IPv6LL
Apr 13 23:05:00.957958 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 23:05:01.277252 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 23:05:01.537798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:05:01.899303 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 23:05:02.034664 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 23:05:02.567383 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:05:03.759041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:05:03.878505 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 23:05:03.927101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:05:03.934425 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:05:03.973301 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 23:05:03.982348 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 23:05:03.988019 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 23:05:03.993803 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 23:05:04.016503 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 23:05:04.025630 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 23:05:04.029193 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:05:04.038802 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:05:04.172070 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 23:05:04.343556 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 23:05:04.421827 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 23:05:04.503136 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 23:05:04.557935 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 23:05:04.565219 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:05:04.574655 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:05:04.592731 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 23:05:04.594588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 23:05:04.693059 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 23:05:04.702841 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:05:04.708563 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 13 23:05:04.761439 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 23:05:04.831905 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 23:05:04.877232 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 23:05:04.881931 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 23:05:04.900664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:05:04.926159 jq[1436]: false
Apr 13 23:05:04.928962 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 23:05:04.970833 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found loop3
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found loop4
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found loop5
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found sr0
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda1
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda2
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda3
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found usr
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda4
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda6
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda7
Apr 13 23:05:04.976213 extend-filesystems[1437]: Found vda9
Apr 13 23:05:04.976213 extend-filesystems[1437]: Checking size of /dev/vda9
Apr 13 23:05:04.989308 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 23:05:05.031652 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 23:05:05.090318 extend-filesystems[1437]: Resized partition /dev/vda9
Apr 13 23:05:05.094406 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 23:05:05.103648 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Apr 13 23:05:05.144303 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 23:05:05.159089 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 23:05:05.165922 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 23:05:05.216780 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 13 23:05:05.182489 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 23:05:05.228238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 23:05:05.255859 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 23:05:05.338825 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 23:05:05.339104 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 23:05:05.366717 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 13 23:05:05.394592 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 23:05:05.394823 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 23:05:05.610906 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 13 23:05:05.610906 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 13 23:05:05.610906 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 13 23:05:05.610491 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 23:05:05.710515 jq[1458]: true
Apr 13 23:05:05.723926 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Apr 13 23:05:05.610723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 23:05:05.633357 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 23:05:05.633582 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 23:05:05.802454 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 23:05:05.919338 dbus-daemon[1435]: [system] SELinux support is enabled
Apr 13 23:05:05.971850 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 23:05:06.030424 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 13 23:05:06.038971 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 13 23:05:06.134434 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 23:05:06.134608 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 23:05:06.134638 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 23:05:06.141025 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 23:05:06.141145 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 23:05:06.180007 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 23:05:06.244208 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1488)
Apr 13 23:05:06.292405 jq[1473]: true
Apr 13 23:05:06.349903 update_engine[1457]: I20260413 23:05:06.227638 1457 main.cc:92] Flatcar Update Engine starting
Apr 13 23:05:06.349903 update_engine[1457]: I20260413 23:05:06.306380 1457 update_check_scheduler.cc:74] Next update check in 6m24s
Apr 13 23:05:06.416721 tar[1466]: linux-amd64/LICENSE
Apr 13 23:05:06.416721 tar[1466]: linux-amd64/helm
Apr 13 23:05:06.340911 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 23:05:06.434826 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 23:05:06.979259 bash[1517]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 23:05:06.983542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 23:05:07.077803 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 23:05:07.079580 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 23:05:07.098947 systemd-logind[1455]: New seat seat0.
Apr 13 23:05:07.109759 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 13 23:05:07.149191 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 23:05:07.534080 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 23:05:07.856609 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 23:05:07.986605 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 23:05:08.067921 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 23:05:08.346389 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 23:05:08.348844 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 23:05:09.104486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 23:05:09.606542 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 23:05:09.910048 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 23:05:10.088798 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 23:05:10.299066 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 23:05:10.524395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 23:05:12.155614 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:41366.service - OpenSSH per-connection server daemon (10.0.0.1:41366).
Apr 13 23:05:12.939397 containerd[1467]: time="2026-04-13T23:05:12.938501707Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 23:05:13.733418 containerd[1467]: time="2026-04-13T23:05:13.684811696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.081529 containerd[1467]: time="2026-04-13T23:05:14.044168531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:05:14.101620 containerd[1467]: time="2026-04-13T23:05:14.082268857Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 23:05:14.119958 containerd[1467]: time="2026-04-13T23:05:14.118076286Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 23:05:14.138552 containerd[1467]: time="2026-04-13T23:05:14.134732423Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 23:05:14.146492 containerd[1467]: time="2026-04-13T23:05:14.146281106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.182568 containerd[1467]: time="2026-04-13T23:05:14.179334421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:05:14.192133 containerd[1467]: time="2026-04-13T23:05:14.181852043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.200392 containerd[1467]: time="2026-04-13T23:05:14.199832877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:05:14.200392 containerd[1467]: time="2026-04-13T23:05:14.200034533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.200392 containerd[1467]: time="2026-04-13T23:05:14.200135650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:05:14.200392 containerd[1467]: time="2026-04-13T23:05:14.200152378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.209524 containerd[1467]: time="2026-04-13T23:05:14.207785891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.233389 containerd[1467]: time="2026-04-13T23:05:14.225772326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:05:14.233389 containerd[1467]: time="2026-04-13T23:05:14.229583574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:05:14.233389 containerd[1467]: time="2026-04-13T23:05:14.229753537Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 23:05:14.242475 containerd[1467]: time="2026-04-13T23:05:14.238825865Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 23:05:14.242475 containerd[1467]: time="2026-04-13T23:05:14.239277699Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 23:05:14.596088 containerd[1467]: time="2026-04-13T23:05:14.592200635Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 23:05:14.605270 containerd[1467]: time="2026-04-13T23:05:14.599532776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 23:05:14.613024 containerd[1467]: time="2026-04-13T23:05:14.605455584Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 23:05:14.613024 containerd[1467]: time="2026-04-13T23:05:14.605479061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 23:05:14.636035 containerd[1467]: time="2026-04-13T23:05:14.629535211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 23:05:14.693451 containerd[1467]: time="2026-04-13T23:05:14.692628657Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 23:05:14.912186 containerd[1467]: time="2026-04-13T23:05:14.900851305Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 23:05:14.963136 containerd[1467]: time="2026-04-13T23:05:14.958666241Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 23:05:15.036930 containerd[1467]: time="2026-04-13T23:05:15.034578630Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 23:05:15.050471 containerd[1467]: time="2026-04-13T23:05:15.043199788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 23:05:15.064180 containerd[1467]: time="2026-04-13T23:05:15.062235921Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.073165 containerd[1467]: time="2026-04-13T23:05:15.069975660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.073165 containerd[1467]: time="2026-04-13T23:05:15.072909474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.073165 containerd[1467]: time="2026-04-13T23:05:15.073120796Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.073213785Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.074440842Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.075732507Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.078211373Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.080297929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.080709125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.080730409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.086592026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.086747519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.087304963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.087420093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.087456171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.087595885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.088078 containerd[1467]: time="2026-04-13T23:05:15.087619273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.087639018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.087654114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.087685763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.087707865Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.087941100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.088025990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.088041895Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.098310069Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.098862001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.099018048Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.099038586Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.099053448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.099500925Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 23:05:15.103357 containerd[1467]: time="2026-04-13T23:05:15.099667807Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 23:05:15.103668 containerd[1467]: time="2026-04-13T23:05:15.099682318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 23:05:15.103693 containerd[1467]: time="2026-04-13T23:05:15.102143131Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 23:05:15.103693 containerd[1467]: time="2026-04-13T23:05:15.102316549Z" level=info msg="Connect containerd service"
Apr 13 23:05:15.103693 containerd[1467]: time="2026-04-13T23:05:15.102441108Z" level=info msg="using legacy CRI server"
Apr 13 23:05:15.103693 containerd[1467]: time="2026-04-13T23:05:15.102451706Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 23:05:15.103693 containerd[1467]: time="2026-04-13T23:05:15.102919882Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 23:05:15.106529 containerd[1467]: time="2026-04-13T23:05:15.104675243Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 23:05:15.118438 containerd[1467]: time="2026-04-13T23:05:15.108579741Z" level=info msg="Start subscribing containerd event"
Apr 13 23:05:15.118438 containerd[1467]: time="2026-04-13T23:05:15.109715150Z" level=info msg="Start recovering state"
Apr 13 23:05:15.118438 containerd[1467]: time="2026-04-13T23:05:15.110157767Z" level=info msg="Start event monitor"
Apr 13 23:05:15.118438 containerd[1467]: time="2026-04-13T23:05:15.110304396Z" level=info msg="Start snapshots syncer"
Apr 13 23:05:15.118438 containerd[1467]: time="2026-04-13T23:05:15.110380074Z" level=info msg="Start cni network conf syncer for default"
Apr 13 23:05:15.118438 containerd[1467]: time="2026-04-13T23:05:15.110401987Z" level=info msg="Start streaming server"
Apr 13 23:05:15.118639 containerd[1467]: time="2026-04-13T23:05:15.118598562Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 23:05:15.118753 containerd[1467]: time="2026-04-13T23:05:15.118706500Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 23:05:15.130761 containerd[1467]: time="2026-04-13T23:05:15.128685862Z" level=info msg="containerd successfully booted in 2.566912s"
Apr 13 23:05:15.144136 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 23:05:15.234576 tar[1466]: linux-amd64/README.md
Apr 13 23:05:15.822195 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 23:05:15.899842 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 41366 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:05:16.057494 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:05:17.076739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:05:17.112255 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 23:05:17.134768 systemd[1]: Startup finished in 2.044s (kernel) + 18.543s (initrd) + 1min 8.639s (userspace) = 1min 29.227s.
Apr 13 23:05:17.240452 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:05:19.083189 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 23:05:19.323379 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 23:05:19.970440 systemd-logind[1455]: New session 1 of user core.
Apr 13 23:05:22.003278 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 23:05:22.344398 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 23:05:22.728426 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 23:05:27.574583 systemd[1574]: Queued start job for default target default.target.
Apr 13 23:05:27.691510 systemd[1574]: Created slice app.slice - User Application Slice.
Apr 13 23:05:27.695078 systemd[1574]: Reached target paths.target - Paths.
Apr 13 23:05:27.695254 systemd[1574]: Reached target timers.target - Timers.
Apr 13 23:05:27.780381 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 23:05:28.561501 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 23:05:28.561694 systemd[1574]: Reached target sockets.target - Sockets.
Apr 13 23:05:28.561705 systemd[1574]: Reached target basic.target - Basic System.
Apr 13 23:05:28.563146 systemd[1574]: Reached target default.target - Main User Target.
Apr 13 23:05:28.563221 systemd[1574]: Startup finished in 5.344s.
Apr 13 23:05:28.607977 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 23:05:28.783722 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 23:05:29.540107 kubelet[1565]: E0413 23:05:29.539567    1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:05:29.595096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:05:29.598214 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:05:29.653076 systemd[1]: kubelet.service: Consumed 12.499s CPU time.
Apr 13 23:05:30.569522 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:56816.service - OpenSSH per-connection server daemon (10.0.0.1:56816).
Apr 13 23:05:31.523795 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 56816 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:05:31.560547 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:05:33.152139 systemd-logind[1455]: New session 2 of user core.
Apr 13 23:05:33.289421 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 23:05:34.786335 sshd[1586]: pam_unix(sshd:session): session closed for user core
Apr 13 23:05:35.866771 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:56816.service: Deactivated successfully.
Apr 13 23:05:36.034520 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 23:05:36.363405 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit.
Apr 13 23:05:36.571787 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:56832.service - OpenSSH per-connection server daemon (10.0.0.1:56832).
Apr 13 23:05:36.909654 systemd-logind[1455]: Removed session 2.
Apr 13 23:05:37.739504 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 56832 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:05:37.842651 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:05:39.067067 systemd-logind[1455]: New session 3 of user core.
Apr 13 23:05:39.184861 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 23:05:39.694144 sshd[1593]: pam_unix(sshd:session): session closed for user core
Apr 13 23:05:39.965455 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:56832.service: Deactivated successfully.
Apr 13 23:05:40.213249 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 23:05:40.728149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 23:05:41.219727 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit.
Apr 13 23:05:41.385227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:05:41.627280 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:48766.service - OpenSSH per-connection server daemon (10.0.0.1:48766).
Apr 13 23:05:42.131137 systemd-logind[1455]: Removed session 3.
Apr 13 23:05:42.167849 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:05:42.204479 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:05:43.896417 systemd-logind[1455]: New session 4 of user core.
Apr 13 23:05:44.160575 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 23:05:45.256632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:05:45.499817 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:05:47.257210 sshd[1601]: pam_unix(sshd:session): session closed for user core
Apr 13 23:05:47.633714 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:48766.service: Deactivated successfully.
Apr 13 23:05:47.786329 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 23:05:47.882651 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit.
Apr 13 23:05:48.099307 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:58124.service - OpenSSH per-connection server daemon (10.0.0.1:58124).
Apr 13 23:05:48.439777 systemd-logind[1455]: Removed session 4.
Apr 13 23:05:48.966193 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 58124 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:05:49.004253 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:05:50.180396 systemd-logind[1455]: New session 5 of user core.
Apr 13 23:05:50.372362 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 23:05:51.357489 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 23:05:51.375327 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 23:05:51.457413 kubelet[1609]: E0413 23:05:51.454658    1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:05:51.501079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:05:51.505622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:05:51.525715 systemd[1]: kubelet.service: Consumed 6.064s CPU time.
Apr 13 23:05:51.861180 sudo[1625]: pam_unix(sudo:session): session closed for user root
Apr 13 23:05:51.902821 sshd[1622]: pam_unix(sshd:session): session closed for user core
Apr 13 23:05:51.913588 update_engine[1457]: I20260413 23:05:51.910737  1457 update_attempter.cc:509] Updating boot flags...
Apr 13 23:05:52.112834 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:58124.service: Deactivated successfully.
Apr 13 23:05:52.161614 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 23:05:52.374141 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
Apr 13 23:05:52.457386 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:58140.service - OpenSSH per-connection server daemon (10.0.0.1:58140).
Apr 13 23:05:52.498419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1638)
Apr 13 23:05:52.708594 systemd-logind[1455]: Removed session 5.
Apr 13 23:05:53.307358 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1637)
Apr 13 23:05:53.625762 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:05:53.706443 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:05:53.795978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1637)
Apr 13 23:05:57.256827 systemd-logind[1455]: New session 6 of user core.
Apr 13 23:05:57.365983 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 23:05:59.097856 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 23:05:59.136853 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 23:05:59.680180 sudo[1650]: pam_unix(sudo:session): session closed for user root
Apr 13 23:06:00.490504 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 23:06:00.492188 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 23:06:01.864045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 23:06:03.147859 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 23:06:03.334990 auditctl[1653]: No rules
Apr 13 23:06:03.478727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:06:03.652279 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 23:06:03.675799 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 23:06:04.248250 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 23:06:05.135496 augenrules[1674]: No rules
Apr 13 23:06:05.272527 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 23:06:05.345851 sudo[1649]: pam_unix(sudo:session): session closed for user root
Apr 13 23:06:05.377808 sshd[1639]: pam_unix(sshd:session): session closed for user core
Apr 13 23:06:06.060243 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:58140.service: Deactivated successfully.
Apr 13 23:06:06.432929 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 23:06:06.433370 systemd[1]: session-6.scope: Consumed 1.740s CPU time.
Apr 13 23:06:06.570844 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
Apr 13 23:06:06.897267 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:47758.service - OpenSSH per-connection server daemon (10.0.0.1:47758).
Apr 13 23:06:07.310356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:06:07.432316 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:06:07.529352 systemd-logind[1455]: Removed session 6.
Apr 13 23:06:07.550789 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 47758 ssh2: RSA SHA256:W5N50Zpm460ysbmW58qL6krVZjeW4Y8kSGRetRBMpjQ
Apr 13 23:06:07.557320 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:06:09.105798 systemd-logind[1455]: New session 7 of user core.
Apr 13 23:06:09.270677 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 23:06:10.444839 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 23:06:10.484794 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 23:06:13.558781 kubelet[1687]: E0413 23:06:13.551001    1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:06:13.602375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:06:13.610239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:06:13.676476 systemd[1]: kubelet.service: Consumed 5.634s CPU time.
Apr 13 23:06:18.607813 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 23:06:18.978472 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 23:06:22.170530 dockerd[1717]: time="2026-04-13T23:06:22.169575831Z" level=info msg="Starting up"
Apr 13 23:06:23.999277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 23:06:24.296380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:06:24.491022 dockerd[1717]: time="2026-04-13T23:06:24.490189389Z" level=info msg="Loading containers: start."
Apr 13 23:06:27.808601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:06:27.880386 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:06:27.962676 kernel: Initializing XFRM netlink socket
Apr 13 23:06:29.812600 systemd-networkd[1395]: docker0: Link UP
Apr 13 23:06:30.357774 dockerd[1717]: time="2026-04-13T23:06:30.357153516Z" level=info msg="Loading containers: done."
Apr 13 23:06:30.925005 dockerd[1717]: time="2026-04-13T23:06:30.921154838Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 23:06:30.925005 dockerd[1717]: time="2026-04-13T23:06:30.925104629Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 23:06:30.952839 dockerd[1717]: time="2026-04-13T23:06:30.925388571Z" level=info msg="Daemon has completed initialization"
Apr 13 23:06:32.454596 dockerd[1717]: time="2026-04-13T23:06:32.453771606Z" level=info msg="API listen on /run/docker.sock"
Apr 13 23:06:32.462801 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 23:06:32.938523 kubelet[1794]: E0413 23:06:32.932086    1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:06:32.991527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:06:32.994570 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:06:33.005625 systemd[1]: kubelet.service: Consumed 5.034s CPU time.
Apr 13 23:06:42.767761 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1307427307 wd_nsec: 1307427286
Apr 13 23:06:43.709538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 23:06:44.071576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:06:48.037390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:06:48.196130 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:07:06.605830 containerd[1467]: time="2026-04-13T23:07:06.604121460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\""
Apr 13 23:07:11.121709 kubelet[1891]: E0413 23:07:11.107792    1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:07:11.261947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:07:11.262464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:07:11.275924 systemd[1]: kubelet.service: Consumed 18.157s CPU time.
Apr 13 23:07:17.655560 containerd[1467]: time="2026-04-13T23:07:17.651490581Z" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.34.4\": net/http: TLS handshake timeout" host=registry.k8s.io
Apr 13 23:07:17.772828 containerd[1467]: time="2026-04-13T23:07:17.765336130Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.4: active requests=0, bytes read=0"
Apr 13 23:07:17.783000 containerd[1467]: time="2026-04-13T23:07:17.771295611Z" level=error msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-apiserver:v1.34.4\": failed to resolve reference \"registry.k8s.io/kube-apiserver:v1.34.4\": failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.34.4\": net/http: TLS handshake timeout"
Apr 13 23:07:17.964605 containerd[1467]: time="2026-04-13T23:07:17.954505778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\""
Apr 13 23:07:21.593363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 13 23:07:21.965497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:07:25.249969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:07:25.474629 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:07:32.970183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000938517.mount: Deactivated successfully.
Apr 13 23:07:33.086633 kubelet[1912]: E0413 23:07:33.083751    1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:07:33.113452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:07:33.119253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:07:33.131402 systemd[1]: kubelet.service: Consumed 6.419s CPU time.
Apr 13 23:07:43.330566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 13 23:07:43.492380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:07:46.267678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:07:46.327137 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:07:54.206656 kubelet[1948]: E0413 23:07:54.173455    1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:07:54.271535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:07:54.284275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:07:54.350462 systemd[1]: kubelet.service: Consumed 6.222s CPU time.
Apr 13 23:08:04.800955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 13 23:08:05.151054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:08:07.304393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:08:07.365496 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:08:11.388041 kubelet[2006]: E0413 23:08:11.387139    2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:08:11.427131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:08:11.429854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:08:11.438564 systemd[1]: kubelet.service: Consumed 4.042s CPU time.
Apr 13 23:08:21.972171 containerd[1467]: time="2026-04-13T23:08:21.971191681Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.4: active requests=0, bytes read=27072019"
Apr 13 23:08:21.982848 containerd[1467]: time="2026-04-13T23:08:21.978130248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:08:21.978819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 13 23:08:22.158794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:08:22.193468 containerd[1467]: time="2026-04-13T23:08:22.192559211Z" level=info msg="ImageCreate event name:\"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:08:24.220597 containerd[1467]: time="2026-04-13T23:08:24.171415059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:08:25.286571 containerd[1467]: time="2026-04-13T23:08:25.273291631Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.4\" with image id \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\", size \"27069180\" in 1m7.29699746s"
Apr 13 23:08:25.380538 containerd[1467]: time="2026-04-13T23:08:25.306439434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\" returns image reference \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\""
Apr 13 23:08:25.595685 containerd[1467]: time="2026-04-13T23:08:25.588398948Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\""
Apr 13 23:08:25.598753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:08:25.691691 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:08:34.556719 kubelet[2021]: E0413 23:08:34.555562    2021 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:08:34.599990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:08:34.610685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:08:34.644503 systemd[1]: kubelet.service: Consumed 5.893s CPU time.
Apr 13 23:08:45.148087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 13 23:08:45.385187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:08:48.436564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:08:48.679904 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:08:52.453987 kubelet[2042]: E0413 23:08:52.451802    2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:08:52.490432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:08:52.490764 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:08:52.492016 systemd[1]: kubelet.service: Consumed 3.417s CPU time.
Apr 13 23:09:03.736669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 13 23:09:04.906271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:09:05.337563 containerd[1467]: time="2026-04-13T23:09:05.336215789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:09:05.368413 containerd[1467]: time="2026-04-13T23:09:05.366404542Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.4: active requests=0, bytes read=21163889"
Apr 13 23:09:05.660958 containerd[1467]: time="2026-04-13T23:09:05.655695895Z" level=info msg="ImageCreate event name:\"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:09:06.935117 containerd[1467]: time="2026-04-13T23:09:06.933964658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:09:07.487778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:09:07.526628 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:09:08.394110 containerd[1467]: time="2026-04-13T23:09:08.392557607Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.4\" with image id \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\", size \"22820907\" in 42.798169712s"
Apr 13 23:09:08.434884 containerd[1467]: time="2026-04-13T23:09:08.425676783Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\" returns image reference \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\""
Apr 13 23:09:08.595547 containerd[1467]: time="2026-04-13T23:09:08.590460251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\""
Apr 13 23:09:13.143387 kubelet[2058]: E0413 23:09:13.130573    2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:09:13.159753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:09:13.183460 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:09:13.239520 systemd[1]: kubelet.service: Consumed 4.149s CPU time.
Apr 13 23:09:23.621817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 13 23:09:24.244545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:09:27.287158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:09:27.291771 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:09:31.786348 kubelet[2081]: E0413 23:09:31.762663 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:09:31.849376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:09:31.868218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:09:31.962247 systemd[1]: kubelet.service: Consumed 3.391s CPU time. Apr 13 23:09:42.221611 containerd[1467]: time="2026-04-13T23:09:42.211273782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:09:42.221611 containerd[1467]: time="2026-04-13T23:09:42.213300148Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.4: active requests=0, bytes read=15727822" Apr 13 23:09:42.315048 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 13 23:09:42.409807 containerd[1467]: time="2026-04-13T23:09:42.409052095Z" level=info msg="ImageCreate event name:\"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:09:42.591097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:09:44.516275 containerd[1467]: time="2026-04-13T23:09:44.512855184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:09:45.581719 containerd[1467]: time="2026-04-13T23:09:45.576306666Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.4\" with image id \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\", size \"17384858\" in 36.95931772s" Apr 13 23:09:45.649692 containerd[1467]: time="2026-04-13T23:09:45.591371094Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\" returns image reference \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\"" Apr 13 23:09:45.786760 containerd[1467]: time="2026-04-13T23:09:45.784194496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\"" Apr 13 23:09:47.494198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:09:47.619263 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:09:58.697407 kubelet[2099]: E0413 23:09:58.662069 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:09:58.868263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:09:58.885345 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:09:58.904662 systemd[1]: kubelet.service: Consumed 8.546s CPU time. Apr 13 23:10:09.305219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 13 23:10:09.825618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:10:13.566373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:10:13.595317 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:10:17.963371 kubelet[2123]: E0413 23:10:17.945854 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:10:18.122622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:10:18.135946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:10:18.171422 systemd[1]: kubelet.service: Consumed 4.246s CPU time. Apr 13 23:10:28.524351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 13 23:10:28.997575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:10:33.145085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:10:33.315428 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:10:43.941761 kubelet[2138]: E0413 23:10:43.940550 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:10:44.073764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:10:44.093403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:10:44.114445 systemd[1]: kubelet.service: Consumed 7.919s CPU time. Apr 13 23:10:54.442642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Apr 13 23:10:55.094629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:10:59.508271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:10:59.729780 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:11:19.090660 kubelet[2155]: E0413 23:11:19.074165 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:11:19.153434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:11:19.155524 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:11:19.207962 systemd[1]: kubelet.service: Consumed 16.806s CPU time. 
Apr 13 23:11:29.736058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 13 23:11:30.381166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:11:31.072860 update_engine[1457]: I20260413 23:11:31.068408 1457 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 13 23:11:31.087753 update_engine[1457]: I20260413 23:11:31.077191 1457 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 13 23:11:31.081909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253104918.mount: Deactivated successfully. Apr 13 23:11:31.124264 update_engine[1457]: I20260413 23:11:31.114854 1457 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 13 23:11:31.214253 update_engine[1457]: I20260413 23:11:31.210537 1457 omaha_request_params.cc:62] Current group set to lts Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.217003 1457 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.217027 1457 update_attempter.cc:643] Scheduling an action processor start. 
Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.219033 1457 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.219558 1457 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.219717 1457 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.219723 1457 omaha_request_action.cc:272] Request: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: Apr 13 23:11:31.221421 update_engine[1457]: I20260413 23:11:31.219740 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:11:31.457129 update_engine[1457]: I20260413 23:11:31.453969 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:11:31.579976 update_engine[1457]: I20260413 23:11:31.579847 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:11:31.641480 update_engine[1457]: E20260413 23:11:31.638792 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:11:31.684380 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 13 23:11:31.779325 update_engine[1457]: I20260413 23:11:31.680787 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 13 23:11:35.105433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:11:35.225101 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:11:38.474314 kubelet[2175]: E0413 23:11:38.473704 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:11:38.610613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:11:38.645312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:11:38.720761 systemd[1]: kubelet.service: Consumed 4.403s CPU time. Apr 13 23:11:41.925739 update_engine[1457]: I20260413 23:11:41.923772 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:11:41.960483 update_engine[1457]: I20260413 23:11:41.933440 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:11:41.960483 update_engine[1457]: I20260413 23:11:41.952091 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:11:41.973356 update_engine[1457]: E20260413 23:11:41.971055 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:11:41.973635 update_engine[1457]: I20260413 23:11:41.973354 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 13 23:11:47.662705 containerd[1467]: time="2026-04-13T23:11:47.658324510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:11:47.793655 containerd[1467]: time="2026-04-13T23:11:47.608749384Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.4: active requests=0, bytes read=25859803" Apr 13 23:11:49.180517 containerd[1467]: time="2026-04-13T23:11:49.170715045Z" level=info msg="ImageCreate event name:\"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:11:49.883455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 13 23:11:50.542238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:11:51.928111 update_engine[1457]: I20260413 23:11:51.921540 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:11:51.948140 update_engine[1457]: I20260413 23:11:51.940034 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:11:51.948140 update_engine[1457]: I20260413 23:11:51.947850 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:11:51.968504 update_engine[1457]: E20260413 23:11:51.966655 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:11:51.968504 update_engine[1457]: I20260413 23:11:51.967633 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 13 23:11:52.399627 containerd[1467]: time="2026-04-13T23:11:52.390958051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:11:53.305673 containerd[1467]: time="2026-04-13T23:11:53.302504764Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.4\" with image id \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\", size \"25858928\" in 2m7.502630471s" Apr 13 23:11:53.332463 containerd[1467]: time="2026-04-13T23:11:53.330195553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\" returns image reference \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\"" Apr 13 23:11:53.502470 containerd[1467]: time="2026-04-13T23:11:53.497644973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 23:11:57.951252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:11:57.984686 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:12:00.208755 kubelet[2192]: E0413 23:12:00.208220 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:12:00.242266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:12:00.242564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:12:00.246909 systemd[1]: kubelet.service: Consumed 4.722s CPU time. Apr 13 23:12:01.502943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3541248812.mount: Deactivated successfully. Apr 13 23:12:01.933278 update_engine[1457]: I20260413 23:12:01.910219 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:12:01.982816 update_engine[1457]: I20260413 23:12:01.964517 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:12:01.982816 update_engine[1457]: I20260413 23:12:01.967408 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 23:12:02.003492 update_engine[1457]: E20260413 23:12:02.001299 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:12:02.029112 update_engine[1457]: I20260413 23:12:02.017331 1457 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 23:12:02.029112 update_engine[1457]: I20260413 23:12:02.023922 1457 omaha_request_action.cc:617] Omaha request response: Apr 13 23:12:02.029112 update_engine[1457]: E20260413 23:12:02.025557 1457 omaha_request_action.cc:636] Omaha request network transfer failed. 
Apr 13 23:12:02.050430 update_engine[1457]: I20260413 23:12:02.047103 1457 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 13 23:12:02.050430 update_engine[1457]: I20260413 23:12:02.047544 1457 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:12:02.050430 update_engine[1457]: I20260413 23:12:02.047572 1457 update_attempter.cc:306] Processing Done. Apr 13 23:12:02.050430 update_engine[1457]: E20260413 23:12:02.049918 1457 update_attempter.cc:619] Update failed. Apr 13 23:12:02.050430 update_engine[1457]: I20260413 23:12:02.049971 1457 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 13 23:12:02.050430 update_engine[1457]: I20260413 23:12:02.049978 1457 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 13 23:12:02.050430 update_engine[1457]: I20260413 23:12:02.049984 1457 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 13 23:12:02.050748 update_engine[1457]: I20260413 23:12:02.050621 1457 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 23:12:02.050779 update_engine[1457]: I20260413 23:12:02.050753 1457 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 23:12:02.050779 update_engine[1457]: I20260413 23:12:02.050762 1457 omaha_request_action.cc:272] Request: Apr 13 23:12:02.050779 update_engine[1457]: Apr 13 23:12:02.050779 update_engine[1457]: Apr 13 23:12:02.050779 update_engine[1457]: Apr 13 23:12:02.050779 update_engine[1457]: Apr 13 23:12:02.050779 update_engine[1457]: Apr 13 23:12:02.050779 update_engine[1457]: Apr 13 23:12:02.050779 update_engine[1457]: I20260413 23:12:02.050769 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 23:12:02.051179 update_engine[1457]: I20260413 23:12:02.051118 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 23:12:02.051943 update_engine[1457]: I20260413 23:12:02.051822 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 23:12:02.069316 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 13 23:12:02.103157 update_engine[1457]: E20260413 23:12:02.087771 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088445 1457 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088512 1457 omaha_request_action.cc:617] Omaha request response: Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088570 1457 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088578 1457 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088584 1457 update_attempter.cc:306] Processing Done. Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088593 1457 update_attempter.cc:310] Error event sent. Apr 13 23:12:02.103157 update_engine[1457]: I20260413 23:12:02.088605 1457 update_check_scheduler.cc:74] Next update check in 42m53s Apr 13 23:12:02.149444 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 13 23:12:10.692813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Apr 13 23:12:11.139064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:12:14.651610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:12:14.774391 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:12:19.594701 kubelet[2222]: E0413 23:12:19.592687 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:12:19.670471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:12:19.695145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:12:19.713467 systemd[1]: kubelet.service: Consumed 5.044s CPU time. Apr 13 23:12:30.091762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Apr 13 23:12:30.385576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:12:35.173620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:12:35.281654 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:12:55.091977 kubelet[2239]: E0413 23:12:55.079528 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:12:55.171291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:12:55.183613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:12:55.190082 systemd[1]: kubelet.service: Consumed 17.551s CPU time. 
Apr 13 23:13:05.531181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Apr 13 23:13:05.715126 containerd[1467]: time="2026-04-13T23:13:05.712409189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:13:05.799485 containerd[1467]: time="2026-04-13T23:13:05.714036059Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 13 23:13:06.088619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:13:06.216693 containerd[1467]: time="2026-04-13T23:13:06.216327593Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:13:08.344400 containerd[1467]: time="2026-04-13T23:13:08.343645650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:13:09.078577 containerd[1467]: time="2026-04-13T23:13:09.078442976Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1m15.580285929s" Apr 13 23:13:09.079508 containerd[1467]: time="2026-04-13T23:13:09.079482250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 13 23:13:09.280444 containerd[1467]: time="2026-04-13T23:13:09.279910665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 23:13:09.721380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:13:10.591355 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:13:15.332392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731022591.mount: Deactivated successfully. Apr 13 23:13:15.572767 containerd[1467]: time="2026-04-13T23:13:15.565561332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:13:15.615757 containerd[1467]: time="2026-04-13T23:13:15.612006011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 13 23:13:15.684232 containerd[1467]: time="2026-04-13T23:13:15.667624536Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:13:15.778647 containerd[1467]: time="2026-04-13T23:13:15.777562263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:13:16.123092 containerd[1467]: time="2026-04-13T23:13:16.122561472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 6.840122758s" Apr 13 23:13:16.153032 containerd[1467]: time="2026-04-13T23:13:16.137318785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 13 23:13:16.162780 containerd[1467]: time="2026-04-13T23:13:16.162723865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 23:13:17.395258 kubelet[2296]: E0413 23:13:17.389123 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:13:17.564701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:13:17.565250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:13:17.571318 systemd[1]: kubelet.service: Consumed 6.851s CPU time. Apr 13 23:13:26.053120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069655920.mount: Deactivated successfully. Apr 13 23:13:28.608543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Apr 13 23:13:29.120138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:13:33.174112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:13:33.234432 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:13:37.456679 kubelet[2328]: E0413 23:13:37.402441 2328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:13:37.601208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:13:37.606100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:13:37.687341 systemd[1]: kubelet.service: Consumed 2.609s CPU time. Apr 13 23:13:47.966500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. Apr 13 23:13:48.304254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:13:51.109377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:13:51.111078 (kubelet)[2360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:13:54.392002 kubelet[2360]: E0413 23:13:54.387968 2360 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:13:54.432625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:13:54.439654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:13:54.477201 systemd[1]: kubelet.service: Consumed 2.489s CPU time. 
Apr 13 23:14:05.539750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. Apr 13 23:14:05.990382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:14:11.522598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:14:11.661749 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:14:13.045463 kubelet[2400]: E0413 23:14:13.044835 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:14:13.134538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:14:13.138727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:14:13.176608 systemd[1]: kubelet.service: Consumed 2.068s CPU time. Apr 13 23:14:22.203501 containerd[1467]: time="2026-04-13T23:14:22.198455300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:14:22.246453 containerd[1467]: time="2026-04-13T23:14:22.219527184Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860188" Apr 13 23:14:23.166231 containerd[1467]: time="2026-04-13T23:14:23.156553790Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:14:23.431471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24. Apr 13 23:14:23.965560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:14:24.094435 containerd[1467]: time="2026-04-13T23:14:24.093179814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:14:24.134382 containerd[1467]: time="2026-04-13T23:14:24.126080469Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1m7.963149663s" Apr 13 23:14:24.134382 containerd[1467]: time="2026-04-13T23:14:24.126305374Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 13 23:14:26.684376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:14:26.794038 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:14:49.893181 kubelet[2438]: E0413 23:14:49.843614 2438 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:14:50.218364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:14:50.335142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:14:50.541560 systemd[1]: kubelet.service: Consumed 7.516s CPU time. Apr 13 23:15:00.297357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25. 
Apr 13 23:15:00.994442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:15:06.886792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:15:07.735732 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:15:11.704278 kubelet[2477]: E0413 23:15:11.692934 2477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:15:11.771685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:15:11.788549 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:15:11.891182 systemd[1]: kubelet.service: Consumed 3.836s CPU time.
Apr 13 23:15:21.851315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26.
Apr 13 23:15:22.180814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:15:27.765338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:15:28.014758 (kubelet)[2492]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:15:33.084499 kubelet[2492]: E0413 23:15:33.083498 2492 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:15:33.122346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:15:33.124772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:15:33.133269 systemd[1]: kubelet.service: Consumed 3.565s CPU time.
Apr 13 23:15:36.874501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:15:36.956832 systemd[1]: kubelet.service: Consumed 3.565s CPU time.
Apr 13 23:15:37.390035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:15:40.269736 systemd[1]: Reloading requested from client PID 2508 ('systemctl') (unit session-7.scope)...
Apr 13 23:15:40.272068 systemd[1]: Reloading...
Apr 13 23:15:53.232725 zram_generator::config[2547]: No configuration found.
Apr 13 23:16:01.063347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:16:07.686568 systemd[1]: Reloading finished in 27411 ms.
Apr 13 23:16:11.001758 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 23:16:11.002167 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 23:16:11.082229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:16:11.113414 systemd[1]: kubelet.service: Consumed 1.689s CPU time, 35.7M memory peak, 0B memory swap peak.
Apr 13 23:16:11.618291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:16:15.473248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:16:15.568046 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 23:16:17.359413 kubelet[2594]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 23:16:17.376992 kubelet[2594]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 23:16:17.376992 kubelet[2594]: I0413 23:16:17.372733 2594 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 23:16:23.397450 kubelet[2594]: I0413 23:16:23.392244 2594 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 13 23:16:23.446044 kubelet[2594]: I0413 23:16:23.403828 2594 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 23:16:23.446044 kubelet[2594]: I0413 23:16:23.428232 2594 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 23:16:23.446044 kubelet[2594]: I0413 23:16:23.436293 2594 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 23:16:23.485916 kubelet[2594]: I0413 23:16:23.482247 2594 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 23:16:23.601005 kubelet[2594]: I0413 23:16:23.599654 2594 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 23:16:23.630999 kubelet[2594]: E0413 23:16:23.626211 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 23:16:24.032097 kubelet[2594]: E0413 23:16:24.030849 2594 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 23:16:24.051158 kubelet[2594]: I0413 23:16:24.049018 2594 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 23:16:25.363497 kubelet[2594]: I0413 23:16:25.358626 2594 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 23:16:25.437513 kubelet[2594]: I0413 23:16:25.404979 2594 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 23:16:25.476289 kubelet[2594]: I0413 23:16:25.460210 2594 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 23:16:25.476289 kubelet[2594]: I0413 23:16:25.475584 2594 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 23:16:25.476289 kubelet[2594]: I0413 23:16:25.475768 2594 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 23:16:25.580264 kubelet[2594]: I0413 23:16:25.517309 2594 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 23:16:26.044825 kubelet[2594]: I0413 23:16:26.042691 2594 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 23:16:26.250309 kubelet[2594]: I0413 23:16:26.231065 2594 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 23:16:26.266789 kubelet[2594]: I0413 23:16:26.261163 2594 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 23:16:26.266789 kubelet[2594]: E0413 23:16:26.261744 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 23:16:26.277609 kubelet[2594]: I0413 23:16:26.268647 2594 kubelet.go:387] "Adding apiserver pod source"
Apr 13 23:16:26.277609 kubelet[2594]: I0413 23:16:26.268811 2594 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 23:16:26.326669 kubelet[2594]: E0413 23:16:26.290739 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 23:16:26.326669 kubelet[2594]: E0413 23:16:26.301300 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 23:16:26.418464 kubelet[2594]: I0413 23:16:26.417536 2594 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 23:16:26.499442 kubelet[2594]: I0413 23:16:26.492138 2594 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 23:16:26.602695 kubelet[2594]: I0413 23:16:26.503331 2594 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 23:16:26.602695 kubelet[2594]: W0413 23:16:26.584423 2594 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 23:16:26.933078 kubelet[2594]: I0413 23:16:26.926692 2594 server.go:1262] "Started kubelet"
Apr 13 23:16:26.933078 kubelet[2594]: I0413 23:16:26.927011 2594 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 23:16:26.956627 kubelet[2594]: I0413 23:16:26.952262 2594 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 23:16:26.976251 kubelet[2594]: E0413 23:16:26.952302 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 13 23:16:26.998838 kubelet[2594]: I0413 23:16:26.998116 2594 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 23:16:27.019439 kubelet[2594]: I0413 23:16:27.017659 2594 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 23:16:27.033072 kubelet[2594]: I0413 23:16:27.033047 2594 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 23:16:27.038391 kubelet[2594]: I0413 23:16:27.036069 2594 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 23:16:27.049785 kubelet[2594]: I0413 23:16:27.047259 2594 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 23:16:27.078242 kubelet[2594]: E0413 23:16:27.076389 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.078242 kubelet[2594]: I0413 23:16:27.076530 2594 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 23:16:27.078242 kubelet[2594]: I0413 23:16:27.076836 2594 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 23:16:27.096390 kubelet[2594]: I0413 23:16:27.080333 2594 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 23:16:27.112954 kubelet[2594]: E0413 23:16:27.108751 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms"
Apr 13 23:16:27.186394 kubelet[2594]: E0413 23:16:27.183082 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 23:16:27.197021 kubelet[2594]: E0413 23:16:27.186427 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.210256 kubelet[2594]: I0413 23:16:27.209549 2594 factory.go:223] Registration of the systemd container factory successfully
Apr 13 23:16:27.213558 kubelet[2594]: I0413 23:16:27.210791 2594 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 23:16:27.289455 kubelet[2594]: E0413 23:16:27.288449 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.503274 kubelet[2594]: E0413 23:16:27.471219 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.503274 kubelet[2594]: E0413 23:16:27.488719 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms"
Apr 13 23:16:27.503274 kubelet[2594]: E0413 23:16:27.492305 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 23:16:27.503274 kubelet[2594]: E0413 23:16:27.502471 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 23:16:27.642686 kubelet[2594]: E0413 23:16:27.608776 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.662898 kubelet[2594]: I0413 23:16:27.626655 2594 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 23:16:27.722379 kubelet[2594]: E0413 23:16:27.718107 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.774235 kubelet[2594]: E0413 23:16:27.757773 2594 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 23:16:27.827459 kubelet[2594]: E0413 23:16:27.819263 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:27.879528 kubelet[2594]: I0413 23:16:27.859284 2594 factory.go:223] Registration of the containerd container factory successfully
Apr 13 23:16:27.907619 kubelet[2594]: E0413 23:16:27.907190 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms"
Apr 13 23:16:27.963287 kubelet[2594]: E0413 23:16:27.962235 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.079761 kubelet[2594]: E0413 23:16:28.073702 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.295050 kubelet[2594]: E0413 23:16:28.293433 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.350829 kubelet[2594]: I0413 23:16:28.344844 2594 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 23:16:28.361645 kubelet[2594]: I0413 23:16:28.358195 2594 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 23:16:28.369385 kubelet[2594]: I0413 23:16:28.369340 2594 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 23:16:28.373923 kubelet[2594]: E0413 23:16:28.373158 2594 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:16:28.412170 kubelet[2594]: E0413 23:16:28.411481 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.503544 kubelet[2594]: E0413 23:16:28.502952 2594 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:16:28.518854 kubelet[2594]: E0413 23:16:28.518306 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.528694 kubelet[2594]: E0413 23:16:28.527355 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 23:16:28.668551 kubelet[2594]: E0413 23:16:28.662115 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.706722 kubelet[2594]: E0413 23:16:28.706212 2594 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:16:28.723340 kubelet[2594]: E0413 23:16:28.706836 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 23:16:28.771190 kubelet[2594]: E0413 23:16:28.769999 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s"
Apr 13 23:16:28.789338 kubelet[2594]: E0413 23:16:28.788591 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.899055 kubelet[2594]: E0413 23:16:28.893610 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:28.925030 kubelet[2594]: I0413 23:16:28.905622 2594 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 23:16:28.925030 kubelet[2594]: I0413 23:16:28.905751 2594 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 23:16:28.925030 kubelet[2594]: I0413 23:16:28.905961 2594 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 23:16:28.959088 kubelet[2594]: I0413 23:16:28.958455 2594 policy_none.go:49] "None policy: Start"
Apr 13 23:16:28.972028 kubelet[2594]: I0413 23:16:28.962441 2594 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 23:16:28.972028 kubelet[2594]: I0413 23:16:28.965058 2594 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 23:16:29.006828 kubelet[2594]: E0413 23:16:29.005418 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.017638 kubelet[2594]: I0413 23:16:29.014574 2594 policy_none.go:47] "Start"
Apr 13 23:16:29.148205 kubelet[2594]: E0413 23:16:29.142819 2594 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:16:29.148205 kubelet[2594]: E0413 23:16:29.146492 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.276739 kubelet[2594]: E0413 23:16:29.276300 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.395209 kubelet[2594]: E0413 23:16:29.393816 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.573582 kubelet[2594]: E0413 23:16:29.563266 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.596518 kubelet[2594]: E0413 23:16:29.591593 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 23:16:29.703247 kubelet[2594]: E0413 23:16:29.701300 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.817779 kubelet[2594]: E0413 23:16:29.812755 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.945808 kubelet[2594]: E0413 23:16:29.943750 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:29.955071 kubelet[2594]: E0413 23:16:29.944218 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 23:16:29.955071 kubelet[2594]: E0413 23:16:29.946707 2594 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:16:30.072603 kubelet[2594]: E0413 23:16:30.068790 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:30.265193 kubelet[2594]: E0413 23:16:30.259322 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:30.371762 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 23:16:30.443232 kubelet[2594]: E0413 23:16:30.417719 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:30.465270 kubelet[2594]: E0413 23:16:30.459314 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="3.2s"
Apr 13 23:16:30.610548 kubelet[2594]: E0413 23:16:30.608989 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:30.654512 kubelet[2594]: E0413 23:16:30.604830 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 23:16:30.685010 kubelet[2594]: E0413 23:16:30.682843 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 23:16:30.725768 kubelet[2594]: E0413 23:16:30.718652 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:30.901287 kubelet[2594]: E0413 23:16:30.877083 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.019477 kubelet[2594]: E0413 23:16:31.016782 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.174660 kubelet[2594]: E0413 23:16:31.168135 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.280226 kubelet[2594]: E0413 23:16:31.276689 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.382216 kubelet[2594]: E0413 23:16:31.381644 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.411264 kubelet[2594]: E0413 23:16:31.410742 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 23:16:31.566097 kubelet[2594]: E0413 23:16:31.515245 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.580698 kubelet[2594]: E0413 23:16:31.577951 2594 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:16:31.675958 kubelet[2594]: E0413 23:16:31.675639 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.785979 kubelet[2594]: E0413 23:16:31.782759 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.935793 kubelet[2594]: E0413 23:16:31.932075 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:31.985303 kubelet[2594]: E0413 23:16:31.984696 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 23:16:32.044241 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 23:16:32.056212 kubelet[2594]: E0413 23:16:32.055726 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.164268 kubelet[2594]: E0413 23:16:32.163277 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.279214 kubelet[2594]: E0413 23:16:32.268213 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.407819 kubelet[2594]: E0413 23:16:32.390816 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.542250 kubelet[2594]: E0413 23:16:32.532231 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.664210 kubelet[2594]: E0413 23:16:32.660843 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.782938 kubelet[2594]: E0413 23:16:32.777966 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:32.902688 kubelet[2594]: E0413 23:16:32.900344 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.013063 kubelet[2594]: E0413 23:16:33.006834 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.165354 kubelet[2594]: E0413 23:16:33.162537 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.294850 kubelet[2594]: E0413 23:16:33.294592 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.455261 kubelet[2594]: E0413 23:16:33.450522 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.501515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 13 23:16:33.585340 kubelet[2594]: E0413 23:16:33.566362 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.713812 kubelet[2594]: E0413 23:16:33.704779 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:33.745237 kubelet[2594]: E0413 23:16:33.744195 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="6.4s"
Apr 13 23:16:33.756402 kubelet[2594]: E0413 23:16:33.741331 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 13 23:16:33.900044 kubelet[2594]: E0413 23:16:33.892189 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:34.092261 kubelet[2594]: E0413 23:16:34.090764 2594 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 23:16:34.103367 kubelet[2594]: E0413 23:16:34.091638 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:34.200919 kubelet[2594]: I0413 23:16:34.196841 2594 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 23:16:34.221295 kubelet[2594]: I0413 23:16:34.209761 2594 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 23:16:34.231260 kubelet[2594]: E0413 23:16:34.220849 2594 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 13 23:16:34.243747 kubelet[2594]: I0413 23:16:34.243653 2594 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 23:16:34.705128 kubelet[2594]: E0413 23:16:34.703454 2594 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 23:16:34.741111 kubelet[2594]: E0413 23:16:34.724850 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 13 23:16:34.808619 kubelet[2594]: I0413 23:16:34.788255 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 13 23:16:34.997338 kubelet[2594]: E0413 23:16:34.989046 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Apr 13 23:16:34.997338 kubelet[2594]: I0413 23:16:34.989947 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b7943798e53a2fbe77bb8405c2b7b02-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b7943798e53a2fbe77bb8405c2b7b02\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:16:35.106650 kubelet[2594]: I0413 23:16:35.106299 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b7943798e53a2fbe77bb8405c2b7b02-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b7943798e53a2fbe77bb8405c2b7b02\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:16:35.192355 kubelet[2594]: I0413 23:16:35.191822 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b7943798e53a2fbe77bb8405c2b7b02-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b7943798e53a2fbe77bb8405c2b7b02\") " pod="kube-system/kube-apiserver-localhost"
Apr 13 23:16:35.299335 kubelet[2594]: I0413 23:16:35.297368 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 13 23:16:35.389066 kubelet[2594]: E0413
23:16:35.387257 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:16:35.469603 kubelet[2594]: I0413 23:16:35.468032 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:35.489311 kubelet[2594]: I0413 23:16:35.485268 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:35.505272 kubelet[2594]: I0413 23:16:35.493604 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:35.505272 kubelet[2594]: I0413 23:16:35.493636 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:35.505272 kubelet[2594]: I0413 23:16:35.493702 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:16:35.685399 kubelet[2594]: E0413 23:16:35.680554 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:16:35.713286 kubelet[2594]: I0413 23:16:35.711261 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 13 23:16:36.147227 kubelet[2594]: E0413 23:16:36.146633 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:16:36.168155 kubelet[2594]: E0413 23:16:36.146615 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:16:36.187794 kubelet[2594]: I0413 23:16:36.174274 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:36.245455 
kubelet[2594]: E0413 23:16:36.218703 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:16:36.290148 kubelet[2594]: E0413 23:16:36.289184 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:16:37.412217 kubelet[2594]: I0413 23:16:37.407773 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:37.419320 systemd[1]: Created slice kubepods-burstable-pod3b7943798e53a2fbe77bb8405c2b7b02.slice - libcontainer container kubepods-burstable-pod3b7943798e53a2fbe77bb8405c2b7b02.slice. Apr 13 23:16:37.462376 kubelet[2594]: E0413 23:16:37.457968 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:16:38.073713 kubelet[2594]: E0413 23:16:38.073505 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:16:38.365701 kubelet[2594]: E0413 23:16:38.329705 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:16:38.892552 systemd[1]: Created slice kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice - libcontainer container kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice. 
Apr 13 23:16:39.011131 containerd[1467]: time="2026-04-13T23:16:39.006621180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b7943798e53a2fbe77bb8405c2b7b02,Namespace:kube-system,Attempt:0,}" Apr 13 23:16:39.586823 kubelet[2594]: E0413 23:16:39.586418 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:16:39.611600 kubelet[2594]: E0413 23:16:39.592730 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:16:39.699526 kubelet[2594]: E0413 23:16:39.698311 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:16:39.751247 kubelet[2594]: I0413 23:16:39.749435 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:39.806937 kubelet[2594]: E0413 23:16:39.789132 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:16:39.968750 containerd[1467]: time="2026-04-13T23:16:39.962170082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:82faa9ca0765979bc0118d46e6420ed8,Namespace:kube-system,Attempt:0,}" Apr 13 23:16:40.202552 kubelet[2594]: E0413 23:16:40.202171 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:16:40.241642 systemd[1]: Created slice kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice - libcontainer container kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice. Apr 13 23:16:40.965844 kubelet[2594]: E0413 23:16:40.965315 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:16:41.302294 kubelet[2594]: E0413 23:16:41.300175 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:16:41.580593 containerd[1467]: time="2026-04-13T23:16:41.575775299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:66a243c17a59d09458bf3b09d66260f5,Namespace:kube-system,Attempt:0,}" Apr 13 23:16:43.469687 kubelet[2594]: I0413 23:16:43.467417 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:43.690846 kubelet[2594]: E0413 23:16:43.664846 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:16:44.004352 kubelet[2594]: E0413 23:16:43.961202 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:16:44.748848 kubelet[2594]: E0413 23:16:44.748266 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:16:45.718294 kubelet[2594]: E0413 23:16:45.711384 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:16:45.898553 kubelet[2594]: E0413 23:16:45.898190 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:16:46.119837 kubelet[2594]: E0413 23:16:46.105570 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:16:47.029240 kubelet[2594]: E0413 23:16:47.028218 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:16:47.192228 containerd[1467]: time="2026-04-13T23:16:47.173265530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:16:47.201325 containerd[1467]: time="2026-04-13T23:16:47.175327166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 13 23:16:47.336291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022055459.mount: Deactivated successfully. Apr 13 23:16:47.585257 kubelet[2594]: E0413 23:16:47.562380 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:16:47.705480 containerd[1467]: time="2026-04-13T23:16:47.687312449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:16:47.776610 containerd[1467]: time="2026-04-13T23:16:47.772351241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 23:16:48.177451 containerd[1467]: time="2026-04-13T23:16:48.175362521Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:16:50.292601 kubelet[2594]: I0413 23:16:50.291505 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:50.334502 kubelet[2594]: E0413 23:16:50.332294 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: 
connection refused" node="localhost" Apr 13 23:16:50.348233 containerd[1467]: time="2026-04-13T23:16:50.343771280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:16:52.617601 containerd[1467]: time="2026-04-13T23:16:52.615466867Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:16:53.409390 containerd[1467]: time="2026-04-13T23:16:53.397695871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 14.341855685s" Apr 13 23:16:53.673696 containerd[1467]: time="2026-04-13T23:16:53.669531276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 12.082655535s" Apr 13 23:16:53.905495 containerd[1467]: time="2026-04-13T23:16:53.891516966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 13.821912891s" Apr 13 23:16:54.278302 kubelet[2594]: E0413 23:16:54.264945 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:16:54.487674 containerd[1467]: time="2026-04-13T23:16:54.485756643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 23:16:54.786531 kubelet[2594]: E0413 23:16:54.696469 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:16:54.786531 kubelet[2594]: E0413 23:16:54.799615 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:16:56.788119 kubelet[2594]: E0413 23:16:56.786723 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:16:56.788119 kubelet[2594]: E0413 23:16:56.787240 2594 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:16:57.587429 kubelet[2594]: I0413 23:16:57.585418 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:16:57.645298 kubelet[2594]: E0413 23:16:57.642340 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:16:57.712425 containerd[1467]: time="2026-04-13T23:16:57.704150420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:16:57.712425 containerd[1467]: time="2026-04-13T23:16:57.705851594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:16:57.712425 containerd[1467]: time="2026-04-13T23:16:57.708618026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:16:57.971616 containerd[1467]: time="2026-04-13T23:16:57.948841326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:16:57.990220 containerd[1467]: time="2026-04-13T23:16:57.973805393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:16:57.990220 containerd[1467]: time="2026-04-13T23:16:57.975260962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:16:57.990220 containerd[1467]: time="2026-04-13T23:16:57.977984234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:16:58.099198 containerd[1467]: time="2026-04-13T23:16:58.083484743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:16:59.044795 containerd[1467]: time="2026-04-13T23:16:58.992818919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:16:59.075429 containerd[1467]: time="2026-04-13T23:16:59.070320507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:16:59.212699 containerd[1467]: time="2026-04-13T23:16:59.142767013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:16:59.282201 containerd[1467]: time="2026-04-13T23:16:59.278471457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:17:01.976405 kubelet[2594]: E0413 23:17:01.971794 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:02.460418 kubelet[2594]: E0413 23:17:02.455084 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:17:03.209534 systemd[1]: Started cri-containerd-3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750.scope - libcontainer container 3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750. Apr 13 23:17:03.575424 systemd[1]: Started cri-containerd-6074b44e58859132531ca8cd5a42b5b8d5beffb410b58ea4f0d3f28a413306bf.scope - libcontainer container 6074b44e58859132531ca8cd5a42b5b8d5beffb410b58ea4f0d3f28a413306bf. Apr 13 23:17:03.874689 systemd[1]: Started cri-containerd-8bac472e4e244abf32bc235cf5cd32acc55e4106939eb4c4a6a95cb7a7785c9f.scope - libcontainer container 8bac472e4e244abf32bc235cf5cd32acc55e4106939eb4c4a6a95cb7a7785c9f. 
Apr 13 23:17:04.317541 kubelet[2594]: E0413 23:17:04.312374 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:17:05.172135 kubelet[2594]: E0413 23:17:05.170833 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:17:05.763419 kubelet[2594]: E0413 23:17:05.762622 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:17:05.812818 kubelet[2594]: I0413 23:17:05.794697 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:17:06.002684 kubelet[2594]: E0413 23:17:06.000070 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:17:07.117667 containerd[1467]: time="2026-04-13T23:17:07.114769204Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:82faa9ca0765979bc0118d46e6420ed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750\"" Apr 13 23:17:07.318186 containerd[1467]: time="2026-04-13T23:17:07.311810766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:66a243c17a59d09458bf3b09d66260f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bac472e4e244abf32bc235cf5cd32acc55e4106939eb4c4a6a95cb7a7785c9f\"" Apr 13 23:17:07.602698 containerd[1467]: time="2026-04-13T23:17:07.600284559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b7943798e53a2fbe77bb8405c2b7b02,Namespace:kube-system,Attempt:0,} returns sandbox id \"6074b44e58859132531ca8cd5a42b5b8d5beffb410b58ea4f0d3f28a413306bf\"" Apr 13 23:17:07.689766 kubelet[2594]: E0413 23:17:07.688796 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:07.702700 kubelet[2594]: E0413 23:17:07.688803 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:07.848107 kubelet[2594]: E0413 23:17:07.847272 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:08.529437 containerd[1467]: time="2026-04-13T23:17:08.513331616Z" level=info msg="CreateContainer within sandbox \"6074b44e58859132531ca8cd5a42b5b8d5beffb410b58ea4f0d3f28a413306bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 23:17:08.557754 containerd[1467]: time="2026-04-13T23:17:08.517118352Z" level=info msg="CreateContainer within sandbox 
\"3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 23:17:08.603400 kubelet[2594]: E0413 23:17:08.564459 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:17:08.829179 containerd[1467]: time="2026-04-13T23:17:08.811619339Z" level=info msg="CreateContainer within sandbox \"8bac472e4e244abf32bc235cf5cd32acc55e4106939eb4c4a6a95cb7a7785c9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 23:17:09.292102 kubelet[2594]: E0413 23:17:09.183541 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:17:09.341792 kubelet[2594]: E0413 23:17:09.300284 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:10.856966 containerd[1467]: time="2026-04-13T23:17:10.851778107Z" level=info msg="CreateContainer within sandbox \"3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71\"" Apr 13 23:17:10.948477 containerd[1467]: time="2026-04-13T23:17:10.941053401Z" level=info msg="CreateContainer within sandbox 
\"6074b44e58859132531ca8cd5a42b5b8d5beffb410b58ea4f0d3f28a413306bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"934a80bfa8a2b9b3d9accc7d1e0d6710ded79a89696df0312678fbdd57d979ff\"" Apr 13 23:17:11.190666 containerd[1467]: time="2026-04-13T23:17:11.178832094Z" level=info msg="CreateContainer within sandbox \"8bac472e4e244abf32bc235cf5cd32acc55e4106939eb4c4a6a95cb7a7785c9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2bc65ab1f02feecef2922d7be36a9101b0c542efe6fb8e58c45685b12c5414dc\"" Apr 13 23:17:11.219037 containerd[1467]: time="2026-04-13T23:17:11.213235232Z" level=info msg="StartContainer for \"e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71\"" Apr 13 23:17:11.263378 containerd[1467]: time="2026-04-13T23:17:11.262575969Z" level=info msg="StartContainer for \"934a80bfa8a2b9b3d9accc7d1e0d6710ded79a89696df0312678fbdd57d979ff\"" Apr 13 23:17:11.289446 containerd[1467]: time="2026-04-13T23:17:11.286357400Z" level=info msg="StartContainer for \"2bc65ab1f02feecef2922d7be36a9101b0c542efe6fb8e58c45685b12c5414dc\"" Apr 13 23:17:14.493413 kubelet[2594]: I0413 23:17:14.492073 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:17:14.518407 kubelet[2594]: E0413 23:17:14.497196 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC 
m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:17:14.613225 kubelet[2594]: E0413 23:17:14.612735 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:17:15.325467 kubelet[2594]: E0413 23:17:15.303737 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:17:16.481295 kubelet[2594]: E0413 23:17:16.480440 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:16.667445 systemd[1]: Started cri-containerd-2bc65ab1f02feecef2922d7be36a9101b0c542efe6fb8e58c45685b12c5414dc.scope - libcontainer container 2bc65ab1f02feecef2922d7be36a9101b0c542efe6fb8e58c45685b12c5414dc. Apr 13 23:17:16.902013 systemd[1]: Started cri-containerd-934a80bfa8a2b9b3d9accc7d1e0d6710ded79a89696df0312678fbdd57d979ff.scope - libcontainer container 934a80bfa8a2b9b3d9accc7d1e0d6710ded79a89696df0312678fbdd57d979ff. Apr 13 23:17:17.216679 systemd[1]: Started cri-containerd-e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71.scope - libcontainer container e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71. 
Apr 13 23:17:21.609428 containerd[1467]: time="2026-04-13T23:17:21.590661282Z" level=info msg="StartContainer for \"934a80bfa8a2b9b3d9accc7d1e0d6710ded79a89696df0312678fbdd57d979ff\" returns successfully" Apr 13 23:17:21.710121 containerd[1467]: time="2026-04-13T23:17:21.685473438Z" level=info msg="StartContainer for \"2bc65ab1f02feecef2922d7be36a9101b0c542efe6fb8e58c45685b12c5414dc\" returns successfully" Apr 13 23:17:25.014116 kubelet[2594]: E0413 23:17:24.997665 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:25.786326 containerd[1467]: time="2026-04-13T23:17:25.778780694Z" level=info msg="StartContainer for \"e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71\" returns successfully" Apr 13 23:17:26.291547 kubelet[2594]: E0413 23:17:26.256429 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:17:26.537890 kubelet[2594]: E0413 23:17:26.237824 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:17:28.058243 kubelet[2594]: I0413 23:17:28.056808 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:17:29.228078 kubelet[2594]: E0413 23:17:29.227449 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:17:30.974151 kubelet[2594]: E0413 23:17:30.954849 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:17:32.398446 kubelet[2594]: E0413 23:17:32.397148 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:36.704220 kubelet[2594]: E0413 23:17:36.601805 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:17:36.801496 kubelet[2594]: E0413 23:17:36.794808 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:17:37.715344 kubelet[2594]: E0413 23:17:37.599808 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:37.871123 kubelet[2594]: I0413 23:17:37.790833 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:17:38.072034 kubelet[2594]: E0413 23:17:38.063271 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:38.072034 kubelet[2594]: E0413 23:17:38.071256 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:17:39.498317 kubelet[2594]: E0413 23:17:39.497348 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:39.728277 kubelet[2594]: E0413 23:17:39.703033 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:17:39.756917 kubelet[2594]: E0413 23:17:39.749575 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:40.246448 kubelet[2594]: E0413 23:17:40.099561 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:42.531375 kubelet[2594]: E0413 23:17:42.528489 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:42.670350 kubelet[2594]: E0413 23:17:42.610488 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:44.234777 kubelet[2594]: E0413 23:17:44.233438 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:44.480323 kubelet[2594]: E0413 23:17:44.473206 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:44.488430 kubelet[2594]: E0413 23:17:44.475545 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:44.488430 kubelet[2594]: E0413 23:17:44.486541 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:44.512060 kubelet[2594]: E0413 23:17:44.511289 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:44.801590 kubelet[2594]: E0413 
23:17:44.801182 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:46.807539 kubelet[2594]: E0413 23:17:46.802912 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:17:46.807539 kubelet[2594]: E0413 23:17:46.804291 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:17:47.707420 kubelet[2594]: I0413 23:17:47.707022 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:17:47.733942 kubelet[2594]: E0413 23:17:47.730008 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:47.735214 kubelet[2594]: E0413 23:17:47.731830 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="7s" Apr 13 23:17:47.736901 kubelet[2594]: E0413 
23:17:47.735621 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Apr 13 23:17:47.736901 kubelet[2594]: E0413 23:17:47.735687 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:49.588438 kubelet[2594]: E0413 23:17:49.587465 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:49.811784 kubelet[2594]: E0413 23:17:49.807053 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:53.138814 kubelet[2594]: E0413 23:17:53.137115 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:53.177144 kubelet[2594]: E0413 23:17:53.176122 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:54.975541 kubelet[2594]: I0413 23:17:54.974562 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:17:56.619369 kubelet[2594]: E0413 23:17:56.611627 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:17:56.777364 kubelet[2594]: E0413 23:17:56.775070 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:17:56.854855 kubelet[2594]: E0413 
23:17:56.844618 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:17:58.293279 kubelet[2594]: E0413 23:17:58.291723 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:17:59.826773 kubelet[2594]: E0413 23:17:59.818737 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:18:04.788526 kubelet[2594]: E0413 23:18:04.785842 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:18:05.113461 kubelet[2594]: E0413 23:18:05.109816 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:18:05.186299 kubelet[2594]: E0413 23:18:05.179342 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:18:06.877953 kubelet[2594]: E0413 23:18:06.876608 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 
23:18:07.012301 kubelet[2594]: E0413 23:18:06.998545 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:18:11.197639 kubelet[2594]: E0413 23:18:11.196415 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:18:12.854121 kubelet[2594]: I0413 23:18:12.853449 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:18:16.902471 kubelet[2594]: E0413 23:18:16.897331 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:18:21.887626 kubelet[2594]: E0413 23:18:21.878847 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:18:22.996006 kubelet[2594]: E0413 23:18:22.985707 2594 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:18:26.964662 kubelet[2594]: E0413 23:18:26.962580 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:18:27.330247 kubelet[2594]: E0413 23:18:27.307458 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:18:27.785828 kubelet[2594]: E0413 23:18:27.745146 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:18:27.882823 kubelet[2594]: E0413 23:18:27.877574 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:18:30.653640 kubelet[2594]: I0413 23:18:30.653232 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:18:37.142807 kubelet[2594]: E0413 23:18:37.125840 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Apr 13 23:18:39.174569 kubelet[2594]: E0413 23:18:39.172116 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:18:40.798782 kubelet[2594]: E0413 23:18:40.798017 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:18:43.188935 kubelet[2594]: E0413 23:18:43.184720 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:18:45.632359 kubelet[2594]: E0413 23:18:45.629657 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:18:46.369542 kubelet[2594]: E0413 23:18:46.365689 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:18:47.276954 kubelet[2594]: E0413 23:18:47.270799 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:18:47.594779 kubelet[2594]: E0413 23:18:47.576767 2594 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:18:47.594779 kubelet[2594]: E0413 23:18:47.585346 2594 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a60db7fbafb0f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,LastTimestamp:2026-04-13 23:16:26.897010928 +0000 UTC m=+11.252171675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:18:48.243823 kubelet[2594]: I0413 23:18:48.242023 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:18:51.451645 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 13 23:18:53.204713 systemd-tmpfiles[2900]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 13 23:18:53.218209 systemd-tmpfiles[2900]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 23:18:53.269132 systemd-tmpfiles[2900]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 23:18:53.269476 systemd-tmpfiles[2900]: ACLs are not supported, ignoring. Apr 13 23:18:53.269528 systemd-tmpfiles[2900]: ACLs are not supported, ignoring. Apr 13 23:18:53.312683 systemd-tmpfiles[2900]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 23:18:53.312692 systemd-tmpfiles[2900]: Skipping /boot Apr 13 23:18:53.460359 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 13 23:18:53.508853 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 13 23:18:56.419596 kubelet[2594]: E0413 23:18:56.381541 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:18:57.380237 kubelet[2594]: E0413 23:18:57.362741 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:18:57.951321 kubelet[2594]: E0413 23:18:57.893829 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 
23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:18:58.486415 kubelet[2594]: E0413 23:18:58.385811 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:18:59.587276 kubelet[2594]: E0413 23:18:59.581360 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:18:59.884185 kubelet[2594]: E0413 23:18:59.880343 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:19:02.290632 kubelet[2594]: E0413 23:19:02.289717 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:19:04.344799 kubelet[2594]: E0413 23:19:04.343340 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:19:06.066326 kubelet[2594]: I0413 23:19:06.065816 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:19:07.440413 kubelet[2594]: E0413 23:19:07.405706 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Apr 13 23:19:09.179438 kubelet[2594]: E0413 23:19:09.154169 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:19:11.385547 kubelet[2594]: E0413 23:19:11.384688 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:19:11.536608 kubelet[2594]: E0413 23:19:11.535894 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:19:13.499315 kubelet[2594]: E0413 23:19:13.498301 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:19:15.865305 kubelet[2594]: E0413 23:19:15.859939 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:19:16.652027 kubelet[2594]: E0413 23:19:16.578933 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:19:17.554610 kubelet[2594]: E0413 23:19:17.549840 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:19:24.163367 kubelet[2594]: I0413 23:19:24.155650 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:19:27.604219 kubelet[2594]: E0413 23:19:27.590087 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:19:30.096562 kubelet[2594]: E0413 23:19:29.962297 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:19:30.564947 kubelet[2594]: E0413 23:19:30.537748 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:19:35.300442 kubelet[2594]: E0413 23:19:35.299332 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:19:36.384057 kubelet[2594]: E0413 23:19:36.316271 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:19:36.764531 kubelet[2594]: E0413 23:19:36.757773 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:19:38.386519 kubelet[2594]: E0413 23:19:38.384197 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:19:43.405286 kubelet[2594]: I0413 23:19:43.380682 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:19:45.289259 kubelet[2594]: E0413 23:19:45.284844 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:19:47.598426 kubelet[2594]: E0413 23:19:47.593226 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:19:48.065434 kubelet[2594]: 
E0413 23:19:47.902806 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:19:48.511458 kubelet[2594]: E0413 23:19:48.489083 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:19:49.584778 kubelet[2594]: E0413 23:19:49.551671 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:19:49.584778 kubelet[2594]: E0413 23:19:49.552164 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:19:50.802717 kubelet[2594]: E0413 23:19:50.713724 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:19:53.626363 kubelet[2594]: E0413 23:19:53.518568 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:19:54.561980 kubelet[2594]: E0413 23:19:54.551423 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:19:58.809086 kubelet[2594]: E0413 23:19:58.648804 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:20:02.822195 kubelet[2594]: I0413 23:20:02.817616 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:20:05.333282 kubelet[2594]: E0413 23:20:05.327255 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:20:09.042990 kubelet[2594]: E0413 23:20:09.028400 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:20:11.001436 kubelet[2594]: E0413 23:20:10.996541 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:20:12.839945 kubelet[2594]: E0413 23:20:12.839358 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:20:12.898376 kubelet[2594]: E0413 23:20:12.895374 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:20:12.930365 kubelet[2594]: E0413 23:20:12.903662 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:20:16.938728 kubelet[2594]: E0413 23:20:16.938126 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:20:17.028835 kubelet[2594]: E0413 23:20:17.020815 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:20:19.465603 kubelet[2594]: E0413 23:20:19.465114 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:20:19.737856 kubelet[2594]: E0413 23:20:19.705849 2594 certificate_manager.go:596] "Failed while 
requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:20:20.319093 kubelet[2594]: I0413 23:20:20.317844 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:20:22.516587 kubelet[2594]: E0413 23:20:22.511759 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:20:30.082764 kubelet[2594]: E0413 23:20:30.017838 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:20:30.846829 kubelet[2594]: E0413 23:20:30.782213 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:20:32.863764 kubelet[2594]: E0413 23:20:32.547705 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC 
m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:20:40.142259 kubelet[2594]: E0413 23:20:40.141700 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:20:40.237165 kubelet[2594]: E0413 23:20:40.142031 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:20:40.237165 kubelet[2594]: E0413 23:20:40.142135 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:20:41.250921 kubelet[2594]: I0413 23:20:41.250404 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:20:42.991521 kubelet[2594]: E0413 23:20:42.984554 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:20:50.301312 kubelet[2594]: E0413 23:20:50.299338 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:20:50.996573 kubelet[2594]: E0413 23:20:50.995078 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:20:51.402597 kubelet[2594]: E0413 23:20:51.400763 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:20:51.993334 kubelet[2594]: E0413 23:20:51.963326 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:20:53.417045 kubelet[2594]: E0413 23:20:53.390762 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:20:55.302292 kubelet[2594]: E0413 23:20:55.216813 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:20:57.885639 kubelet[2594]: E0413 23:20:57.882729 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:20:57.948316 kubelet[2594]: E0413 23:20:57.947799 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:20:58.454804 kubelet[2594]: E0413 23:20:58.453335 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:21:00.455380 kubelet[2594]: E0413 23:21:00.453824 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:21:00.713263 kubelet[2594]: I0413 23:21:00.707088 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:21:10.475685 kubelet[2594]: E0413 23:21:10.474518 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:21:10.853558 kubelet[2594]: E0413 23:21:10.850727 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:21:13.556799 kubelet[2594]: E0413 23:21:13.556402 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" 
event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:21:15.107328 kubelet[2594]: E0413 23:21:15.104793 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 13 23:21:18.132125 kubelet[2594]: I0413 23:21:18.130521 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:21:20.513276 kubelet[2594]: E0413 23:21:20.512419 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:21:23.838472 kubelet[2594]: E0413 23:21:23.814832 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:21:29.288418 kubelet[2594]: E0413 23:21:29.180385 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:21:29.729965 
kubelet[2594]: E0413 23:21:29.723503 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:21:29.758248 kubelet[2594]: E0413 23:21:29.730314 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:21:30.099176 kubelet[2594]: E0413 23:21:30.096556 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:21:30.202006 kubelet[2594]: E0413 23:21:30.201399 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:21:30.737142 kubelet[2594]: E0413 23:21:30.736659 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:21:32.287458 kubelet[2594]: E0413 23:21:32.286155 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:21:33.891372 kubelet[2594]: E0413 23:21:33.880450 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 
23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:21:35.919566 kubelet[2594]: E0413 23:21:35.917849 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 23:21:36.994415 kubelet[2594]: I0413 23:21:36.991555 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:21:40.841320 kubelet[2594]: E0413 23:21:40.831486 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:21:41.752731 kubelet[2594]: E0413 23:21:41.750391 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 23:21:42.704725 kubelet[2594]: E0413 23:21:42.696746 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 23:21:44.295263 kubelet[2594]: E0413 23:21:44.294801 2594 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 23:21:47.071485 kubelet[2594]: E0413 23:21:47.069851 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:21:49.367729 kubelet[2594]: E0413 23:21:49.367005 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:21:50.871816 kubelet[2594]: E0413 23:21:50.871550 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:21:53.983211 kubelet[2594]: E0413 23:21:53.980358 2594 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:21:54.364697 kubelet[2594]: I0413 23:21:54.364409 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:21:55.070850 kubelet[2594]: E0413 23:21:55.070050 2594 certificate_manager.go:596] "Failed while requesting a signed certificate from 
the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 23:22:00.878187 kubelet[2594]: E0413 23:22:00.876809 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:22:04.372077 kubelet[2594]: E0413 23:22:04.371746 2594 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 13 23:22:06.380762 kubelet[2594]: E0413 23:22:06.379972 2594 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 13 23:22:10.882331 kubelet[2594]: E0413 23:22:10.881646 2594 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 13 23:22:11.548629 kubelet[2594]: I0413 23:22:11.548257 2594 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:22:12.667713 kubelet[2594]: E0413 23:22:12.664629 2594 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 13 23:22:12.703378 kubelet[2594]: E0413 23:22:12.701623 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:22:15.320713 kubelet[2594]: E0413 23:22:15.320445 2594 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 13 23:22:15.569393 
kubelet[2594]: E0413 23:22:15.568483 2594 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60db82e74ff2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,LastTimestamp:2026-04-13 23:16:27.748802346 +0000 UTC m=+12.103963130,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:22:15.663248 kubelet[2594]: I0413 23:22:15.657526 2594 apiserver.go:52] "Watching apiserver" Apr 13 23:22:15.869361 kubelet[2594]: I0413 23:22:15.816338 2594 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 13 23:22:15.885837 kubelet[2594]: E0413 23:22:15.869860 2594 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 13 23:22:15.897633 kubelet[2594]: I0413 23:22:15.897223 2594 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 23:22:16.168623 kubelet[2594]: E0413 23:22:16.167456 2594 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60db86787fcbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:28.70634822 +0000 
UTC m=+13.061508970,LastTimestamp:2026-04-13 23:16:28.70634822 +0000 UTC m=+13.061508970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:22:16.221702 kubelet[2594]: I0413 23:22:16.221283 2594 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 23:22:16.643216 kubelet[2594]: I0413 23:22:16.642780 2594 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:22:16.652161 kubelet[2594]: E0413 23:22:16.650826 2594 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a60db8678c30e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-13 23:16:28.706623721 +0000 UTC m=+13.061784470,LastTimestamp:2026-04-13 23:16:28.706623721 +0000 UTC m=+13.061784470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 13 23:22:16.989342 kubelet[2594]: I0413 23:22:16.971524 2594 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 13 23:22:17.116444 kubelet[2594]: E0413 23:22:17.090646 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:22:17.189386 kubelet[2594]: E0413 23:22:17.188984 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:22:17.701558 kubelet[2594]: E0413 23:22:17.701406 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:22:17.923512 kubelet[2594]: E0413 23:22:17.920453 2594 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 13 23:22:19.934456 kubelet[2594]: I0413 23:22:19.932052 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.909162392 podStartE2EDuration="3.909162392s" podCreationTimestamp="2026-04-13 23:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:22:19.50108148 +0000 UTC m=+363.856242225" watchObservedRunningTime="2026-04-13 23:22:19.909162392 +0000 UTC m=+364.264364418" Apr 13 23:22:20.170494 kubelet[2594]: I0413 23:22:20.168800 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.168754888 podStartE2EDuration="4.168754888s" podCreationTimestamp="2026-04-13 23:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:22:19.932481385 +0000 UTC m=+364.287642182" watchObservedRunningTime="2026-04-13 23:22:20.168754888 +0000 UTC m=+364.523915639" Apr 13 23:22:20.501695 kubelet[2594]: E0413 23:22:20.490627 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 23:22:26.155316 kubelet[2594]: E0413 23:22:26.142683 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:22:26.264590 kubelet[2594]: E0413 23:22:26.154818 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.761s"
Apr 13 23:22:27.975596 kubelet[2594]: E0413 23:22:27.974777 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.595s"
Apr 13 23:22:29.541562 kubelet[2594]: E0413 23:22:29.540743 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.168s"
Apr 13 23:22:31.562435 kubelet[2594]: E0413 23:22:31.560225 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:22:31.907285 kubelet[2594]: E0413 23:22:31.905320 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.522s"
Apr 13 23:22:36.430646 kubelet[2594]: E0413 23:22:36.418649 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.037s"
Apr 13 23:22:37.119432 kubelet[2594]: E0413 23:22:37.118698 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:22:38.135857 kubelet[2594]: E0413 23:22:38.134463 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.694s"
Apr 13 23:22:40.580634 kubelet[2594]: E0413 23:22:40.560976 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.186s"
Apr 13 23:22:42.768705 kubelet[2594]: E0413 23:22:42.720416 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.996s"
Apr 13 23:22:42.793551 kubelet[2594]: E0413 23:22:42.777894 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:22:43.886508 kubelet[2594]: E0413 23:22:43.884778 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.072s"
Apr 13 23:22:46.367470 kubelet[2594]: E0413 23:22:46.365430 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.931s"
Apr 13 23:22:48.183187 kubelet[2594]: E0413 23:22:48.146595 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.688s"
Apr 13 23:22:48.385452 kubelet[2594]: E0413 23:22:48.381169 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:22:49.832185 kubelet[2594]: E0413 23:22:49.700701 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.319s"
Apr 13 23:22:53.196800 kubelet[2594]: E0413 23:22:53.195649 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.81s"
Apr 13 23:22:53.810264 kubelet[2594]: E0413 23:22:53.808470 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:22:56.828452 kubelet[2594]: E0413 23:22:56.821014 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.475s"
Apr 13 23:22:58.401083 kubelet[2594]: E0413 23:22:58.397317 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.566s"
Apr 13 23:22:59.268083 kubelet[2594]: E0413 23:22:59.261549 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:00.408172 kubelet[2594]: E0413 23:23:00.402718 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.903s"
Apr 13 23:23:02.250367 kubelet[2594]: E0413 23:23:02.249608 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.836s"
Apr 13 23:23:04.798189 kubelet[2594]: E0413 23:23:04.598615 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.076s"
Apr 13 23:23:05.586854 kubelet[2594]: E0413 23:23:05.583313 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:09.886545 kubelet[2594]: E0413 23:23:09.521448 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.65s"
Apr 13 23:23:11.403294 kubelet[2594]: E0413 23:23:11.288689 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:14.533355 kubelet[2594]: E0413 23:23:14.531420 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.649s"
Apr 13 23:23:17.665367 kubelet[2594]: E0413 23:23:17.660620 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:17.811325 kubelet[2594]: E0413 23:23:17.799460 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.113s"
Apr 13 23:23:19.073911 kubelet[2594]: E0413 23:23:19.073239 2594 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 23:23:19.597144 kubelet[2594]: E0413 23:23:19.595664 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.779s"
Apr 13 23:23:20.133315 kubelet[2594]: E0413 23:23:20.118334 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:23:23.545309 kubelet[2594]: E0413 23:23:23.526655 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:23.611022 kubelet[2594]: E0413 23:23:23.589993 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.146s"
Apr 13 23:23:30.311359 kubelet[2594]: E0413 23:23:30.308112 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:30.350926 kubelet[2594]: E0413 23:23:30.350454 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.681s"
Apr 13 23:23:31.924908 kubelet[2594]: E0413 23:23:31.923998 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:23:32.869187 kubelet[2594]: E0413 23:23:32.851353 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:23:34.094549 kubelet[2594]: E0413 23:23:34.093232 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.47s"
Apr 13 23:23:35.616457 kubelet[2594]: E0413 23:23:35.615495 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:39.486319 kubelet[2594]: E0413 23:23:39.481386 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.077s"
Apr 13 23:23:41.439203 kubelet[2594]: E0413 23:23:41.438438 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.063s"
Apr 13 23:23:41.774130 kubelet[2594]: E0413 23:23:41.765334 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:44.485603 kubelet[2594]: E0413 23:23:44.474587 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.094s"
Apr 13 23:23:46.305492 kubelet[2594]: E0413 23:23:46.284622 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.81s"
Apr 13 23:23:47.742366 kubelet[2594]: E0413 23:23:47.730285 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:48.864969 kubelet[2594]: E0413 23:23:48.835336 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.432s"
Apr 13 23:23:51.042380 kubelet[2594]: E0413 23:23:51.037683 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.184s"
Apr 13 23:23:53.874714 kubelet[2594]: E0413 23:23:53.869581 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.785s"
Apr 13 23:23:53.894454 kubelet[2594]: E0413 23:23:53.808780 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:23:55.692163 kubelet[2594]: E0413 23:23:55.684403 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.782s"
Apr 13 23:23:58.390477 kubelet[2594]: E0413 23:23:58.363753 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.672s"
Apr 13 23:24:00.155326 kubelet[2594]: E0413 23:24:00.150453 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:03.389546 kubelet[2594]: E0413 23:24:03.385009 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.519s"
Apr 13 23:24:09.047227 kubelet[2594]: E0413 23:24:09.015712 2594 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 13 23:24:12.147683 kubelet[2594]: E0413 23:24:12.133830 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:12.932069 kubelet[2594]: E0413 23:24:12.922197 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.485s"
Apr 13 23:24:18.809561 kubelet[2594]: E0413 23:24:18.592823 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:21.541771 kubelet[2594]: E0413 23:24:21.537547 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.553s"
Apr 13 23:24:24.653734 kubelet[2594]: E0413 23:24:24.640612 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.103s"
Apr 13 23:24:24.772305 kubelet[2594]: E0413 23:24:24.642300 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:25.872580 kubelet[2594]: E0413 23:24:25.822409 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.17s"
Apr 13 23:24:26.474591 kubelet[2594]: E0413 23:24:26.317504 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:24:30.887684 kubelet[2594]: E0413 23:24:30.616448 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:32.283473 kubelet[2594]: E0413 23:24:32.282358 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.849s"
Apr 13 23:24:34.348443 kubelet[2594]: E0413 23:24:34.347360 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.77s"
Apr 13 23:24:34.613644 kubelet[2594]: E0413 23:24:34.606082 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:24:35.640323 kubelet[2594]: E0413 23:24:35.633737 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.218s"
Apr 13 23:24:37.386336 kubelet[2594]: E0413 23:24:37.145828 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:39.617002 kubelet[2594]: E0413 23:24:39.616527 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.912s"
Apr 13 23:24:42.536942 kubelet[2594]: E0413 23:24:42.536635 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.755s"
Apr 13 23:24:43.078680 kubelet[2594]: E0413 23:24:43.063411 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:24:43.557231 kubelet[2594]: E0413 23:24:43.521967 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:43.821831 kubelet[2594]: E0413 23:24:43.819530 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.214s"
Apr 13 23:24:48.608502 kubelet[2594]: E0413 23:24:48.604568 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.207s"
Apr 13 23:24:49.175335 kubelet[2594]: E0413 23:24:49.174717 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:49.881757 kubelet[2594]: E0413 23:24:49.880493 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.201s"
Apr 13 23:24:54.000465 kubelet[2594]: E0413 23:24:53.966676 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.591s"
Apr 13 23:24:56.359272 kubelet[2594]: E0413 23:24:56.358900 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:24:56.903661 kubelet[2594]: E0413 23:24:56.902214 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.548s"
Apr 13 23:25:03.011690 kubelet[2594]: E0413 23:25:02.986070 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:03.399400 kubelet[2594]: E0413 23:25:03.385702 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.474s"
Apr 13 23:25:09.217488 kubelet[2594]: E0413 23:25:09.217259 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:09.247530 kubelet[2594]: E0413 23:25:09.247303 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.861s"
Apr 13 23:25:14.020482 kubelet[2594]: E0413 23:25:14.005322 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.624s"
Apr 13 23:25:14.394571 kubelet[2594]: E0413 23:25:14.386418 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:15.503058 kubelet[2594]: E0413 23:25:15.502637 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.128s"
Apr 13 23:25:18.332396 kubelet[2594]: E0413 23:25:18.313040 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.903s"
Apr 13 23:25:19.916343 kubelet[2594]: E0413 23:25:19.915090 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:23.514254 kubelet[2594]: E0413 23:25:23.512395 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.119s"
Apr 13 23:25:25.301298 kubelet[2594]: E0413 23:25:25.291819 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:25.542767 kubelet[2594]: E0413 23:25:25.541545 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.157s"
Apr 13 23:25:30.707113 kubelet[2594]: E0413 23:25:30.702909 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:36.262274 kubelet[2594]: E0413 23:25:36.215590 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:41.406599 kubelet[2594]: E0413 23:25:41.406121 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:41.689082 kubelet[2594]: E0413 23:25:41.688198 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.303s"
Apr 13 23:25:45.023690 kubelet[2594]: E0413 23:25:45.023379 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:25:46.484299 kubelet[2594]: E0413 23:25:46.423366 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:25:46.704381 kubelet[2594]: E0413 23:25:46.703784 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:52.716573 kubelet[2594]: E0413 23:25:52.712693 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:52.908118 kubelet[2594]: E0413 23:25:52.906284 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:25:53.699192 kubelet[2594]: E0413 23:25:53.696285 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.142s"
Apr 13 23:25:53.871301 systemd[1]: cri-containerd-e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71.scope: Deactivated successfully.
Apr 13 23:25:53.884483 systemd[1]: cri-containerd-e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71.scope: Consumed 53.834s CPU time.
Apr 13 23:25:57.014540 kubelet[2594]: E0413 23:25:57.013929 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.558s"
Apr 13 23:25:59.220504 kubelet[2594]: E0413 23:25:59.206348 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:25:59.601018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71-rootfs.mount: Deactivated successfully.
Apr 13 23:26:00.294106 containerd[1467]: time="2026-04-13T23:26:00.281547644Z" level=info msg="shim disconnected" id=e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71 namespace=k8s.io
Apr 13 23:26:00.294106 containerd[1467]: time="2026-04-13T23:26:00.287453575Z" level=warning msg="cleaning up after shim disconnected" id=e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71 namespace=k8s.io
Apr 13 23:26:00.294106 containerd[1467]: time="2026-04-13T23:26:00.287631512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 23:26:01.901285 kubelet[2594]: E0413 23:26:01.899358 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.307s"
Apr 13 23:26:03.009257 kubelet[2594]: I0413 23:26:03.008095 2594 scope.go:117] "RemoveContainer" containerID="e5c717aa0d43e4f26344e7e6e8ec37f636eba21b26c3e9fa88adbfcc3c364c71"
Apr 13 23:26:03.016989 kubelet[2594]: E0413 23:26:03.010193 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:03.324778 containerd[1467]: time="2026-04-13T23:26:03.321418716Z" level=info msg="CreateContainer within sandbox \"3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 23:26:03.598601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974043386.mount: Deactivated successfully.
Apr 13 23:26:03.727048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112780138.mount: Deactivated successfully.
Apr 13 23:26:03.741978 kubelet[2594]: I0413 23:26:03.739029 2594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=226.738430517 podStartE2EDuration="3m46.738430517s" podCreationTimestamp="2026-04-13 23:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:22:20.180372172 +0000 UTC m=+364.535532908" watchObservedRunningTime="2026-04-13 23:26:03.738430517 +0000 UTC m=+588.093591264"
Apr 13 23:26:03.913475 containerd[1467]: time="2026-04-13T23:26:03.905303346Z" level=info msg="CreateContainer within sandbox \"3f178d89e85baa3fb4759fcf4202b16fbc10472d689b31ccefe4360053ec3750\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"12a6f3ad79b5990855b73744577f39516b67893c53f0c22c77c3c3fa47fa416b\""
Apr 13 23:26:04.182801 containerd[1467]: time="2026-04-13T23:26:04.166709134Z" level=info msg="StartContainer for \"12a6f3ad79b5990855b73744577f39516b67893c53f0c22c77c3c3fa47fa416b\""
Apr 13 23:26:04.308487 kubelet[2594]: E0413 23:26:04.308317 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:05.169177 systemd[1]: Started cri-containerd-12a6f3ad79b5990855b73744577f39516b67893c53f0c22c77c3c3fa47fa416b.scope - libcontainer container 12a6f3ad79b5990855b73744577f39516b67893c53f0c22c77c3c3fa47fa416b.
Apr 13 23:26:06.716492 containerd[1467]: time="2026-04-13T23:26:06.708364940Z" level=info msg="StartContainer for \"12a6f3ad79b5990855b73744577f39516b67893c53f0c22c77c3c3fa47fa416b\" returns successfully"
Apr 13 23:26:09.487565 kubelet[2594]: E0413 23:26:09.486413 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.064s"
Apr 13 23:26:09.881543 kubelet[2594]: E0413 23:26:09.878282 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:09.881543 kubelet[2594]: E0413 23:26:09.878541 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:10.639309 kubelet[2594]: E0413 23:26:10.628599 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:11.583285 kubelet[2594]: E0413 23:26:11.581801 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s"
Apr 13 23:26:13.501242 kubelet[2594]: E0413 23:26:13.493362 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s"
Apr 13 23:26:15.412205 kubelet[2594]: E0413 23:26:15.409498 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:17.864294 kubelet[2594]: E0413 23:26:17.850775 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.444s"
Apr 13 23:26:20.796312 kubelet[2594]: E0413 23:26:20.795930 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:24.690400 kubelet[2594]: E0413 23:26:24.679104 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:26.048754 kubelet[2594]: E0413 23:26:26.047552 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:26.273912 kubelet[2594]: E0413 23:26:26.273173 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:29.218047 kubelet[2594]: E0413 23:26:29.217079 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:32.098122 kubelet[2594]: E0413 23:26:32.094213 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:32.098122 kubelet[2594]: E0413 23:26:32.095990 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.714s"
Apr 13 23:26:32.098122 kubelet[2594]: E0413 23:26:32.097619 2594 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 13 23:26:33.473562 kubelet[2594]: E0413 23:26:33.469816 2594 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.065s"
Apr 13 23:26:34.863902 systemd[1]: Reloading requested from client PID 2998 ('systemctl') (unit session-7.scope)...
Apr 13 23:26:34.863925 systemd[1]: Reloading...
Apr 13 23:26:36.219022 zram_generator::config[3037]: No configuration found.
Apr 13 23:26:37.197491 kubelet[2594]: E0413 23:26:37.197409 2594 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 23:26:37.690984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:26:39.235538 systemd[1]: Reloading finished in 4371 ms.
Apr 13 23:26:39.572856 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:26:39.585021 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 23:26:39.585391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:26:39.585735 systemd[1]: kubelet.service: Consumed 6min 11.056s CPU time, 140.3M memory peak, 0B memory swap peak.
Apr 13 23:26:39.598589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:26:41.156303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:26:41.183701 (kubelet)[3082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 23:26:42.462801 kubelet[3082]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 23:26:42.462801 kubelet[3082]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 23:26:42.474402 kubelet[3082]: I0413 23:26:42.463370 3082 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 23:26:42.683282 kubelet[3082]: I0413 23:26:42.682150 3082 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 13 23:26:42.694015 kubelet[3082]: I0413 23:26:42.684446 3082 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 23:26:42.694015 kubelet[3082]: I0413 23:26:42.685216 3082 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 23:26:42.694015 kubelet[3082]: I0413 23:26:42.685323 3082 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 23:26:42.724059 kubelet[3082]: I0413 23:26:42.696220 3082 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 23:26:42.805237 kubelet[3082]: I0413 23:26:42.733420 3082 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 13 23:26:43.078562 kubelet[3082]: I0413 23:26:43.077821 3082 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 23:26:43.417449 kubelet[3082]: E0413 23:26:43.413977 3082 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 23:26:43.417449 kubelet[3082]: I0413 23:26:43.414434 3082 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 23:26:43.598526 kubelet[3082]: I0413 23:26:43.598154 3082 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 23:26:43.601320 kubelet[3082]: I0413 23:26:43.601100 3082 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 23:26:43.601854 kubelet[3082]: I0413 23:26:43.601311 3082 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 23:26:43.604015 kubelet[3082]: I0413 23:26:43.601926 3082 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 23:26:43.604015 kubelet[3082]: I0413 23:26:43.601938 3082 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 23:26:43.604015 kubelet[3082]: I0413 23:26:43.603927 3082 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 23:26:43.626827 kubelet[3082]: I0413 23:26:43.625045 3082 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 23:26:43.714075 kubelet[3082]: I0413 23:26:43.711205 3082 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 23:26:43.720703 kubelet[3082]: I0413 23:26:43.714292 3082 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 23:26:43.723137 kubelet[3082]: I0413 23:26:43.722951 3082 kubelet.go:387] "Adding apiserver pod source"
Apr 13 23:26:43.723375 kubelet[3082]: I0413 23:26:43.723330 3082 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 23:26:43.897085 kubelet[3082]: I0413 23:26:43.896387 3082 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 23:26:44.191551 kubelet[3082]: I0413 23:26:44.189933 3082 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 23:26:44.214658 kubelet[3082]: I0413 23:26:44.192688 3082 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 23:26:44.348125 kubelet[3082]: I0413 23:26:44.345564 3082 server.go:1262] "Started kubelet"
Apr 13 23:26:44.376710 kubelet[3082]: I0413 23:26:44.352116 3082 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 23:26:44.376710 kubelet[3082]: I0413 23:26:44.357806 3082 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 23:26:44.376710 kubelet[3082]: I0413 23:26:44.368573 3082 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 23:26:44.409514 kubelet[3082]: I0413 23:26:44.377756 3082 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 23:26:44.730797 kubelet[3082]: I0413 23:26:44.726749 3082 apiserver.go:52] "Watching apiserver"
Apr 13 23:26:45.732222 kubelet[3082]: I0413 23:26:45.727510 3082 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 23:26:46.009438 kubelet[3082]: I0413 23:26:46.000945 3082 factory.go:223] Registration of the systemd container factory successfully
Apr 13 23:26:46.104169 kubelet[3082]: I0413 23:26:46.100420 3082 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 23:26:46.146474 kubelet[3082]: I0413 23:26:46.017632 3082 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 23:26:46.155203 kubelet[3082]: I0413 23:26:46.119823 3082 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 23:26:46.177077 kubelet[3082]: I0413 23:26:46.146141 3082 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 23:26:46.177077 kubelet[3082]: I0413 23:26:46.130808 3082 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 23:26:46.334797 kubelet[3082]: I0413 23:26:46.334414 3082 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 23:26:46.517116 kubelet[3082]: E0413 23:26:46.514983 3082 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 23:26:46.716145 kubelet[3082]: I0413 23:26:46.714622 3082 factory.go:223] Registration of the containerd container factory successfully
Apr 13 23:26:46.985457 kubelet[3082]: I0413 23:26:46.984041 3082 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 23:26:47.083041 kubelet[3082]: I0413 23:26:47.079346 3082 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 23:26:47.083041 kubelet[3082]: I0413 23:26:47.080966 3082 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 23:26:47.083041 kubelet[3082]: I0413 23:26:47.081687 3082 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 23:26:47.156430 kubelet[3082]: E0413 23:26:47.154654 3082 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:26:47.280583 kubelet[3082]: E0413 23:26:47.279111 3082 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 23:26:47.493030 kubelet[3082]: E0413 23:26:47.492720 3082 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:26:47.895772 kubelet[3082]: E0413 23:26:47.895289 3082 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:26:48.697841 kubelet[3082]: E0413 23:26:48.697269 3082 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 23:26:50.263101 kubelet[3082]: I0413 23:26:50.262811 3082 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 23:26:50.263101 kubelet[3082]: I0413 23:26:50.262835 3082 cpu_manager.go:222] "Reconciling"
reconcilePeriod="10s" Apr 13 23:26:50.263101 kubelet[3082]: I0413 23:26:50.262929 3082 state_mem.go:36] "Initialized new in-memory state store" Apr 13 23:26:50.263101 kubelet[3082]: I0413 23:26:50.263252 3082 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 23:26:50.267801 kubelet[3082]: I0413 23:26:50.263295 3082 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 23:26:50.267801 kubelet[3082]: I0413 23:26:50.263370 3082 policy_none.go:49] "None policy: Start" Apr 13 23:26:50.267801 kubelet[3082]: I0413 23:26:50.263380 3082 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 23:26:50.267801 kubelet[3082]: I0413 23:26:50.263389 3082 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 23:26:50.267801 kubelet[3082]: I0413 23:26:50.263529 3082 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 23:26:50.267801 kubelet[3082]: I0413 23:26:50.263536 3082 policy_none.go:47] "Start" Apr 13 23:26:50.322223 kubelet[3082]: E0413 23:26:50.320711 3082 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 23:26:50.480223 kubelet[3082]: E0413 23:26:50.479805 3082 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 23:26:50.536625 kubelet[3082]: I0413 23:26:50.532208 3082 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 23:26:50.592079 kubelet[3082]: I0413 23:26:50.532552 3082 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 23:26:50.815495 kubelet[3082]: I0413 23:26:50.807165 3082 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 23:26:51.392410 kubelet[3082]: E0413 23:26:51.390561 3082 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 23:26:53.505185 kubelet[3082]: I0413 23:26:53.463639 3082 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 13 23:26:53.681177 kubelet[3082]: I0413 23:26:53.673851 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b7943798e53a2fbe77bb8405c2b7b02-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b7943798e53a2fbe77bb8405c2b7b02\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:26:53.681177 kubelet[3082]: I0413 23:26:53.676770 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b7943798e53a2fbe77bb8405c2b7b02-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b7943798e53a2fbe77bb8405c2b7b02\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:26:53.912408 kubelet[3082]: I0413 23:26:53.909703 3082 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:53.916196 kubelet[3082]: I0413 23:26:53.915667 3082 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 13 23:26:53.918306 kubelet[3082]: I0413 23:26:53.917748 3082 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 23:26:53.928444 kubelet[3082]: I0413 23:26:53.928269 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b7943798e53a2fbe77bb8405c2b7b02-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b7943798e53a2fbe77bb8405c2b7b02\") " pod="kube-system/kube-apiserver-localhost" Apr 13 23:26:54.132295 kubelet[3082]: I0413 23:26:54.130073 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 13 23:26:54.161377 kubelet[3082]: I0413 23:26:54.159556 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:54.212437 kubelet[3082]: I0413 23:26:54.201742 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:54.220825 kubelet[3082]: I0413 23:26:54.211462 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:54.228170 kubelet[3082]: I0413 23:26:54.227432 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:54.228170 kubelet[3082]: I0413 23:26:54.228320 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:54.903482 kubelet[3082]: E0413 23:26:54.788372 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:26:57.015213 kubelet[3082]: I0413 23:26:57.011979 3082 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 13 23:26:57.273230 kubelet[3082]: I0413 23:26:57.258522 3082 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 13 23:26:57.667988 kubelet[3082]: E0413 23:26:57.657968 3082 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 13 23:26:57.889442 kubelet[3082]: E0413 23:26:57.885596 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:26:58.207200 kubelet[3082]: E0413 23:26:58.206933 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:26:59.499366 kubelet[3082]: E0413 23:26:59.475360 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.368s" Apr 13 23:26:59.699513 kubelet[3082]: E0413 23:26:59.695842 3082 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 13 23:26:59.813310 kubelet[3082]: E0413 23:26:59.811290 3082 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:00.794371 kubelet[3082]: E0413 23:27:00.790557 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:01.365629 kubelet[3082]: E0413 23:27:01.365272 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:01.712145 kubelet[3082]: E0413 23:27:01.610192 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.17s" Apr 13 23:27:02.423206 kubelet[3082]: E0413 23:27:02.423057 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:02.605221 kubelet[3082]: E0413 23:27:02.605034 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:04.493170 kubelet[3082]: E0413 23:27:04.490368 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:06.290246 kubelet[3082]: E0413 23:27:06.289914 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.188s" Apr 13 23:27:06.485009 kubelet[3082]: E0413 23:27:06.484346 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:09.428298 kubelet[3082]: E0413 23:27:09.427567 3082 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:10.389156 kubelet[3082]: E0413 23:27:10.376807 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.14s" Apr 13 23:27:10.601418 kubelet[3082]: E0413 23:27:10.601010 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:10.821037 kubelet[3082]: E0413 23:27:10.820824 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:14.300514 kubelet[3082]: E0413 23:27:14.300157 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.205s" Apr 13 23:27:26.132672 kubelet[3082]: E0413 23:27:26.132157 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Apr 13 23:27:37.129318 kubelet[3082]: I0413 23:27:37.127892 3082 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 23:27:37.160843 containerd[1467]: time="2026-04-13T23:27:37.160631550Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 23:27:37.162929 kubelet[3082]: I0413 23:27:37.162440 3082 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 23:27:37.791404 systemd[1]: Created slice kubepods-besteffort-pod6d7954ed_855e_464f_9790_d5133fd9b5a5.slice - libcontainer container kubepods-besteffort-pod6d7954ed_855e_464f_9790_d5133fd9b5a5.slice. 
Apr 13 23:27:37.915079 kubelet[3082]: I0413 23:27:37.914536 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d7954ed-855e-464f-9790-d5133fd9b5a5-kube-proxy\") pod \"kube-proxy-hvhq8\" (UID: \"6d7954ed-855e-464f-9790-d5133fd9b5a5\") " pod="kube-system/kube-proxy-hvhq8" Apr 13 23:27:37.915079 kubelet[3082]: I0413 23:27:37.914603 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d7954ed-855e-464f-9790-d5133fd9b5a5-xtables-lock\") pod \"kube-proxy-hvhq8\" (UID: \"6d7954ed-855e-464f-9790-d5133fd9b5a5\") " pod="kube-system/kube-proxy-hvhq8" Apr 13 23:27:37.915079 kubelet[3082]: I0413 23:27:37.914614 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggkd\" (UniqueName: \"kubernetes.io/projected/6d7954ed-855e-464f-9790-d5133fd9b5a5-kube-api-access-tggkd\") pod \"kube-proxy-hvhq8\" (UID: \"6d7954ed-855e-464f-9790-d5133fd9b5a5\") " pod="kube-system/kube-proxy-hvhq8" Apr 13 23:27:37.915079 kubelet[3082]: I0413 23:27:37.914682 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d7954ed-855e-464f-9790-d5133fd9b5a5-lib-modules\") pod \"kube-proxy-hvhq8\" (UID: \"6d7954ed-855e-464f-9790-d5133fd9b5a5\") " pod="kube-system/kube-proxy-hvhq8" Apr 13 23:27:38.577441 kubelet[3082]: I0413 23:27:38.574117 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx7p\" (UniqueName: \"kubernetes.io/projected/9ba7f47e-d568-4bd6-bfd8-bb591b51d487-kube-api-access-czx7p\") pod \"tigera-operator-5588576f44-glld7\" (UID: \"9ba7f47e-d568-4bd6-bfd8-bb591b51d487\") " pod="tigera-operator/tigera-operator-5588576f44-glld7" Apr 13 23:27:38.591048 kubelet[3082]: 
I0413 23:27:38.590645 3082 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ba7f47e-d568-4bd6-bfd8-bb591b51d487-var-lib-calico\") pod \"tigera-operator-5588576f44-glld7\" (UID: \"9ba7f47e-d568-4bd6-bfd8-bb591b51d487\") " pod="tigera-operator/tigera-operator-5588576f44-glld7" Apr 13 23:27:38.703944 systemd[1]: Created slice kubepods-besteffort-pod9ba7f47e_d568_4bd6_bfd8_bb591b51d487.slice - libcontainer container kubepods-besteffort-pod9ba7f47e_d568_4bd6_bfd8_bb591b51d487.slice. Apr 13 23:27:38.884450 kubelet[3082]: E0413 23:27:38.882833 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:39.062563 containerd[1467]: time="2026-04-13T23:27:39.047554018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvhq8,Uid:6d7954ed-855e-464f-9790-d5133fd9b5a5,Namespace:kube-system,Attempt:0,}" Apr 13 23:27:39.263219 containerd[1467]: time="2026-04-13T23:27:39.262409897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-glld7,Uid:9ba7f47e-d568-4bd6-bfd8-bb591b51d487,Namespace:tigera-operator,Attempt:0,}" Apr 13 23:27:39.562511 containerd[1467]: time="2026-04-13T23:27:39.559960189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:27:39.571150 containerd[1467]: time="2026-04-13T23:27:39.565372875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:27:39.571150 containerd[1467]: time="2026-04-13T23:27:39.565475052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:27:39.571150 containerd[1467]: time="2026-04-13T23:27:39.565731480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:27:39.688260 containerd[1467]: time="2026-04-13T23:27:39.688129083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 23:27:39.688260 containerd[1467]: time="2026-04-13T23:27:39.688169067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 23:27:39.688260 containerd[1467]: time="2026-04-13T23:27:39.688176964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:27:39.688622 containerd[1467]: time="2026-04-13T23:27:39.688254596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 23:27:39.697160 systemd[1]: Started cri-containerd-d57dd2fe31bb1301ada467c528ee97a76aec9ad4311e6da518a7efa7239f9bb0.scope - libcontainer container d57dd2fe31bb1301ada467c528ee97a76aec9ad4311e6da518a7efa7239f9bb0. Apr 13 23:27:39.790642 systemd[1]: Started cri-containerd-74f05b10925a3c052bff322093f2351f3e4298324c33e1fbd0f36a38befc4218.scope - libcontainer container 74f05b10925a3c052bff322093f2351f3e4298324c33e1fbd0f36a38befc4218. 
Apr 13 23:27:40.088184 containerd[1467]: time="2026-04-13T23:27:40.079375838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvhq8,Uid:6d7954ed-855e-464f-9790-d5133fd9b5a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d57dd2fe31bb1301ada467c528ee97a76aec9ad4311e6da518a7efa7239f9bb0\"" Apr 13 23:27:40.167122 kubelet[3082]: E0413 23:27:40.165527 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:40.225735 containerd[1467]: time="2026-04-13T23:27:40.224375096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-glld7,Uid:9ba7f47e-d568-4bd6-bfd8-bb591b51d487,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"74f05b10925a3c052bff322093f2351f3e4298324c33e1fbd0f36a38befc4218\"" Apr 13 23:27:40.308413 containerd[1467]: time="2026-04-13T23:27:40.307194553Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 23:27:40.524636 containerd[1467]: time="2026-04-13T23:27:40.523362830Z" level=info msg="CreateContainer within sandbox \"d57dd2fe31bb1301ada467c528ee97a76aec9ad4311e6da518a7efa7239f9bb0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 23:27:40.767641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590594702.mount: Deactivated successfully. 
Apr 13 23:27:40.911453 containerd[1467]: time="2026-04-13T23:27:40.904482211Z" level=info msg="CreateContainer within sandbox \"d57dd2fe31bb1301ada467c528ee97a76aec9ad4311e6da518a7efa7239f9bb0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84ca3c97dd60cbc8a779176fcdbb999eb31eb4b2c22a16d5fc02deac53d31eaf\"" Apr 13 23:27:40.962888 containerd[1467]: time="2026-04-13T23:27:40.960885469Z" level=info msg="StartContainer for \"84ca3c97dd60cbc8a779176fcdbb999eb31eb4b2c22a16d5fc02deac53d31eaf\"" Apr 13 23:27:41.076436 systemd[1]: Started cri-containerd-84ca3c97dd60cbc8a779176fcdbb999eb31eb4b2c22a16d5fc02deac53d31eaf.scope - libcontainer container 84ca3c97dd60cbc8a779176fcdbb999eb31eb4b2c22a16d5fc02deac53d31eaf. Apr 13 23:27:41.185082 containerd[1467]: time="2026-04-13T23:27:41.184326903Z" level=info msg="StartContainer for \"84ca3c97dd60cbc8a779176fcdbb999eb31eb4b2c22a16d5fc02deac53d31eaf\" returns successfully" Apr 13 23:27:42.421936 kubelet[3082]: E0413 23:27:42.421616 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:43.863573 kubelet[3082]: E0413 23:27:43.823180 3082 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 13 23:27:44.296102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559220270.mount: Deactivated successfully. 
Apr 13 23:27:46.435306 kubelet[3082]: E0413 23:27:46.434121 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.229s" Apr 13 23:27:52.395757 kubelet[3082]: E0413 23:27:52.392296 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.284s" Apr 13 23:27:53.782518 kubelet[3082]: E0413 23:27:53.778778 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.386s" Apr 13 23:27:53.793961 kubelet[3082]: I0413 23:27:53.792183 3082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hvhq8" podStartSLOduration=16.792055821 podStartE2EDuration="16.792055821s" podCreationTimestamp="2026-04-13 23:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 23:27:43.431058137 +0000 UTC m=+62.208694667" watchObservedRunningTime="2026-04-13 23:27:53.792055821 +0000 UTC m=+72.569692371" Apr 13 23:27:56.293575 kubelet[3082]: E0413 23:27:56.293292 3082 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s" Apr 13 23:28:04.523154 containerd[1467]: time="2026-04-13T23:28:04.521968492Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:28:04.527621 containerd[1467]: time="2026-04-13T23:28:04.526499150Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 23:28:04.640302 containerd[1467]: time="2026-04-13T23:28:04.639437370Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:28:04.710573 containerd[1467]: 
time="2026-04-13T23:28:04.707377743Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:28:04.776118 containerd[1467]: time="2026-04-13T23:28:04.774452505Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 24.467145065s" Apr 13 23:28:04.776118 containerd[1467]: time="2026-04-13T23:28:04.774497131Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 23:28:05.120991 containerd[1467]: time="2026-04-13T23:28:05.118659366Z" level=info msg="CreateContainer within sandbox \"74f05b10925a3c052bff322093f2351f3e4298324c33e1fbd0f36a38befc4218\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 23:28:05.326764 containerd[1467]: time="2026-04-13T23:28:05.325613099Z" level=info msg="CreateContainer within sandbox \"74f05b10925a3c052bff322093f2351f3e4298324c33e1fbd0f36a38befc4218\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb0d4f2bc1e35bee22d9ffecf5292bad3a6a8a3273a06b81eafe62f557ccb1e3\"" Apr 13 23:28:05.390326 containerd[1467]: time="2026-04-13T23:28:05.387968997Z" level=info msg="StartContainer for \"bb0d4f2bc1e35bee22d9ffecf5292bad3a6a8a3273a06b81eafe62f557ccb1e3\"" Apr 13 23:28:06.045693 systemd[1]: Started cri-containerd-bb0d4f2bc1e35bee22d9ffecf5292bad3a6a8a3273a06b81eafe62f557ccb1e3.scope - libcontainer container bb0d4f2bc1e35bee22d9ffecf5292bad3a6a8a3273a06b81eafe62f557ccb1e3. 
Apr 13 23:28:06.518793 containerd[1467]: time="2026-04-13T23:28:06.516772402Z" level=info msg="StartContainer for \"bb0d4f2bc1e35bee22d9ffecf5292bad3a6a8a3273a06b81eafe62f557ccb1e3\" returns successfully" Apr 13 23:28:09.401436 kubelet[3082]: I0413 23:28:09.398978 3082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-glld7" podStartSLOduration=7.8084743880000005 podStartE2EDuration="32.39894274s" podCreationTimestamp="2026-04-13 23:27:37 +0000 UTC" firstStartedPulling="2026-04-13 23:27:40.303682716 +0000 UTC m=+59.081319245" lastFinishedPulling="2026-04-13 23:28:04.894151057 +0000 UTC m=+83.671787597" observedRunningTime="2026-04-13 23:28:09.398306708 +0000 UTC m=+88.175943242" watchObservedRunningTime="2026-04-13 23:28:09.39894274 +0000 UTC m=+88.176579281" Apr 13 23:28:10.697446 sudo[1697]: pam_unix(sudo:session): session closed for user root Apr 13 23:28:10.723361 sshd[1684]: pam_unix(sshd:session): session closed for user core Apr 13 23:28:10.763014 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:47758.service: Deactivated successfully. Apr 13 23:28:10.800198 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 23:28:10.800622 systemd[1]: session-7.scope: Consumed 2min 715ms CPU time, 161.8M memory peak, 0B memory swap peak. Apr 13 23:28:10.879625 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Apr 13 23:28:10.895373 systemd-logind[1455]: Removed session 7.