Apr 16 04:16:20.102586 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026 Apr 16 04:16:20.102618 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 04:16:20.102632 kernel: BIOS-provided physical RAM map: Apr 16 04:16:20.102639 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 16 04:16:20.102646 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 16 04:16:20.102653 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 16 04:16:20.102661 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 16 04:16:20.102669 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 16 04:16:20.102675 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 16 04:16:20.102684 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 16 04:16:20.102691 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 16 04:16:20.102697 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 16 04:16:20.103869 kernel: NX (Execute Disable) protection: active Apr 16 04:16:20.103890 kernel: APIC: Static calls initialized Apr 16 04:16:20.103902 kernel: SMBIOS 2.8 present. 
Apr 16 04:16:20.104617 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 16 04:16:20.104635 kernel: Hypervisor detected: KVM Apr 16 04:16:20.104641 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 16 04:16:20.104646 kernel: kvm-clock: using sched offset of 12060271826 cycles Apr 16 04:16:20.104652 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 16 04:16:20.104658 kernel: tsc: Detected 2793.438 MHz processor Apr 16 04:16:20.104663 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 16 04:16:20.104669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 16 04:16:20.104674 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 16 04:16:20.104690 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 16 04:16:20.104698 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 16 04:16:20.104707 kernel: Using GB pages for direct mapping Apr 16 04:16:20.104716 kernel: ACPI: Early table checksum verification disabled Apr 16 04:16:20.104723 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 16 04:16:20.104730 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104738 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104746 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104753 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 16 04:16:20.104764 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104771 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104778 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104785 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Apr 16 04:16:20.104793 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 16 04:16:20.104801 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 16 04:16:20.104810 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 16 04:16:20.104822 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 16 04:16:20.104833 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 16 04:16:20.104841 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 16 04:16:20.104850 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 16 04:16:20.104859 kernel: No NUMA configuration found Apr 16 04:16:20.104868 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 16 04:16:20.104877 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 16 04:16:20.104889 kernel: Zone ranges: Apr 16 04:16:20.104898 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 16 04:16:20.104907 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 16 04:16:20.104916 kernel: Normal empty Apr 16 04:16:20.104925 kernel: Movable zone start for each node Apr 16 04:16:20.104978 kernel: Early memory node ranges Apr 16 04:16:20.104987 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 16 04:16:20.104996 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 16 04:16:20.105004 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 16 04:16:20.105016 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 16 04:16:20.105025 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 16 04:16:20.105048 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 16 04:16:20.105058 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 16 04:16:20.105066 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 16 04:16:20.105091 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 16 04:16:20.105100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 16 04:16:20.105110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 16 04:16:20.105118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 16 04:16:20.105130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 16 04:16:20.105139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 16 04:16:20.105148 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 16 04:16:20.105157 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 16 04:16:20.105166 kernel: TSC deadline timer available Apr 16 04:16:20.105175 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 16 04:16:20.105183 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 16 04:16:20.105192 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 16 04:16:20.105200 kernel: kvm-guest: setup PV sched yield Apr 16 04:16:20.105898 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 16 04:16:20.105971 kernel: Booting paravirtualized kernel on KVM Apr 16 04:16:20.105977 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 16 04:16:20.105983 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 16 04:16:20.105988 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 16 04:16:20.105996 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 16 04:16:20.106006 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 16 04:16:20.106014 kernel: kvm-guest: PV spinlocks enabled Apr 16 04:16:20.106023 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 16 04:16:20.106037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 04:16:20.106048 kernel: random: crng init done Apr 16 04:16:20.106058 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 16 04:16:20.106067 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 16 04:16:20.106474 kernel: Fallback order for Node 0: 0 Apr 16 04:16:20.106483 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 16 04:16:20.106491 kernel: Policy zone: DMA32 Apr 16 04:16:20.106499 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 16 04:16:20.106508 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved) Apr 16 04:16:20.107036 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 16 04:16:20.107045 kernel: ftrace: allocating 37996 entries in 149 pages Apr 16 04:16:20.107052 kernel: ftrace: allocated 149 pages with 4 groups Apr 16 04:16:20.107060 kernel: Dynamic Preempt: voluntary Apr 16 04:16:20.107067 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 16 04:16:20.107094 kernel: rcu: RCU event tracing is enabled. Apr 16 04:16:20.107102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 16 04:16:20.107110 kernel: Trampoline variant of Tasks RCU enabled. Apr 16 04:16:20.107118 kernel: Rude variant of Tasks RCU enabled. Apr 16 04:16:20.107130 kernel: Tracing variant of Tasks RCU enabled. Apr 16 04:16:20.107138 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 16 04:16:20.107146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 16 04:16:20.107154 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 16 04:16:20.107173 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 16 04:16:20.107181 kernel: Console: colour VGA+ 80x25 Apr 16 04:16:20.107189 kernel: printk: console [ttyS0] enabled Apr 16 04:16:20.107197 kernel: ACPI: Core revision 20230628 Apr 16 04:16:20.107205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 16 04:16:20.107215 kernel: APIC: Switch to symmetric I/O mode setup Apr 16 04:16:20.107223 kernel: x2apic enabled Apr 16 04:16:20.107230 kernel: APIC: Switched APIC routing to: physical x2apic Apr 16 04:16:20.107238 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 16 04:16:20.107246 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 16 04:16:20.107254 kernel: kvm-guest: setup PV IPIs Apr 16 04:16:20.107261 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 16 04:16:20.107269 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 04:16:20.107287 kernel: Calibrating delay loop (skipped) preset value.. 
5586.87 BogoMIPS (lpj=2793438) Apr 16 04:16:20.107295 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 16 04:16:20.107303 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 16 04:16:20.107313 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 16 04:16:20.107321 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 16 04:16:20.107329 kernel: Spectre V2 : Mitigation: Retpolines Apr 16 04:16:20.107338 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 16 04:16:20.107347 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 16 04:16:20.107359 kernel: RETBleed: Vulnerable Apr 16 04:16:20.107368 kernel: Speculative Store Bypass: Vulnerable Apr 16 04:16:20.107373 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 16 04:16:20.107697 kernel: GDS: Unknown: Dependent on hypervisor status Apr 16 04:16:20.107721 kernel: active return thunk: its_return_thunk Apr 16 04:16:20.107727 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 16 04:16:20.107733 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 16 04:16:20.107738 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 16 04:16:20.107744 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 16 04:16:20.107762 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 16 04:16:20.107768 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 16 04:16:20.107774 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 16 04:16:20.107780 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 16 04:16:20.107785 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 16 04:16:20.107791 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 16 04:16:20.107797 kernel: x86/fpu: 
xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 16 04:16:20.107803 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 16 04:16:20.107808 kernel: Freeing SMP alternatives memory: 32K Apr 16 04:16:20.107816 kernel: pid_max: default: 32768 minimum: 301 Apr 16 04:16:20.107822 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 16 04:16:20.107828 kernel: landlock: Up and running. Apr 16 04:16:20.107833 kernel: SELinux: Initializing. Apr 16 04:16:20.107839 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 04:16:20.107845 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 04:16:20.107851 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 16 04:16:20.107872 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 04:16:20.107878 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 04:16:20.107886 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 04:16:20.107892 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 16 04:16:20.107898 kernel: signal: max sigframe size: 3632 Apr 16 04:16:20.107903 kernel: rcu: Hierarchical SRCU implementation. Apr 16 04:16:20.107909 kernel: rcu: Max phase no-delay instances is 400. Apr 16 04:16:20.107915 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 16 04:16:20.107921 kernel: smp: Bringing up secondary CPUs ... Apr 16 04:16:20.107926 kernel: smpboot: x86: Booting SMP configuration: Apr 16 04:16:20.107969 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 16 04:16:20.107978 kernel: smp: Brought up 1 node, 4 CPUs Apr 16 04:16:20.107984 kernel: smpboot: Max logical packages: 1 Apr 16 04:16:20.107989 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 16 04:16:20.107995 kernel: devtmpfs: initialized Apr 16 04:16:20.108001 kernel: x86/mm: Memory block size: 128MB Apr 16 04:16:20.108007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 16 04:16:20.108012 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 16 04:16:20.108018 kernel: pinctrl core: initialized pinctrl subsystem Apr 16 04:16:20.108024 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 16 04:16:20.108032 kernel: audit: initializing netlink subsys (disabled) Apr 16 04:16:20.108038 kernel: audit: type=2000 audit(1776312972.404:1): state=initialized audit_enabled=0 res=1 Apr 16 04:16:20.108044 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 16 04:16:20.108050 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 16 04:16:20.108055 kernel: cpuidle: using governor menu Apr 16 04:16:20.108061 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 16 04:16:20.108067 kernel: dca service started, version 1.12.1 Apr 16 04:16:20.108086 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 16 04:16:20.108092 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 16 04:16:20.108100 kernel: PCI: Using configuration type 1 for base access Apr 16 04:16:20.108106 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 16 04:16:20.108111 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 16 04:16:20.108117 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 16 04:16:20.108123 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 16 04:16:20.108128 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 16 04:16:20.108134 kernel: ACPI: Added _OSI(Module Device) Apr 16 04:16:20.108140 kernel: ACPI: Added _OSI(Processor Device) Apr 16 04:16:20.108146 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 16 04:16:20.108153 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 16 04:16:20.108159 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 16 04:16:20.108165 kernel: ACPI: Interpreter enabled Apr 16 04:16:20.108170 kernel: ACPI: PM: (supports S0 S3 S5) Apr 16 04:16:20.108176 kernel: ACPI: Using IOAPIC for interrupt routing Apr 16 04:16:20.108182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 16 04:16:20.108188 kernel: PCI: Using E820 reservations for host bridge windows Apr 16 04:16:20.108194 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 16 04:16:20.108200 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 16 04:16:20.111853 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 16 04:16:20.112046 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 16 04:16:20.115549 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 16 04:16:20.115580 kernel: PCI host bridge to bus 0000:00 Apr 16 04:16:20.115771 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 16 04:16:20.115854 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 16 04:16:20.116928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 16 04:16:20.117161 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Apr 16 04:16:20.117253 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 16 04:16:20.117335 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 16 04:16:20.117416 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 16 04:16:20.120889 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 16 04:16:20.122416 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 16 04:16:20.122623 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 16 04:16:20.122715 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 16 04:16:20.122808 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 16 04:16:20.122898 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 16 04:16:20.127982 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 16 04:16:20.128150 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 16 04:16:20.128264 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 16 04:16:20.128362 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 16 04:16:20.128518 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 16 04:16:20.128621 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 16 04:16:20.128722 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 16 04:16:20.128816 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 16 04:16:20.129651 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 16 04:16:20.129795 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 16 04:16:20.129883 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 16 04:16:20.130417 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 16 04:16:20.130490 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Apr 16 04:16:20.130600 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 16 04:16:20.130664 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 16 04:16:20.132679 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 16 04:16:20.132769 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 16 04:16:20.132830 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 16 04:16:20.132927 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 16 04:16:20.133025 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 16 04:16:20.133033 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 16 04:16:20.133039 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 16 04:16:20.133046 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 16 04:16:20.133055 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 16 04:16:20.133061 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 16 04:16:20.133066 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 16 04:16:20.133842 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 16 04:16:20.133850 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 16 04:16:20.133855 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 16 04:16:20.133861 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 16 04:16:20.133867 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 16 04:16:20.133873 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 16 04:16:20.133913 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 16 04:16:20.133919 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 16 04:16:20.133925 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 16 04:16:20.133969 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 16 04:16:20.133975 
kernel: iommu: Default domain type: Translated Apr 16 04:16:20.133981 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 16 04:16:20.133987 kernel: PCI: Using ACPI for IRQ routing Apr 16 04:16:20.133993 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 16 04:16:20.133999 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 16 04:16:20.134008 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 16 04:16:20.136226 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 16 04:16:20.136659 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 16 04:16:20.136740 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 16 04:16:20.136748 kernel: vgaarb: loaded Apr 16 04:16:20.136754 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 16 04:16:20.136760 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 16 04:16:20.136766 kernel: clocksource: Switched to clocksource kvm-clock Apr 16 04:16:20.136772 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 04:16:20.136784 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 04:16:20.136790 kernel: pnp: PnP ACPI init Apr 16 04:16:20.137791 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 16 04:16:20.137807 kernel: pnp: PnP ACPI: found 6 devices Apr 16 04:16:20.137814 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 16 04:16:20.137821 kernel: NET: Registered PF_INET protocol family Apr 16 04:16:20.137828 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 16 04:16:20.137835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 16 04:16:20.137847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 04:16:20.137854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 16 04:16:20.137862 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 04:16:20.137869 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 04:16:20.137875 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 04:16:20.137882 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 04:16:20.137889 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 04:16:20.137896 kernel: NET: Registered PF_XDP protocol family Apr 16 04:16:20.137999 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 16 04:16:20.138061 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 16 04:16:20.138143 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 16 04:16:20.138200 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 16 04:16:20.138255 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 16 04:16:20.138311 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 16 04:16:20.138318 kernel: PCI: CLS 0 bytes, default 64 Apr 16 04:16:20.138324 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 16 04:16:20.138330 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 04:16:20.138339 kernel: Initialise system trusted keyrings Apr 16 04:16:20.138345 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 04:16:20.138351 kernel: Key type asymmetric registered Apr 16 04:16:20.138358 kernel: Asymmetric key parser 'x509' registered Apr 16 04:16:20.138596 kernel: hrtimer: interrupt took 13019982 ns Apr 16 04:16:20.138607 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 16 04:16:20.138615 kernel: io scheduler mq-deadline registered Apr 16 04:16:20.138624 kernel: io scheduler kyber registered Apr 16 04:16:20.138634 kernel: io scheduler bfq registered Apr 16 
04:16:20.138646 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 16 04:16:20.138656 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 16 04:16:20.138666 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 16 04:16:20.138676 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 16 04:16:20.138687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 04:16:20.138693 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 16 04:16:20.138699 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 16 04:16:20.138705 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 16 04:16:20.138710 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 16 04:16:20.139516 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 16 04:16:20.139531 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 16 04:16:20.139537 kernel: hpet: Lost 1 RTC interrupts Apr 16 04:16:20.139598 kernel: rtc_cmos 00:04: registered as rtc0 Apr 16 04:16:20.139655 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T04:16:17 UTC (1776312977) Apr 16 04:16:20.139715 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 16 04:16:20.139726 kernel: intel_pstate: CPU model not supported Apr 16 04:16:20.139736 kernel: NET: Registered PF_INET6 protocol family Apr 16 04:16:20.139751 kernel: Segment Routing with IPv6 Apr 16 04:16:20.139759 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 04:16:20.139768 kernel: NET: Registered PF_PACKET protocol family Apr 16 04:16:20.139777 kernel: Key type dns_resolver registered Apr 16 04:16:20.139786 kernel: IPI shorthand broadcast: enabled Apr 16 04:16:20.139795 kernel: sched_clock: Marking stable (5791021298, 549547499)->(7074749022, -734180225) Apr 16 04:16:20.139804 kernel: registered taskstats version 1 Apr 16 04:16:20.139813 kernel: Loading compiled-in X.509 certificates Apr 16 04:16:20.139823 kernel: Loaded X.509 cert 'Kinvolk GmbH: 
Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090' Apr 16 04:16:20.139835 kernel: Key type .fscrypt registered Apr 16 04:16:20.139843 kernel: Key type fscrypt-provisioning registered Apr 16 04:16:20.139852 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 04:16:20.139861 kernel: ima: Allocated hash algorithm: sha1 Apr 16 04:16:20.139869 kernel: ima: No architecture policies found Apr 16 04:16:20.139878 kernel: clk: Disabling unused clocks Apr 16 04:16:20.139887 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 16 04:16:20.139896 kernel: Write protecting the kernel read-only data: 36864k Apr 16 04:16:20.139905 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 16 04:16:20.139916 kernel: Run /init as init process Apr 16 04:16:20.139926 kernel: with arguments: Apr 16 04:16:20.142568 kernel: /init Apr 16 04:16:20.142725 kernel: with environment: Apr 16 04:16:20.142737 kernel: HOME=/ Apr 16 04:16:20.142746 kernel: TERM=linux Apr 16 04:16:20.142760 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 04:16:20.142772 systemd[1]: Detected virtualization kvm. Apr 16 04:16:20.144201 systemd[1]: Detected architecture x86-64. Apr 16 04:16:20.144221 systemd[1]: Running in initrd. Apr 16 04:16:20.144231 systemd[1]: No hostname configured, using default hostname. Apr 16 04:16:20.144244 systemd[1]: Hostname set to . Apr 16 04:16:20.144255 systemd[1]: Initializing machine ID from VM UUID. Apr 16 04:16:20.144265 systemd[1]: Queued start job for default target initrd.target. Apr 16 04:16:20.144275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 16 04:16:20.144285 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 04:16:20.144377 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 04:16:20.144388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 04:16:20.144409 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 04:16:20.144422 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 04:16:20.144435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 04:16:20.144448 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 04:16:20.144457 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 04:16:20.144467 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 04:16:20.144477 systemd[1]: Reached target paths.target - Path Units. Apr 16 04:16:20.144487 systemd[1]: Reached target slices.target - Slice Units. Apr 16 04:16:20.144497 systemd[1]: Reached target swap.target - Swaps. Apr 16 04:16:20.144507 systemd[1]: Reached target timers.target - Timer Units. Apr 16 04:16:20.144517 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 04:16:20.144529 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 04:16:20.144540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 04:16:20.144552 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 16 04:16:20.144562 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 16 04:16:20.144572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:16:20.144583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:16:20.144593 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 04:16:20.144603 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 04:16:20.144614 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:16:20.144627 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 04:16:20.144637 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 04:16:20.144647 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:16:20.144657 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:16:20.144667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:20.144677 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 04:16:20.144686 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:16:20.144696 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 04:16:20.146759 systemd-journald[195]: Collecting audit messages is disabled.
Apr 16 04:16:20.147883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:16:20.148065 systemd-journald[195]: Journal started
Apr 16 04:16:20.148191 systemd-journald[195]: Runtime Journal (/run/log/journal/33a8e9e357134390adcb0cc12bea46fd) is 6.0M, max 48.4M, 42.3M free.
Apr 16 04:16:20.100887 systemd-modules-load[196]: Inserted module 'overlay'
Apr 16 04:16:20.840207 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:16:20.840244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 04:16:20.840259 kernel: Bridge firewalling registered
Apr 16 04:16:20.283435 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 16 04:16:20.829853 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:16:20.850701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:20.930283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:20.962315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:16:20.993810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:16:21.007041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:16:21.036875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:16:21.098554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:16:21.132284 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:21.133042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:16:21.214996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:16:21.287521 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 04:16:21.372894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 04:16:21.481581 dracut-cmdline[230]: dracut-dracut-053
Apr 16 04:16:21.507387 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 04:16:21.907687 systemd-resolved[231]: Positive Trust Anchors:
Apr 16 04:16:21.907726 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 04:16:21.907762 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 04:16:21.958994 systemd-resolved[231]: Defaulting to hostname 'linux'.
Apr 16 04:16:21.971312 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 04:16:21.979741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:16:23.405661 kernel: SCSI subsystem initialized
Apr 16 04:16:23.468919 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 04:16:23.636255 kernel: iscsi: registered transport (tcp)
Apr 16 04:16:23.832464 kernel: iscsi: registered transport (qla4xxx)
Apr 16 04:16:23.832886 kernel: QLogic iSCSI HBA Driver
Apr 16 04:16:24.706642 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:16:24.761684 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 04:16:24.943979 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 04:16:24.945519 kernel: device-mapper: uevent: version 1.0.3
Apr 16 04:16:24.986928 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 04:16:25.340259 kernel: raid6: avx512x4 gen() 13500 MB/s
Apr 16 04:16:25.361483 kernel: raid6: avx512x2 gen() 16716 MB/s
Apr 16 04:16:25.385683 kernel: raid6: avx512x1 gen() 5523 MB/s
Apr 16 04:16:25.407565 kernel: raid6: avx2x4 gen() 7453 MB/s
Apr 16 04:16:25.427686 kernel: raid6: avx2x2 gen() 10034 MB/s
Apr 16 04:16:25.451117 kernel: raid6: avx2x1 gen() 8216 MB/s
Apr 16 04:16:25.451561 kernel: raid6: using algorithm avx512x2 gen() 16716 MB/s
Apr 16 04:16:25.512266 kernel: raid6: .... xor() 4361 MB/s, rmw enabled
Apr 16 04:16:25.512542 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 04:16:25.750854 kernel: xor: automatically using best checksumming function avx
Apr 16 04:16:28.363040 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 04:16:29.097721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:16:29.273459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:16:30.340503 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 16 04:16:30.740855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:16:30.821108 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 04:16:31.217415 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Apr 16 04:16:31.409195 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:16:31.465076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:16:31.741863 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:16:31.845321 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 04:16:31.891575 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:16:31.902003 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:16:31.910583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:16:31.922874 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:16:31.946721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 04:16:31.984272 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:16:32.070043 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 04:16:32.077258 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 04:16:32.086245 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 04:16:32.103344 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 04:16:32.126182 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 04:16:32.126215 kernel: GPT:9289727 != 19775487
Apr 16 04:16:32.126225 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 04:16:32.126236 kernel: GPT:9289727 != 19775487
Apr 16 04:16:32.126246 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 04:16:32.126255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:32.103606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:32.121775 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:32.155277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:16:32.155833 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:32.163561 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:32.225497 kernel: libata version 3.00 loaded.
Apr 16 04:16:32.224806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:16:32.709069 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Apr 16 04:16:33.021303 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 04:16:33.560333 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 04:16:33.561181 kernel: AES CTR mode by8 optimization enabled
Apr 16 04:16:33.561225 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 04:16:33.561905 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 04:16:33.561926 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 04:16:33.562212 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 04:16:33.562377 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (466)
Apr 16 04:16:33.562393 kernel: scsi host0: ahci
Apr 16 04:16:33.562681 kernel: scsi host1: ahci
Apr 16 04:16:33.562878 kernel: scsi host2: ahci
Apr 16 04:16:33.574427 kernel: scsi host3: ahci
Apr 16 04:16:33.575853 kernel: scsi host4: ahci
Apr 16 04:16:33.576107 kernel: scsi host5: ahci
Apr 16 04:16:33.577404 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 16 04:16:33.577423 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 16 04:16:33.577469 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 16 04:16:33.577484 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 16 04:16:33.577639 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 16 04:16:33.577668 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 16 04:16:33.577786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:33.676235 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 04:16:33.778782 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.802474 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 04:16:33.802595 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.804668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 04:16:33.837874 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.837918 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.837969 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 04:16:33.837981 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 04:16:33.837992 kernel: ata3.00: applying bridge limits
Apr 16 04:16:33.838003 kernel: ata3.00: configured for UDMA/100
Apr 16 04:16:33.847191 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 04:16:33.870735 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 04:16:33.871106 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 04:16:33.941279 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 04:16:33.962774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:16:34.043320 disk-uuid[557]: Primary Header is updated.
Apr 16 04:16:34.043320 disk-uuid[557]: Secondary Entries is updated.
Apr 16 04:16:34.043320 disk-uuid[557]: Secondary Header is updated.
Apr 16 04:16:34.070267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:34.110921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:34.125287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:34.140558 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:34.171340 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 04:16:34.171969 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 04:16:34.199276 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 04:16:35.172370 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:16:35.177468 disk-uuid[559]: The operation has completed successfully.
Apr 16 04:16:35.306266 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 04:16:35.310503 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 04:16:35.384300 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 04:16:35.401642 sh[596]: Success
Apr 16 04:16:35.547454 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 04:16:36.102613 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 04:16:36.146628 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 04:16:36.149080 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 04:16:36.291565 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 04:16:36.292102 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:36.292126 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 04:16:36.297687 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 04:16:36.305262 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 04:16:36.390716 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 04:16:36.397288 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 04:16:36.438780 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 04:16:36.469582 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 04:16:36.566401 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:36.566652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:36.566664 kernel: BTRFS info (device vda6): using free space tree
Apr 16 04:16:36.594177 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 04:16:36.629526 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 04:16:36.637476 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:36.674614 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 04:16:36.716658 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 04:16:37.936767 ignition[702]: Ignition 2.19.0
Apr 16 04:16:37.936859 ignition[702]: Stage: fetch-offline
Apr 16 04:16:37.962919 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:16:37.937202 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:37.937236 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:37.937671 ignition[702]: parsed url from cmdline: ""
Apr 16 04:16:37.937676 ignition[702]: no config URL provided
Apr 16 04:16:37.937684 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 04:16:37.937696 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Apr 16 04:16:37.937825 ignition[702]: op(1): [started] loading QEMU firmware config module
Apr 16 04:16:37.937832 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 04:16:38.095504 ignition[702]: op(1): [finished] loading QEMU firmware config module
Apr 16 04:16:38.126837 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 04:16:38.370113 systemd-networkd[785]: lo: Link UP
Apr 16 04:16:38.371256 systemd-networkd[785]: lo: Gained carrier
Apr 16 04:16:38.539178 systemd-networkd[785]: Enumeration completed
Apr 16 04:16:38.584801 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 04:16:38.622876 ignition[702]: parsing config with SHA512: 41e4f91e8faa69e3be944658cc4950bd29e1251422d4952bbdc18a309138ae8924da57e14edf978e86b54477c52f154ba765d7f06e748bf26a05261e0d56aa7f
Apr 16 04:16:38.625024 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:16:38.625028 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 04:16:38.626269 systemd[1]: Reached target network.target - Network.
Apr 16 04:16:38.635405 systemd-networkd[785]: eth0: Link UP
Apr 16 04:16:38.635411 systemd-networkd[785]: eth0: Gained carrier
Apr 16 04:16:38.635424 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:16:38.700433 unknown[702]: fetched base config from "system"
Apr 16 04:16:38.700449 unknown[702]: fetched user config from "qemu"
Apr 16 04:16:38.701074 ignition[702]: fetch-offline: fetch-offline passed
Apr 16 04:16:38.705722 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:16:38.701216 ignition[702]: Ignition finished successfully
Apr 16 04:16:38.712781 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 04:16:38.717645 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 04:16:38.763441 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 04:16:39.318099 ignition[789]: Ignition 2.19.0
Apr 16 04:16:39.337403 ignition[789]: Stage: kargs
Apr 16 04:16:39.415122 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:39.419992 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:39.436462 ignition[789]: kargs: kargs passed
Apr 16 04:16:39.446208 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.7
Apr 16 04:16:39.436832 ignition[789]: Ignition finished successfully
Apr 16 04:16:39.446240 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Apr 16 04:16:39.520779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 04:16:39.586480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 04:16:40.205855 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 16 04:16:40.249137 ignition[797]: Ignition 2.19.0
Apr 16 04:16:40.249223 ignition[797]: Stage: disks
Apr 16 04:16:40.249838 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:40.249851 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:40.287444 ignition[797]: disks: disks passed
Apr 16 04:16:40.287728 ignition[797]: Ignition finished successfully
Apr 16 04:16:40.292823 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 04:16:40.308920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 04:16:40.327769 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 04:16:40.328584 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:16:40.347688 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 04:16:40.357107 systemd[1]: Reached target basic.target - Basic System.
Apr 16 04:16:40.388027 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 04:16:40.722253 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 04:16:40.741440 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 04:16:40.792676 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 04:16:41.530706 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 04:16:41.553582 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 04:16:41.576831 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 04:16:41.640450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 04:16:41.655762 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 04:16:41.675532 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 04:16:41.714523 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Apr 16 04:16:41.714566 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:41.714579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:41.714590 kernel: BTRFS info (device vda6): using free space tree
Apr 16 04:16:41.689196 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 04:16:41.689370 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:16:41.754113 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 04:16:41.756288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 04:16:41.797054 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 04:16:41.848494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 04:16:42.543833 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 04:16:42.600176 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 16 04:16:42.650592 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 04:16:42.710744 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 04:16:44.907323 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 04:16:44.980543 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 04:16:44.999592 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 04:16:45.109795 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:45.110927 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 04:16:45.265798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 04:16:45.288412 ignition[931]: INFO : Ignition 2.19.0
Apr 16 04:16:45.288412 ignition[931]: INFO : Stage: mount
Apr 16 04:16:45.288412 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:45.288412 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:45.305014 ignition[931]: INFO : mount: mount passed
Apr 16 04:16:45.305014 ignition[931]: INFO : Ignition finished successfully
Apr 16 04:16:45.304474 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 04:16:45.450251 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 04:16:45.577360 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 04:16:45.748293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Apr 16 04:16:45.765045 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 04:16:45.765333 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:16:45.767922 kernel: BTRFS info (device vda6): using free space tree
Apr 16 04:16:45.821752 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 04:16:45.889285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 04:16:46.355422 ignition[961]: INFO : Ignition 2.19.0
Apr 16 04:16:46.366080 ignition[961]: INFO : Stage: files
Apr 16 04:16:46.366080 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:46.366080 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:46.366080 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 04:16:46.399969 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 04:16:46.399969 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 04:16:46.432818 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 04:16:46.457855 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 04:16:46.467860 unknown[961]: wrote ssh authorized keys file for user: core
Apr 16 04:16:46.475242 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 04:16:46.508596 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 04:16:46.522847 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 04:16:46.758139 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 04:16:47.223787 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 04:16:47.286470 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 04:16:47.538824 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 04:16:47.548812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 04:16:47.561563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 04:16:47.561563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 04:16:47.561563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 04:16:47.561563 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:16:47.614007 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 16 04:16:48.465265 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 04:16:54.089760 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:16:54.089760 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 04:16:54.123525 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 16 04:16:54.142733 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:16:55.022913 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:16:55.155730 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:16:55.171551 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:16:55.171551 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 04:16:55.171551 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 04:16:55.224724 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:16:55.224724 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:16:55.224724 ignition[961]: INFO : files: files passed
Apr 16 04:16:55.224724 ignition[961]: INFO : Ignition finished successfully
Apr 16 04:16:55.186482 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 04:16:55.292517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 04:16:55.334780 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 04:16:55.346485 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 04:16:55.351698 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 04:16:55.564903 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 16 04:16:55.637716 initrd-setup-root-after-ignition[992]: grep:
Apr 16 04:16:55.648796 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf
Apr 16 04:16:55.715075 initrd-setup-root-after-ignition[992]: /sysroot/etc/flatcar/enabled-sysext.conf
Apr 16 04:16:55.715075 initrd-setup-root-after-ignition[996]: : No such file or directory
Apr 16 04:16:55.738274 initrd-setup-root-after-ignition[992]: : No such file or directory
Apr 16 04:16:55.738274 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 04:16:55.794513 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:16:55.821857 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 04:16:55.890795 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 04:16:56.518329 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 04:16:56.532336 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 04:16:56.564552 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 04:16:56.608239 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 04:16:56.625374 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 04:16:56.661741 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 04:16:56.799212 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:16:56.835167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 04:16:57.101109 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:16:57.150566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:16:57.239048 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 04:16:57.264274 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 04:16:57.278827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:16:57.301621 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 04:16:57.313180 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 04:16:57.334780 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 04:16:57.379777 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:16:57.404862 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 04:16:57.425144 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 04:16:57.443896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:16:57.444663 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 04:16:57.513915 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 04:16:57.536146 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 04:16:57.581055 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 04:16:57.582624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:16:57.589887 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:16:57.590405 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:16:57.590668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 04:16:57.596892 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:16:57.616330 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 04:16:57.616714 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:16:57.639648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 04:16:57.651893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:16:57.672059 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 04:16:57.682001 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 04:16:57.692555 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:16:57.726897 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 04:16:57.767873 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 04:16:57.790888 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 04:16:57.791126 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:16:57.813537 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 04:16:57.814015 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:16:57.846299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 04:16:57.849440 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:16:57.906063 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 04:16:57.919865 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 04:16:57.967974 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 04:16:58.012473 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 04:16:58.030708 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 04:16:58.042096 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:16:58.056325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 04:16:58.056584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:16:58.113480 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 04:16:58.113606 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 04:16:58.137651 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 04:16:58.205108 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 04:16:58.206536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 04:16:58.234864 ignition[1016]: INFO : Ignition 2.19.0
Apr 16 04:16:58.245595 ignition[1016]: INFO : Stage: umount
Apr 16 04:16:58.245595 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:16:58.245595 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:16:58.278346 ignition[1016]: INFO : umount: umount passed
Apr 16 04:16:58.278346 ignition[1016]: INFO : Ignition finished successfully
Apr 16 04:16:58.258409 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 04:16:58.258655 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 04:16:58.283887 systemd[1]: Stopped target network.target - Network.
Apr 16 04:16:58.304622 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 04:16:58.305241 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 04:16:58.313145 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 04:16:58.317820 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 04:16:58.335061 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 04:16:58.335590 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 04:16:58.348670 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 04:16:58.350544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 04:16:58.434880 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 04:16:58.436328 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 04:16:58.472543 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 04:16:58.474134 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 04:16:58.516206 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 04:16:58.516517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 04:16:58.540779 systemd-networkd[785]: eth0: DHCPv6 lease lost
Apr 16 04:16:58.578331 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 04:16:58.584427 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 04:16:58.605478 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 04:16:58.605653 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:16:58.637723 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 04:16:58.655068 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 04:16:58.660155 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:16:58.678432 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 04:16:58.678658 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:16:58.695337 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 04:16:58.695753 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:16:58.706819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 04:16:58.710675 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:16:58.714101 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:16:58.750915 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 04:16:58.756442 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:16:58.790512 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 04:16:58.794518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:16:58.808833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 04:16:58.809152 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:16:58.821686 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 04:16:58.822049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:16:58.845789 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 04:16:58.846135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:16:58.910775 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 04:16:58.912336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:16:58.973114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 04:16:58.986154 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 04:16:58.987263 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:16:58.999095 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 04:16:58.999282 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:16:59.029305 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 04:16:59.029648 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:16:59.056342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:16:59.057141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:16:59.077609 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 04:16:59.092343 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 04:16:59.148172 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 04:16:59.149246 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 04:16:59.261594 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 04:16:59.403180 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 04:16:59.589306 systemd[1]: Switching root.
Apr 16 04:16:59.826183 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 16 04:16:59.826527 systemd-journald[195]: Journal stopped
Apr 16 04:17:13.726044 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 04:17:13.726184 kernel: SELinux: policy capability open_perms=1
Apr 16 04:17:13.726199 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 04:17:13.726215 kernel: SELinux: policy capability always_check_network=0
Apr 16 04:17:13.726229 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 04:17:13.726241 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 04:17:13.726257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 04:17:13.726268 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 04:17:13.731522 kernel: audit: type=1403 audit(1776313020.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 04:17:13.732028 systemd[1]: Successfully loaded SELinux policy in 342.176ms.
Apr 16 04:17:13.732105 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 83.618ms.
Apr 16 04:17:13.732120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 04:17:13.732133 systemd[1]: Detected virtualization kvm.
Apr 16 04:17:13.732144 systemd[1]: Detected architecture x86-64.
Apr 16 04:17:13.732157 systemd[1]: Detected first boot.
Apr 16 04:17:13.732174 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:17:13.732186 zram_generator::config[1061]: No configuration found.
Apr 16 04:17:13.732199 systemd[1]: Populated /etc with preset unit settings.
Apr 16 04:17:13.732211 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 04:17:13.732223 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 04:17:13.732234 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 04:17:13.732268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 04:17:13.732300 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 04:17:13.732337 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 04:17:13.732349 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 04:17:13.732365 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 04:17:13.732378 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 04:17:13.732390 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 04:17:13.732401 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 04:17:13.732412 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:17:13.732424 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:17:13.732435 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 04:17:13.732449 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 04:17:13.732461 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 04:17:13.732473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:17:13.739639 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 04:17:13.740113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:17:13.740132 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 04:17:13.740147 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 04:17:13.740159 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 04:17:13.740172 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 04:17:13.740191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:17:13.740203 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:17:13.740216 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:17:13.740227 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:17:13.740240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 04:17:13.740257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 04:17:13.742385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:17:13.742411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:17:13.742434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:17:13.742446 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 04:17:13.742458 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 04:17:13.742470 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 04:17:13.742481 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 04:17:13.742494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:13.742505 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 04:17:13.742517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 04:17:13.742530 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 04:17:13.742543 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 04:17:13.742555 systemd[1]: Reached target machines.target - Containers.
Apr 16 04:17:13.742566 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 04:17:13.742578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:17:13.742589 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:17:13.742601 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 04:17:13.742613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:17:13.742625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 04:17:13.742638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:17:13.742655 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 04:17:13.742667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:17:13.742679 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 04:17:13.742691 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 04:17:13.742703 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 04:17:13.742714 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 04:17:13.742726 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 04:17:13.742740 kernel: fuse: init (API version 7.39)
Apr 16 04:17:13.742753 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:17:13.742764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:17:13.742775 kernel: ACPI: bus type drm_connector registered
Apr 16 04:17:13.742786 kernel: loop: module loaded
Apr 16 04:17:13.742797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 04:17:13.742808 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 04:17:13.742820 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:17:13.742832 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 04:17:13.742845 systemd[1]: Stopped verity-setup.service.
Apr 16 04:17:13.742857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:13.743012 systemd-journald[1138]: Collecting audit messages is disabled.
Apr 16 04:17:13.743043 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 04:17:13.743056 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 04:17:13.743071 systemd-journald[1138]: Journal started
Apr 16 04:17:13.743099 systemd-journald[1138]: Runtime Journal (/run/log/journal/33a8e9e357134390adcb0cc12bea46fd) is 6.0M, max 48.4M, 42.3M free.
Apr 16 04:17:10.197206 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 04:17:10.682325 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 04:17:10.683685 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 04:17:10.684214 systemd[1]: systemd-journald.service: Consumed 1.709s CPU time.
Apr 16 04:17:13.777400 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:17:13.819035 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 04:17:13.830911 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 04:17:13.847034 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 04:17:13.853538 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 04:17:13.865577 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 04:17:13.874435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:17:13.883151 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 04:17:13.883401 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 04:17:13.896921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:17:13.900633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:17:13.913259 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 04:17:13.913549 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 04:17:13.921529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:17:13.921748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:17:13.928347 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 04:17:13.935910 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 04:17:13.942553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:17:13.942789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:17:13.948378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:17:13.951432 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 04:17:13.958124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 04:17:14.007414 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 04:17:14.025665 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 04:17:14.051635 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 04:17:14.057561 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 04:17:14.058084 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:17:14.065650 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 04:17:14.101926 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 04:17:14.108436 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 04:17:14.112796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:17:14.130150 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 04:17:14.137058 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 04:17:14.139915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 04:17:14.142398 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 04:17:14.145664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 04:17:14.167133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:17:14.193354 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 04:17:14.199524 systemd-journald[1138]: Time spent on flushing to /var/log/journal/33a8e9e357134390adcb0cc12bea46fd is 59.074ms for 960 entries.
Apr 16 04:17:14.199524 systemd-journald[1138]: System Journal (/var/log/journal/33a8e9e357134390adcb0cc12bea46fd) is 8.0M, max 195.6M, 187.6M free.
Apr 16 04:17:14.417012 systemd-journald[1138]: Received client request to flush runtime journal.
Apr 16 04:17:14.213008 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:17:14.410496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:17:14.429328 kernel: loop0: detected capacity change from 0 to 140768
Apr 16 04:17:14.429879 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 04:17:14.529731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 04:17:14.544850 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 04:17:14.562167 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 04:17:14.565988 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 04:17:14.595590 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 04:17:14.630618 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 04:17:14.656907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 04:17:14.665666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 04:17:14.677214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:17:14.698368 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 16 04:17:14.745262 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Apr 16 04:17:14.761561 kernel: loop1: detected capacity change from 0 to 142488
Apr 16 04:17:14.745307 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Apr 16 04:17:14.823602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:17:14.909704 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 04:17:14.929504 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 04:17:14.959682 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 04:17:15.034219 kernel: loop2: detected capacity change from 0 to 219192
Apr 16 04:17:15.212212 kernel: loop3: detected capacity change from 0 to 140768
Apr 16 04:17:15.332235 kernel: loop4: detected capacity change from 0 to 142488
Apr 16 04:17:15.467663 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 04:17:15.503345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:17:15.518853 kernel: loop5: detected capacity change from 0 to 219192
Apr 16 04:17:16.028926 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 04:17:16.036380 (sd-merge)[1198]: Merged extensions into '/usr'.
Apr 16 04:17:16.095732 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 04:17:16.095760 systemd[1]: Reloading...
Apr 16 04:17:16.223776 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Apr 16 04:17:16.223822 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Apr 16 04:17:17.759343 zram_generator::config[1227]: No configuration found.
Apr 16 04:17:18.339438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:17:18.634416 systemd[1]: Reloading finished in 2531 ms.
Apr 16 04:17:18.732086 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:17:18.747588 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 04:17:18.840257 systemd[1]: Starting ensure-sysext.service...
Apr 16 04:17:18.879629 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:17:18.983599 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Apr 16 04:17:18.983616 systemd[1]: Reloading...
Apr 16 04:17:19.070053 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 04:17:19.070519 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 04:17:19.071600 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 04:17:19.071899 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Apr 16 04:17:19.075121 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Apr 16 04:17:19.082874 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 04:17:19.082889 systemd-tmpfiles[1266]: Skipping /boot
Apr 16 04:17:19.089724 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 04:17:19.110370 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 04:17:19.110388 systemd-tmpfiles[1266]: Skipping /boot
Apr 16 04:17:19.176089 zram_generator::config[1294]: No configuration found.
Apr 16 04:17:19.846564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:17:20.379699 systemd[1]: Reloading finished in 1392 ms.
Apr 16 04:17:20.658468 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 04:17:20.725746 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 04:17:20.852852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:17:21.002489 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 04:17:21.024071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 04:17:21.037011 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 04:17:21.048268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 04:17:21.059404 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:17:21.065108 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 04:17:21.086349 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 04:17:21.104463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:21.104799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:17:21.128179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:17:21.152146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:17:21.238147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:17:21.249807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:17:21.262627 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:21.268863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:17:21.271503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:17:21.321061 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Apr 16 04:17:21.360300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:21.361667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:17:21.385010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:17:21.398609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:17:21.399277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:21.445680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 04:17:21.524751 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 04:17:21.553496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:17:21.553808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:17:21.568910 augenrules[1361]: No rules
Apr 16 04:17:21.571410 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 04:17:21.601909 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 04:17:21.608134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:17:21.633926 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:17:21.634262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:17:21.646833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:17:21.647114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:17:21.698449 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 04:17:21.714079 systemd[1]: Finished ensure-sysext.service.
Apr 16 04:17:21.719285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:21.719515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:17:21.735464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:17:21.910063 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 04:17:21.936439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:17:21.974187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:17:21.977716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:17:22.011731 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 04:17:22.161901 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 04:17:22.227790 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 04:17:22.246914 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 04:17:22.247145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:17:22.259535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:17:22.259748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:17:22.275050 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 04:17:22.275264 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 04:17:22.282117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:17:22.282429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:17:22.291978 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:17:22.292218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:17:22.329822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 04:17:22.337630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 04:17:22.340441 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 04:17:22.425781 systemd-resolved[1338]: Positive Trust Anchors:
Apr 16 04:17:22.425803 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 04:17:22.425838 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 04:17:22.455488 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 04:17:22.489806 systemd-resolved[1338]: Defaulting to hostname 'linux'.
Apr 16 04:17:22.500857 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 04:17:22.508042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:17:22.743097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1392)
Apr 16 04:17:22.837584 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 04:17:22.841878 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 04:17:22.879431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 04:17:22.887001 systemd-networkd[1402]: lo: Link UP
Apr 16 04:17:22.887022 systemd-networkd[1402]: lo: Gained carrier
Apr 16 04:17:22.888836 systemd-networkd[1402]: Enumeration completed
Apr 16 04:17:22.893583 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:22.893591 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 04:17:22.894867 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:22.894894 systemd-networkd[1402]: eth0: Link UP
Apr 16 04:17:22.894898 systemd-networkd[1402]: eth0: Gained carrier
Apr 16 04:17:22.894907 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:17:22.902671 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 04:17:22.906173 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 04:17:22.915060 systemd[1]: Reached target network.target - Network.
Apr 16 04:17:22.931026 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 04:17:22.938476 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 04:17:22.939289 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Apr 16 04:17:23.899124 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 16 04:17:23.899209 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2026-04-16 04:17:23.898933 UTC.
Apr 16 04:17:23.900196 systemd-resolved[1338]: Clock change detected. Flushing caches.
Apr 16 04:17:23.988522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 04:17:24.002679 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 16 04:17:24.062394 kernel: ACPI: button: Power Button [PWRF]
Apr 16 04:17:24.148574 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 04:17:24.149408 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 04:17:24.153763 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 04:17:24.171760 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 04:17:24.925748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:17:25.457009 systemd-networkd[1402]: eth0: Gained IPv6LL
Apr 16 04:17:25.510834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 04:17:25.518024 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 04:17:25.627596 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 04:17:25.768680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:17:26.458023 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 04:17:26.589254 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 04:17:26.926064 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 04:17:27.313235 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 04:17:27.336831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:17:27.358420 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 04:17:27.368889 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 04:17:27.381227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 04:17:27.389842 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 04:17:27.408415 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 04:17:27.486515 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 04:17:27.525417 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 04:17:27.526031 systemd[1]: Reached target paths.target - Path Units.
Apr 16 04:17:27.539520 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 04:17:27.545972 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 04:17:27.551903 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 04:17:27.594243 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 04:17:27.828393 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 04:17:27.839236 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 04:17:27.915741 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 04:17:27.937514 systemd[1]: Reached target basic.target - Basic System.
Apr 16 04:17:27.968348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 04:17:27.973587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 04:17:28.000695 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 04:17:28.060986 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 04:17:28.192708 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 16 04:17:28.244126 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 04:17:28.274823 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 04:17:28.305737 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 04:17:28.315013 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 04:17:28.321406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:17:28.328078 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 04:17:28.329539 jq[1443]: false
Apr 16 04:17:28.355329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 04:17:28.368650 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 04:17:28.382994 extend-filesystems[1444]: Found loop3
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found loop4
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found loop5
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found sr0
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda1
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda2
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda3
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found usr
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda4
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda6
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda7
Apr 16 04:17:28.391376 extend-filesystems[1444]: Found vda9
Apr 16 04:17:28.391376 extend-filesystems[1444]: Checking size of /dev/vda9
Apr 16 04:17:28.490699 dbus-daemon[1442]: [system] SELinux support is enabled
Apr 16 04:17:28.600166 extend-filesystems[1444]: Resized partition /dev/vda9
Apr 16 04:17:28.403285 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 04:17:28.530846 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 04:17:28.585100 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 04:17:28.597336 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 04:17:28.604092 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 04:17:28.614514 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024)
Apr 16 04:17:28.649737 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 16 04:17:28.649798 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1392)
Apr 16 04:17:28.625615 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 04:17:28.660064 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 04:17:28.666341 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 04:17:28.700615 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 04:17:28.773739 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 04:17:28.774062 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 04:17:28.780669 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 04:17:28.780943 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 04:17:28.787104 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 04:17:28.798579 jq[1470]: true
Apr 16 04:17:28.808834 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 04:17:28.809085 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 04:17:28.827225 update_engine[1467]: I20260416 04:17:28.820040 1467 main.cc:92] Flatcar Update Engine starting
Apr 16 04:17:28.829544 update_engine[1467]: I20260416 04:17:28.829272 1467 update_check_scheduler.cc:74] Next update check in 2m12s
Apr 16 04:17:28.945066 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 16 04:17:28.925508 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 04:17:28.949100 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 04:17:28.949100 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 16 04:17:28.949100 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 16 04:17:29.017659 jq[1478]: true
Apr 16 04:17:28.948773 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 16 04:17:29.030235 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Apr 16 04:17:28.949069 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 16 04:17:28.964114 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 04:17:28.964690 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 04:17:29.371958 tar[1477]: linux-amd64/LICENSE
Apr 16 04:17:29.387930 tar[1477]: linux-amd64/helm
Apr 16 04:17:29.388257 systemd-logind[1466]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 16 04:17:29.388282 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 04:17:29.491797 systemd-logind[1466]: New seat seat0.
Apr 16 04:17:29.516851 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 04:17:29.612859 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 04:17:29.643292 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 04:17:29.643745 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 04:17:29.644016 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 04:17:29.664437 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 04:17:29.668680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 04:17:29.719694 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 04:17:29.910187 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 04:17:29.918442 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 04:17:30.005718 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 16 04:17:30.251738 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 04:17:31.623903 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 04:17:32.072050 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 04:17:32.351644 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 04:17:32.537109 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 04:17:32.537349 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 04:17:32.637609 containerd[1480]: time="2026-04-16T04:17:32.631378550Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 16 04:17:32.642576 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 04:17:33.191286 containerd[1480]: time="2026-04-16T04:17:33.190200435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.201553 containerd[1480]: time="2026-04-16T04:17:33.201253541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:17:33.209434 containerd[1480]: time="2026-04-16T04:17:33.202357670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 04:17:33.209434 containerd[1480]: time="2026-04-16T04:17:33.202539532Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 04:17:33.278125 containerd[1480]: time="2026-04-16T04:17:33.277685690Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 04:17:33.279612 containerd[1480]: time="2026-04-16T04:17:33.279582502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.280220 containerd[1480]: time="2026-04-16T04:17:33.280187482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:17:33.280382 containerd[1480]: time="2026-04-16T04:17:33.280368823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.281556 containerd[1480]: time="2026-04-16T04:17:33.281513320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:17:33.281641 containerd[1480]: time="2026-04-16T04:17:33.281628387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.281695 containerd[1480]: time="2026-04-16T04:17:33.281682472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:17:33.281736 containerd[1480]: time="2026-04-16T04:17:33.281726774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.287525 containerd[1480]: time="2026-04-16T04:17:33.287216925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.322051 containerd[1480]: time="2026-04-16T04:17:33.321598796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 04:17:33.337962 containerd[1480]: time="2026-04-16T04:17:33.337417000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 04:17:33.337962 containerd[1480]: time="2026-04-16T04:17:33.337766404Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 04:17:33.338740 containerd[1480]: time="2026-04-16T04:17:33.338647403Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 04:17:33.361882 containerd[1480]: time="2026-04-16T04:17:33.345687858Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 04:17:33.370367 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 04:17:33.528878 containerd[1480]: time="2026-04-16T04:17:33.528215251Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 04:17:33.536021 containerd[1480]: time="2026-04-16T04:17:33.529017695Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 04:17:33.536021 containerd[1480]: time="2026-04-16T04:17:33.529124100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 04:17:33.536021 containerd[1480]: time="2026-04-16T04:17:33.529148639Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 04:17:33.536021 containerd[1480]: time="2026-04-16T04:17:33.529209096Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 04:17:33.536021 containerd[1480]: time="2026-04-16T04:17:33.529670625Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 04:17:33.536021 containerd[1480]: time="2026-04-16T04:17:33.531635926Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 04:17:33.540055 containerd[1480]: time="2026-04-16T04:17:33.539730925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 04:17:33.540055 containerd[1480]: time="2026-04-16T04:17:33.539851025Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 04:17:33.540055 containerd[1480]: time="2026-04-16T04:17:33.539955741Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 04:17:33.540055 containerd[1480]: time="2026-04-16T04:17:33.539977801Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.540621 containerd[1480]: time="2026-04-16T04:17:33.540430377Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.540621 containerd[1480]: time="2026-04-16T04:17:33.540534229Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.543395 containerd[1480]: time="2026-04-16T04:17:33.540700923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.543395 containerd[1480]: time="2026-04-16T04:17:33.543045884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.576276 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.590199439Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.590577756Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.590702157Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.591007943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.591032852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.591048229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.591064120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.591078415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.590874 containerd[1480]: time="2026-04-16T04:17:33.591093537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.608935 containerd[1480]: time="2026-04-16T04:17:33.595836335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.608935 containerd[1480]: time="2026-04-16T04:17:33.602628080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.609105684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.609305001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.609323920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.612355827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.612590626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.612696312Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 04:17:33.612664 containerd[1480]: time="2026-04-16T04:17:33.613037351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613182130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613200912Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613642898Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613683553Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613701999Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613719327Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613731986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613758942Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613849835Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 04:17:33.614868 containerd[1480]: time="2026-04-16T04:17:33.613863558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 04:17:33.873266 containerd[1480]: time="2026-04-16T04:17:33.615626295Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 16 04:17:33.873266 containerd[1480]: time="2026-04-16T04:17:33.615844250Z" level=info msg="Connect containerd service"
Apr 16 04:17:33.873266 containerd[1480]: time="2026-04-16T04:17:33.615920475Z" level=info msg="using legacy CRI server"
Apr 16 04:17:33.873266 containerd[1480]: time="2026-04-16T04:17:33.615945087Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 04:17:33.873266 containerd[1480]: time="2026-04-16T04:17:33.616178794Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 16 04:17:33.875068 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 04:17:33.889579 containerd[1480]: time="2026-04-16T04:17:33.874998254Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 04:17:33.892159 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 04:17:33.921653 containerd[1480]: time="2026-04-16T04:17:33.904192842Z" level=info msg="Start subscribing containerd event" Apr 16 04:17:33.921653 containerd[1480]: time="2026-04-16T04:17:33.919778949Z" level=info msg="Start recovering state" Apr 16 04:17:33.929168 containerd[1480]: time="2026-04-16T04:17:33.928809882Z" level=info msg="Start event monitor" Apr 16 04:17:33.934995 containerd[1480]: time="2026-04-16T04:17:33.932911911Z" level=info msg="Start snapshots syncer" Apr 16 04:17:33.934995 containerd[1480]: time="2026-04-16T04:17:33.933127265Z" level=info msg="Start cni network conf syncer for default" Apr 16 04:17:33.934995 containerd[1480]: time="2026-04-16T04:17:33.933139645Z" level=info msg="Start streaming server" Apr 16 04:17:33.964980 containerd[1480]: time="2026-04-16T04:17:33.964180547Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 04:17:33.984101 containerd[1480]: time="2026-04-16T04:17:33.982783418Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 04:17:34.007511 containerd[1480]: time="2026-04-16T04:17:34.003349843Z" level=info msg="containerd successfully booted in 1.389918s" Apr 16 04:17:34.004578 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 04:17:34.364005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 04:17:34.387594 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:48606.service - OpenSSH per-connection server daemon (10.0.0.1:48606). 
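The `failed to load cni during init` error above is expected on a freshly provisioned node: the CRI plugin looks in /etc/cni/net.d (the NetworkPluginConfDir shown in the config dump above) and finds no network config until a CNI add-on installs one. A minimal sketch of such a conflist — the plugin choice, network name, and subnet here are illustrative assumptions, not values taken from this log:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
```

Once a file like this lands in /etc/cni/net.d, the "cni network conf syncer" started later in this log should pick it up without restarting containerd.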
Apr 16 04:17:34.798715 tar[1477]: linux-amd64/README.md Apr 16 04:17:34.898819 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 04:17:35.537407 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 48606 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:35.541227 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:35.989270 systemd-logind[1466]: New session 1 of user core. Apr 16 04:17:36.045001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 04:17:36.189736 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 04:17:36.704528 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 04:17:36.743102 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 04:17:36.911079 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 04:17:38.315381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:17:38.359645 systemd[1553]: Queued start job for default target default.target. Apr 16 04:17:38.396837 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:17:38.401600 systemd[1553]: Created slice app.slice - User Application Slice. Apr 16 04:17:38.401758 systemd[1553]: Reached target paths.target - Paths. Apr 16 04:17:38.401779 systemd[1553]: Reached target timers.target - Timers. Apr 16 04:17:38.416658 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 04:17:38.437767 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 04:17:38.456213 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 04:17:38.457269 systemd[1553]: Reached target sockets.target - Sockets. 
Apr 16 04:17:38.457298 systemd[1553]: Reached target basic.target - Basic System. Apr 16 04:17:38.467540 systemd[1553]: Reached target default.target - Main User Target. Apr 16 04:17:38.467727 systemd[1553]: Startup finished in 1.488s. Apr 16 04:17:38.476762 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 04:17:38.891066 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 04:17:38.918038 systemd[1]: Startup finished in 6.348s (kernel) + 42.216s (initrd) + 37.531s (userspace) = 1min 26.095s. Apr 16 04:17:39.135038 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:54650.service - OpenSSH per-connection server daemon (10.0.0.1:54650). Apr 16 04:17:39.722259 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 54650 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:39.744332 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:40.076392 systemd-logind[1466]: New session 2 of user core. Apr 16 04:17:40.288670 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 04:17:41.067132 sshd[1572]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:41.233309 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:54650.service: Deactivated successfully. Apr 16 04:17:41.255342 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 04:17:41.278555 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Apr 16 04:17:41.390274 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:54656.service - OpenSSH per-connection server daemon (10.0.0.1:54656). Apr 16 04:17:41.446024 systemd-logind[1466]: Removed session 2. 
Apr 16 04:17:41.674445 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:41.688646 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:41.917626 systemd-logind[1466]: New session 3 of user core. Apr 16 04:17:41.939353 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 04:17:42.203123 sshd[1586]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:42.319741 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:54656.service: Deactivated successfully. Apr 16 04:17:42.357774 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 04:17:42.621370 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. Apr 16 04:17:42.744357 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:54658.service - OpenSSH per-connection server daemon (10.0.0.1:54658). Apr 16 04:17:42.846000 systemd-logind[1466]: Removed session 3. Apr 16 04:17:43.415219 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 54658 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:43.417790 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:43.772161 systemd-logind[1466]: New session 4 of user core. Apr 16 04:17:43.791327 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 04:17:44.150208 sshd[1594]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:44.414651 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:54658.service: Deactivated successfully. Apr 16 04:17:44.442631 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 04:17:44.461738 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Apr 16 04:17:44.533798 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:54670.service - OpenSSH per-connection server daemon (10.0.0.1:54670). Apr 16 04:17:44.588047 systemd-logind[1466]: Removed session 4. 
Apr 16 04:17:44.784127 kubelet[1564]: E0416 04:17:44.772730 1564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:17:44.790860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:17:44.800113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:17:44.803119 systemd[1]: kubelet.service: Consumed 5.938s CPU time. Apr 16 04:17:44.866773 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 54670 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:44.873740 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:44.967781 systemd-logind[1466]: New session 5 of user core. Apr 16 04:17:45.191269 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 04:17:47.110796 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 04:17:47.113598 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:17:47.391333 sudo[1605]: pam_unix(sudo:session): session closed for user root Apr 16 04:17:47.433931 sshd[1601]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:47.497218 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:54670.service: Deactivated successfully. Apr 16 04:17:47.671037 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 04:17:47.701181 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Apr 16 04:17:47.895072 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:49502.service - OpenSSH per-connection server daemon (10.0.0.1:49502). Apr 16 04:17:47.951828 systemd-logind[1466]: Removed session 5. 
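The kubelet failure above (repeated on every scheduled restart below) is caused by the missing /var/lib/kubelet/config.yaml. On a kubeadm-provisioned node that file is written during `kubeadm init`/`kubeadm join`, so the unit crash-loops until bootstrap runs. A minimal sketch of the KubeletConfiguration that normally ends up there — every field value below is illustrative, not recovered from this log:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd driver matches SystemdCgroup:true in the containerd runc options above
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
```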
Apr 16 04:17:49.282176 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 49502 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:49.285139 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:49.468544 systemd-logind[1466]: New session 6 of user core. Apr 16 04:17:49.490625 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 04:17:49.829727 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 04:17:49.830277 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:17:50.089151 sudo[1614]: pam_unix(sudo:session): session closed for user root Apr 16 04:17:50.229239 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 16 04:17:50.239216 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:17:50.618053 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 16 04:17:50.780570 auditctl[1617]: No rules Apr 16 04:17:50.860600 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:17:50.861698 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 16 04:17:50.978049 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 04:17:51.464412 augenrules[1635]: No rules Apr 16 04:17:51.481675 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 04:17:51.515300 sudo[1613]: pam_unix(sudo:session): session closed for user root Apr 16 04:17:51.546901 sshd[1610]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:51.622507 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:49502.service: Deactivated successfully. Apr 16 04:17:51.711429 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 16 04:17:51.756339 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Apr 16 04:17:51.881162 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:49512.service - OpenSSH per-connection server daemon (10.0.0.1:49512). Apr 16 04:17:51.953228 systemd-logind[1466]: Removed session 6. Apr 16 04:17:53.763046 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 49512 ssh2: RSA SHA256:k6txjuWbuT3fOc5R3ejt+fIilHnNyFQrCWM8I+0TlNQ Apr 16 04:17:53.886749 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:54.271614 systemd-logind[1466]: New session 7 of user core. Apr 16 04:17:54.494112 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 04:17:54.838096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 04:17:54.844172 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 04:17:54.844673 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:17:54.877722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:17:57.718908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:17:58.193144 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:00.244950 kubelet[1672]: E0416 04:18:00.243363 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:18:00.290858 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 16 04:18:00.310855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:18:00.311109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:18:00.311690 systemd[1]: kubelet.service: Consumed 2.355s CPU time. Apr 16 04:18:00.355385 (dockerd)[1681]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 04:18:04.745670 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1228820887 wd_nsec: 1228820897 Apr 16 04:18:10.413525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 04:18:10.532543 dockerd[1681]: time="2026-04-16T04:18:10.530772497Z" level=info msg="Starting up" Apr 16 04:18:10.597523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:18:13.041057 dockerd[1681]: time="2026-04-16T04:18:13.025670041Z" level=info msg="Loading containers: start." Apr 16 04:18:13.736384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:18:13.736623 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:13.835458 update_engine[1467]: I20260416 04:18:13.830071 1467 update_attempter.cc:509] Updating boot flags... 
Apr 16 04:18:14.177782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1744) Apr 16 04:18:14.691808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1748) Apr 16 04:18:15.004982 kubelet[1717]: E0416 04:18:15.001310 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:18:15.055675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:18:15.055934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:18:15.062168 systemd[1]: kubelet.service: Consumed 2.027s CPU time. Apr 16 04:18:16.694758 kernel: Initializing XFRM netlink socket Apr 16 04:18:22.004654 systemd-networkd[1402]: docker0: Link UP Apr 16 04:18:22.355385 dockerd[1681]: time="2026-04-16T04:18:22.354777834Z" level=info msg="Loading containers: done." Apr 16 04:18:22.717778 dockerd[1681]: time="2026-04-16T04:18:22.700317499Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 04:18:22.717778 dockerd[1681]: time="2026-04-16T04:18:22.717431222Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 16 04:18:22.770995 dockerd[1681]: time="2026-04-16T04:18:22.721750398Z" level=info msg="Daemon has completed initialization" Apr 16 04:18:24.461021 dockerd[1681]: time="2026-04-16T04:18:24.459033710Z" level=info msg="API listen on /run/docker.sock" Apr 16 04:18:24.497378 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 16 04:18:25.124382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 04:18:25.169916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:18:28.523037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:18:28.569164 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:30.197327 kubelet[1864]: E0416 04:18:30.196734 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:18:30.202543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:18:30.227240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:18:30.269306 systemd[1]: kubelet.service: Consumed 1.970s CPU time. Apr 16 04:18:40.360810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 04:18:40.487895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:18:42.839840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:18:42.896981 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:44.069114 kubelet[1882]: E0416 04:18:44.057858 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:18:44.120399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:18:44.120882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:18:44.125973 systemd[1]: kubelet.service: Consumed 1.677s CPU time. Apr 16 04:18:44.721563 containerd[1480]: time="2026-04-16T04:18:44.712122278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 04:18:53.270878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684471354.mount: Deactivated successfully. Apr 16 04:18:54.426714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 04:18:54.512259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:18:57.017107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:18:57.103902 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:18:59.371325 kubelet[1913]: E0416 04:18:59.370445 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:18:59.396667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:18:59.397031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:18:59.397845 systemd[1]: kubelet.service: Consumed 2.104s CPU time. Apr 16 04:19:09.637426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 04:19:09.990196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:19:12.616540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:19:12.655585 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:19:13.788946 kubelet[1975]: E0416 04:19:13.788545 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:19:13.794938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:19:13.795177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:19:13.800708 systemd[1]: kubelet.service: Consumed 1.575s CPU time. 
Apr 16 04:19:31.596875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 16 04:19:31.764259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:19:35.848921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:19:35.856297 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:19:37.300051 containerd[1480]: time="2026-04-16T04:19:37.298376765Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 16 04:19:37.389946 containerd[1480]: time="2026-04-16T04:19:37.340277339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:19:37.403001 kubelet[1991]: E0416 04:19:37.396418 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:19:37.478256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:19:37.486371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:19:37.498939 containerd[1480]: time="2026-04-16T04:19:37.494162846Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:19:37.499695 systemd[1]: kubelet.service: Consumed 2.381s CPU time. 
Apr 16 04:19:37.960089 containerd[1480]: time="2026-04-16T04:19:37.956704073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:19:38.510556 containerd[1480]: time="2026-04-16T04:19:38.509497323Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 53.795615034s" Apr 16 04:19:38.510556 containerd[1480]: time="2026-04-16T04:19:38.510419105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 16 04:19:38.679675 containerd[1480]: time="2026-04-16T04:19:38.672695579Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 04:19:40.828712 update_engine[1467]: I20260416 04:19:40.827903 1467 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 04:19:40.828712 update_engine[1467]: I20260416 04:19:40.828425 1467 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 04:19:40.839941 update_engine[1467]: I20260416 04:19:40.839383 1467 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 04:19:40.849252 update_engine[1467]: I20260416 04:19:40.847896 1467 omaha_request_params.cc:62] Current group set to lts Apr 16 04:19:40.849252 update_engine[1467]: I20260416 04:19:40.848724 1467 update_attempter.cc:499] Already updated boot flags. Skipping. 
Apr 16 04:19:40.849252 update_engine[1467]: I20260416 04:19:40.848747 1467 update_attempter.cc:643] Scheduling an action processor start. Apr 16 04:19:40.849252 update_engine[1467]: I20260416 04:19:40.848766 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 04:19:40.849252 update_engine[1467]: I20260416 04:19:40.849038 1467 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 04:19:40.871916 update_engine[1467]: I20260416 04:19:40.864761 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 04:19:40.871916 update_engine[1467]: I20260416 04:19:40.865023 1467 omaha_request_action.cc:272] Request: Apr 16 04:19:40.871916 update_engine[1467]: [Omaha request XML body not preserved in capture] Apr 16 04:19:40.871916 update_engine[1467]: I20260416 04:19:40.865036 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:19:40.981719 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 04:19:41.069998 update_engine[1467]: I20260416 04:19:41.069782 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:19:41.070898 update_engine[1467]: I20260416 04:19:41.070833 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:19:41.102590 update_engine[1467]: E20260416 04:19:41.101536 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 04:19:41.102590 update_engine[1467]: I20260416 04:19:41.101812 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 04:19:47.797019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 16 04:19:48.059567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:19:49.903275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:19:50.004596 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:19:51.554453 kubelet[2012]: E0416 04:19:51.553955 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:19:51.563873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:19:51.564800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:19:51.565594 systemd[1]: kubelet.service: Consumed 1.440s CPU time. Apr 16 04:19:51.854236 update_engine[1467]: I20260416 04:19:51.850373 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 04:19:51.866586 update_engine[1467]: I20260416 04:19:51.856412 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 04:19:51.866586 update_engine[1467]: I20260416 04:19:51.857110 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 04:19:51.896167 update_engine[1467]: E20260416 04:19:51.883036 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:19:51.899575 update_engine[1467]: I20260416 04:19:51.899511 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 16 04:19:56.739139 containerd[1480]: time="2026-04-16T04:19:56.729768474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:19:56.757882 containerd[1480]: time="2026-04-16T04:19:56.755178753Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670"
Apr 16 04:19:56.806941 containerd[1480]: time="2026-04-16T04:19:56.802934676Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:19:57.700281 containerd[1480]: time="2026-04-16T04:19:57.695591917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:19:58.064665 containerd[1480]: time="2026-04-16T04:19:58.044131771Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 19.359895735s"
Apr 16 04:19:58.064665 containerd[1480]: time="2026-04-16T04:19:58.044399608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 16 04:19:58.142429 containerd[1480]: time="2026-04-16T04:19:58.131429741Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 16 04:20:01.585706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 16 04:20:01.642784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:20:01.856799 update_engine[1467]: I20260416 04:20:01.845635 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:01.856799 update_engine[1467]: I20260416 04:20:01.855422 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:01.925764 update_engine[1467]: I20260416 04:20:01.864260 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:01.925764 update_engine[1467]: E20260416 04:20:01.893386 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:01.925764 update_engine[1467]: I20260416 04:20:01.897215 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 16 04:20:05.222969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:20:05.277849 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:20:08.050083 kubelet[2034]: E0416 04:20:08.049363 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:20:08.083641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:20:08.083989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:08.098061 systemd[1]: kubelet.service: Consumed 2.603s CPU time.
Apr 16 04:20:11.837217 update_engine[1467]: I20260416 04:20:11.835243 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:11.837217 update_engine[1467]: I20260416 04:20:11.836153 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:11.837217 update_engine[1467]: I20260416 04:20:11.836773 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:11.851373 update_engine[1467]: E20260416 04:20:11.851271 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:11.851645 update_engine[1467]: I20260416 04:20:11.851387 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 04:20:11.851645 update_engine[1467]: I20260416 04:20:11.851444 1467 omaha_request_action.cc:617] Omaha request response:
Apr 16 04:20:11.851943 update_engine[1467]: E20260416 04:20:11.851861 1467 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.852455 1467 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.852510 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.852517 1467 update_attempter.cc:306] Processing Done.
Apr 16 04:20:11.860725 update_engine[1467]: E20260416 04:20:11.852535 1467 update_attempter.cc:619] Update failed.
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.852542 1467 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.852548 1467 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.852555 1467 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 16 04:20:11.860725 update_engine[1467]: I20260416 04:20:11.857039 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 04:20:11.863991 update_engine[1467]: I20260416 04:20:11.861541 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 04:20:11.863991 update_engine[1467]: I20260416 04:20:11.861850 1467 omaha_request_action.cc:272] Request:
Apr 16 04:20:11.863991 update_engine[1467]:
Apr 16 04:20:11.863991 update_engine[1467]:
Apr 16 04:20:11.863991 update_engine[1467]:
Apr 16 04:20:11.863991 update_engine[1467]:
Apr 16 04:20:11.863991 update_engine[1467]:
Apr 16 04:20:11.863991 update_engine[1467]:
Apr 16 04:20:11.863991 update_engine[1467]: I20260416 04:20:11.861864 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 04:20:11.863991 update_engine[1467]: I20260416 04:20:11.863692 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 04:20:11.870393 update_engine[1467]: I20260416 04:20:11.866340 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 04:20:11.897758 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 16 04:20:11.902968 update_engine[1467]: E20260416 04:20:11.897231 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.900844 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.903232 1467 omaha_request_action.cc:617] Omaha request response:
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.903250 1467 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.903256 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.903262 1467 update_attempter.cc:306] Processing Done.
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.903337 1467 update_attempter.cc:310] Error event sent.
Apr 16 04:20:11.904198 update_engine[1467]: I20260416 04:20:11.903580 1467 update_check_scheduler.cc:74] Next update check in 44m12s
Apr 16 04:20:11.905205 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 16 04:20:15.755289 containerd[1480]: time="2026-04-16T04:20:15.660423096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:20:15.808146 containerd[1480]: time="2026-04-16T04:20:15.769983222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823"
Apr 16 04:20:15.887819 containerd[1480]: time="2026-04-16T04:20:15.879561839Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:20:17.313341 containerd[1480]: time="2026-04-16T04:20:17.312455261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:20:17.505129 containerd[1480]: time="2026-04-16T04:20:17.504524919Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 19.367587571s"
Apr 16 04:20:17.505129 containerd[1480]: time="2026-04-16T04:20:17.504863886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 16 04:20:17.635052 containerd[1480]: time="2026-04-16T04:20:17.631572106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 16 04:20:18.351842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 16 04:20:18.369719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:20:22.754417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:20:22.843343 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:20:24.869173 kubelet[2050]: E0416 04:20:24.866402 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:20:24.938097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:20:24.941420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:24.945448 systemd[1]: kubelet.service: Consumed 2.775s CPU time.
Apr 16 04:20:35.181356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 16 04:20:35.494958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:20:38.927441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:20:38.973863 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:20:43.680364 kubelet[2073]: E0416 04:20:43.678421 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:20:43.816906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:20:43.817159 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:20:43.829131 systemd[1]: kubelet.service: Consumed 3.720s CPU time.
Apr 16 04:20:53.247336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576628318.mount: Deactivated successfully.
Apr 16 04:20:54.014548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 16 04:20:54.388166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:21:03.738841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:21:03.902670 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:21:25.561807 containerd[1480]: time="2026-04-16T04:21:25.559219295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:21:25.662986 containerd[1480]: time="2026-04-16T04:21:25.595856353Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848"
Apr 16 04:21:27.140909 containerd[1480]: time="2026-04-16T04:21:27.098784412Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:21:32.138246 containerd[1480]: time="2026-04-16T04:21:32.137455304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:21:33.308595 containerd[1480]: time="2026-04-16T04:21:33.301968188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1m15.645558586s"
Apr 16 04:21:33.308595 containerd[1480]: time="2026-04-16T04:21:33.302771869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 16 04:21:33.315839 containerd[1480]: time="2026-04-16T04:21:33.315734607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 16 04:21:33.534214 kubelet[2094]: E0416 04:21:33.528423 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:21:33.552142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:21:33.556451 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:21:33.557889 systemd[1]: kubelet.service: Consumed 19.834s CPU time.
Apr 16 04:21:44.244406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 16 04:21:44.396696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:21:48.828434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673754279.mount: Deactivated successfully.
Apr 16 04:21:52.583209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:21:52.980378 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:22:08.017227 kubelet[2117]: E0416 04:22:07.998784 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:22:08.142271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:22:08.143045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:22:08.338849 systemd[1]: kubelet.service: Consumed 11.622s CPU time.
Apr 16 04:22:18.550194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 16 04:22:18.837553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:22:28.900627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:22:29.065600 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:22:49.159584 kubelet[2143]: E0416 04:22:49.156032 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:22:49.304410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:22:49.344993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:22:49.390936 systemd[1]: kubelet.service: Consumed 15.881s CPU time.
Apr 16 04:22:59.341838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Apr 16 04:22:59.653243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:23:05.510352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:23:05.519557 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:23:07.120994 kubelet[2162]: E0416 04:23:07.120495 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:23:07.146934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:23:07.149895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:07.154446 systemd[1]: kubelet.service: Consumed 2.976s CPU time.
Apr 16 04:23:17.339987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
Apr 16 04:23:17.426930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:23:22.270253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:23:22.482238 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:23:28.547131 kubelet[2221]: E0416 04:23:28.546834 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:23:28.595355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:23:28.604628 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:28.606931 systemd[1]: kubelet.service: Consumed 5.604s CPU time.
Apr 16 04:23:29.281675 containerd[1480]: time="2026-04-16T04:23:29.278246791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:29.323504 containerd[1480]: time="2026-04-16T04:23:29.317820397Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483"
Apr 16 04:23:29.405714 containerd[1480]: time="2026-04-16T04:23:29.404315800Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:29.423978 containerd[1480]: time="2026-04-16T04:23:29.421102535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:29.589381 containerd[1480]: time="2026-04-16T04:23:29.564206507Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1m56.248416691s"
Apr 16 04:23:29.593909 containerd[1480]: time="2026-04-16T04:23:29.590551703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 16 04:23:29.745036 containerd[1480]: time="2026-04-16T04:23:29.743840662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 04:23:35.002066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667720218.mount: Deactivated successfully.
Apr 16 04:23:35.386788 containerd[1480]: time="2026-04-16T04:23:35.385607533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:35.533658 containerd[1480]: time="2026-04-16T04:23:35.532276937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 16 04:23:35.874461 containerd[1480]: time="2026-04-16T04:23:35.866139686Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:39.365246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
Apr 16 04:23:40.591819 containerd[1480]: time="2026-04-16T04:23:40.591408663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:23:40.597040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:23:41.675704 containerd[1480]: time="2026-04-16T04:23:41.667066258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 11.916441091s"
Apr 16 04:23:41.797924 containerd[1480]: time="2026-04-16T04:23:41.678931421Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 16 04:23:41.852501 containerd[1480]: time="2026-04-16T04:23:41.851621760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 16 04:23:46.052671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:23:46.110188 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:23:47.918757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845787866.mount: Deactivated successfully.
Apr 16 04:23:49.287771 kubelet[2241]: E0416 04:23:49.280912 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:23:49.345104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:23:49.345582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:23:49.362500 systemd[1]: kubelet.service: Consumed 3.609s CPU time.
Apr 16 04:23:59.607637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
Apr 16 04:23:59.660340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:24:02.662685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:24:02.689098 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:24:03.902338 kubelet[2273]: E0416 04:24:03.901703 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:24:03.928675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:24:03.929033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:24:03.937651 systemd[1]: kubelet.service: Consumed 1.797s CPU time.
Apr 16 04:24:14.092095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
Apr 16 04:24:14.175932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:24:15.725939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:24:15.735520 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:24:15.801434 containerd[1480]: time="2026-04-16T04:24:15.795176537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:24:15.808136 containerd[1480]: time="2026-04-16T04:24:15.807994870Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255"
Apr 16 04:24:15.809548 containerd[1480]: time="2026-04-16T04:24:15.809497691Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:24:15.856925 containerd[1480]: time="2026-04-16T04:24:15.848160995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:24:15.903938 containerd[1480]: time="2026-04-16T04:24:15.903335296Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 34.051509401s"
Apr 16 04:24:15.903938 containerd[1480]: time="2026-04-16T04:24:15.903609716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 16 04:24:17.073696 kubelet[2337]: E0416 04:24:17.072751 2337 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:24:17.107256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:24:17.107731 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:24:17.120320 systemd[1]: kubelet.service: Consumed 1.536s CPU time.
Apr 16 04:24:27.695430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
Apr 16 04:24:28.173588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:24:38.965044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:24:39.427802 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:24:44.113305 kubelet[2367]: E0416 04:24:44.109896 2367 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:24:44.137184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:24:44.137789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:24:44.153416 systemd[1]: kubelet.service: Consumed 7.319s CPU time, 112.5M memory peak, 0B memory swap peak.
Apr 16 04:24:49.865985 containerd[1480]: time="2026-04-16T04:24:49.861024065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\""
Apr 16 04:24:54.499762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21.
Apr 16 04:24:54.639940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:25:02.286269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:25:02.310279 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:25:09.385383 kubelet[2398]: E0416 04:25:09.372252 2398 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:25:09.545231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:25:09.551078 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:25:09.586763 systemd[1]: kubelet.service: Consumed 7.735s CPU time.
Apr 16 04:25:20.281861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22.
Apr 16 04:25:20.797430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:25:25.394408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:25:25.466216 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:25:26.454490 kubelet[2431]: E0416 04:25:26.452619 2431 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:25:26.480263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:25:26.481213 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:25:26.481808 systemd[1]: kubelet.service: Consumed 2.316s CPU time.
Apr 16 04:25:30.398499 containerd[1480]: time="2026-04-16T04:25:30.397713958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:30.404609 containerd[1480]: time="2026-04-16T04:25:30.401727722Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.4: active requests=0, bytes read=26243935"
Apr 16 04:25:30.406600 containerd[1480]: time="2026-04-16T04:25:30.405638258Z" level=info msg="ImageCreate event name:\"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:30.411329 containerd[1480]: time="2026-04-16T04:25:30.410338787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:25:30.555116 containerd[1480]: time="2026-04-16T04:25:30.554230365Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.4\" with image id \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\", size \"27069180\" in 40.688810487s"
Apr 16 04:25:30.555116 containerd[1480]: time="2026-04-16T04:25:30.554692863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\" returns image reference \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\""
Apr 16 04:25:30.647979 containerd[1480]: time="2026-04-16T04:25:30.645229470Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\""
Apr 16 04:25:36.686151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23.
Apr 16 04:25:36.959418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:25:41.272121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:25:41.402347 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:25:44.657988 kubelet[2452]: E0416 04:25:44.650431 2452 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:25:44.677923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:25:44.678266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:25:44.702305 systemd[1]: kubelet.service: Consumed 3.423s CPU time.
Apr 16 04:25:55.381015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
Apr 16 04:25:55.951209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:26:02.365678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:26:02.485399 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:26:02.645700 containerd[1480]: time="2026-04-16T04:26:02.440536017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:26:02.707704 containerd[1480]: time="2026-04-16T04:26:02.492368304Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.4: active requests=0, bytes read=21163889"
Apr 16 04:26:03.856231 containerd[1480]: time="2026-04-16T04:26:03.855141466Z" level=info msg="ImageCreate event name:\"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:26:06.578995 containerd[1480]: time="2026-04-16T04:26:06.509406532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:26:08.082947 containerd[1480]: time="2026-04-16T04:26:08.072197250Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.4\" with image id \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\", size \"22820907\" in 37.423905851s"
Apr 16 04:26:08.144865 containerd[1480]: time="2026-04-16T04:26:08.087306311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\" returns image reference \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\""
Apr 16 04:26:09.001632 containerd[1480]: time="2026-04-16T04:26:09.000958762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\""
Apr 16 04:26:17.575834 kubelet[2470]: E0416 04:26:17.563374 2470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:26:17.585830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:26:17.586157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:26:17.602769 systemd[1]: kubelet.service: Consumed 10.106s CPU time.
Apr 16 04:26:28.452077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25.
Apr 16 04:26:28.807928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:26:36.359460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:26:36.407456 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:26:38.455352 kubelet[2491]: E0416 04:26:38.454779 2491 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:26:38.482204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:26:38.489302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:26:38.509616 systemd[1]: kubelet.service: Consumed 4.780s CPU time.
Apr 16 04:26:48.998969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26.
Apr 16 04:26:49.212050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:26:54.800387 containerd[1480]: time="2026-04-16T04:26:54.799661792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:26:54.992900 containerd[1480]: time="2026-04-16T04:26:54.867085355Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.4: active requests=0, bytes read=15727822"
Apr 16 04:26:55.601034 containerd[1480]: time="2026-04-16T04:26:55.582117411Z" level=info msg="ImageCreate event name:\"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:26:57.344128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:26:57.456033 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:26:57.993864 containerd[1480]: time="2026-04-16T04:26:57.992330968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:26:58.405143 containerd[1480]: time="2026-04-16T04:26:58.399547038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.4\" with image id \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\", size \"17384858\" in 49.333827307s"
Apr 16 04:26:58.420259 containerd[1480]: time="2026-04-16T04:26:58.417066317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\" returns image reference \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\""
Apr 16 04:26:59.234857 containerd[1480]: time="2026-04-16T04:26:59.206894998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\""
Apr 16 04:27:11.387266 kubelet[2508]: E0416 04:27:11.386283 2508 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:27:11.448399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:27:11.448932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:27:11.452449 systemd[1]: kubelet.service: Consumed 12.588s CPU time.
Apr 16 04:27:21.876630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27.
Apr 16 04:27:22.062311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:27:28.646154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:27:29.033301 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:27:35.806255 kubelet[2529]: E0416 04:27:35.792800 2529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:27:36.061345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:27:36.072429 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:27:36.078791 systemd[1]: kubelet.service: Consumed 6.758s CPU time.
Apr 16 04:27:47.107454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 28.
Apr 16 04:27:47.485884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:27:56.968804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:27:57.446942 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:28:03.847956 kubelet[2547]: E0416 04:28:03.845965 2547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:28:03.864984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:28:03.872787 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:28:03.911890 systemd[1]: kubelet.service: Consumed 7.471s CPU time.
Apr 16 04:28:16.175993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 29.
Apr 16 04:28:16.288295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:28:16.623880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825941331.mount: Deactivated successfully.
Apr 16 04:28:18.508451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:28:18.565342 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:28:19.294799 kubelet[2567]: E0416 04:28:19.294319 2567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:28:19.377724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:28:19.389857 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:28:19.403255 systemd[1]: kubelet.service: Consumed 1.414s CPU time.
Apr 16 04:28:20.157269 containerd[1480]: time="2026-04-16T04:28:20.155606142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:28:20.164739 containerd[1480]: time="2026-04-16T04:28:20.164505578Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.4: active requests=0, bytes read=25859803"
Apr 16 04:28:20.173722 containerd[1480]: time="2026-04-16T04:28:20.166683411Z" level=info msg="ImageCreate event name:\"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:28:20.395271 containerd[1480]: time="2026-04-16T04:28:20.336272382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:28:20.483638 containerd[1480]: time="2026-04-16T04:28:20.474045852Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.4\" with image id \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\", size \"25858928\" in 1m21.227526094s"
Apr 16 04:28:20.483638 containerd[1480]: time="2026-04-16T04:28:20.474373407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\" returns image reference \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\""
Apr 16 04:28:29.816660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 30.
Apr 16 04:28:29.897917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:28:33.784061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:28:33.889849 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:28:36.605142 kubelet[2585]: E0416 04:28:36.604211 2585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:28:36.634199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:28:36.634688 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:28:36.637311 systemd[1]: kubelet.service: Consumed 2.951s CPU time.
Apr 16 04:28:47.431353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 31.
Apr 16 04:28:47.905672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:29:06.899645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:29:07.409170 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:29:17.751537 kubelet[2603]: E0416 04:29:17.717813 2603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:29:17.995910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:29:18.011567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:29:18.038567 systemd[1]: kubelet.service: Consumed 14.787s CPU time.
Apr 16 04:29:28.101529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 32.
Apr 16 04:29:28.218761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:29:39.863366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:29:39.972036 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:29:42.576312 kubelet[2621]: E0416 04:29:42.574710 2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:29:42.645386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:29:42.648415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:29:42.652900 systemd[1]: kubelet.service: Consumed 7.303s CPU time.
Apr 16 04:29:52.982109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 33.
Apr 16 04:29:53.250532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:30:09.577251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:30:10.059074 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:30:19.849337 kubelet[2637]: E0416 04:30:19.840219 2637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:30:19.994221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:30:20.032117 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:30:20.051266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:30:20.087790 systemd[1]: kubelet.service: Consumed 11.059s CPU time, 112.6M memory peak, 0B memory swap peak.
Apr 16 04:30:20.670239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:30:26.386303 systemd[1]: Reloading requested from client PID 2653 ('systemctl') (unit session-7.scope)...
Apr 16 04:30:26.391161 systemd[1]: Reloading...
Apr 16 04:30:35.708654 zram_generator::config[2693]: No configuration found.
Apr 16 04:30:40.342582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 04:30:44.250764 systemd[1]: Reloading finished in 17838 ms.
Apr 16 04:30:46.615168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:30:46.652269 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 04:30:46.652801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:30:46.652916 systemd[1]: kubelet.service: Consumed 4.899s CPU time, 49.9M memory peak, 0B memory swap peak.
Apr 16 04:30:46.829745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:31:07.286029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:31:07.521981 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 04:31:09.569918 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 04:31:09.569918 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 04:31:09.579808 kubelet[2744]: I0416 04:31:09.575862 2744 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 04:31:11.518326 kubelet[2744]: I0416 04:31:11.515398 2744 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 04:31:11.518326 kubelet[2744]: I0416 04:31:11.518069 2744 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 04:31:11.526128 kubelet[2744]: I0416 04:31:11.518878 2744 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 04:31:11.526128 kubelet[2744]: I0416 04:31:11.520748 2744 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 04:31:11.526128 kubelet[2744]: I0416 04:31:11.525532 2744 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 04:31:11.697767 kubelet[2744]: E0416 04:31:11.697344 2744 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:31:11.722845 kubelet[2744]: I0416 04:31:11.718746 2744 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 04:31:11.899445 kubelet[2744]: E0416 04:31:11.898430 2744 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 04:31:11.903578 kubelet[2744]: I0416 04:31:11.899620 2744 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 16 04:31:12.216700 kubelet[2744]: I0416 04:31:12.210163 2744 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 04:31:12.227088 kubelet[2744]: I0416 04:31:12.226686 2744 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 04:31:12.228003 kubelet[2744]: I0416 04:31:12.227058 2744 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 04:31:12.283683 kubelet[2744]: I0416 04:31:12.228246 2744 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 04:31:12.283683 kubelet[2744]: I0416 04:31:12.228262 2744 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 04:31:12.288019 kubelet[2744]: I0416 04:31:12.286947 2744 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 04:31:12.319878 kubelet[2744]: I0416 04:31:12.318318 2744 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 04:31:12.324684 kubelet[2744]: I0416 04:31:12.323220 2744 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 04:31:12.324684 kubelet[2744]: I0416 04:31:12.323945 2744 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 04:31:12.324684 kubelet[2744]: I0416 04:31:12.324416 2744 kubelet.go:387] "Adding apiserver pod source"
Apr 16 04:31:12.324684 kubelet[2744]: I0416 04:31:12.324565 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 04:31:12.337576 kubelet[2744]: E0416 04:31:12.337260 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:31:12.337576 kubelet[2744]: E0416 04:31:12.337331 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:31:12.351057 kubelet[2744]: I0416 04:31:12.350117 2744 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 16 04:31:12.357117 kubelet[2744]: I0416 04:31:12.356927 2744 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 04:31:12.358807 kubelet[2744]: I0416 04:31:12.357124 2744 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 04:31:12.359270 kubelet[2744]: W0416 04:31:12.359106 2744 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 04:31:12.435536 kubelet[2744]: I0416 04:31:12.434919 2744 server.go:1262] "Started kubelet"
Apr 16 04:31:12.448072 kubelet[2744]: I0416 04:31:12.436448 2744 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 04:31:12.448072 kubelet[2744]: I0416 04:31:12.436693 2744 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 04:31:12.448072 kubelet[2744]: I0416 04:31:12.436800 2744 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 04:31:12.455274 kubelet[2744]: I0416 04:31:12.454773 2744 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 04:31:12.460842 kubelet[2744]: I0416 04:31:12.459842 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 04:31:12.474291 kubelet[2744]: I0416 04:31:12.472201 2744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 04:31:12.518272 kubelet[2744]: I0416 04:31:12.513681 2744 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 04:31:12.547914 kubelet[2744]: E0416 04:31:12.537806 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc0e3f6a9835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,LastTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:31:12.553992 kubelet[2744]: E0416 04:31:12.550652 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:12.562412 kubelet[2744]: E0416 04:31:12.562351 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms"
Apr 16 04:31:12.562924 kubelet[2744]: I0416 04:31:12.562909 2744 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 04:31:12.563295 kubelet[2744]: I0416 04:31:12.563278 2744 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 04:31:12.584981 kubelet[2744]: I0416 04:31:12.584796 2744 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 04:31:12.610667 kubelet[2744]: E0416 04:31:12.609430 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:31:12.677553 kubelet[2744]: E0416 04:31:12.672885 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:12.700021 kubelet[2744]: I0416 04:31:12.699268 2744 factory.go:223] Registration of the systemd container factory successfully
Apr 16 04:31:12.708845 kubelet[2744]: I0416 04:31:12.708573 2744 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 04:31:12.732269 kubelet[2744]: I0416 04:31:12.731171 2744 factory.go:223] Registration of the containerd container factory successfully
Apr 16 04:31:12.749315 kubelet[2744]: E0416 04:31:12.747127 2744 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 04:31:12.776402 kubelet[2744]: E0416 04:31:12.776124 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:12.779536 kubelet[2744]: E0416 04:31:12.778716 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms"
Apr 16 04:31:12.877621 kubelet[2744]: E0416 04:31:12.877124 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:12.947774 kubelet[2744]: I0416 04:31:12.947521 2744 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 04:31:12.963188 kubelet[2744]: I0416 04:31:12.962057 2744 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 04:31:12.964627 kubelet[2744]: I0416 04:31:12.962650 2744 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 04:31:12.964972 kubelet[2744]: I0416 04:31:12.964960 2744 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 04:31:12.965248 kubelet[2744]: E0416 04:31:12.965166 2744 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 04:31:12.997590 kubelet[2744]: E0416 04:31:12.982638 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 04:31:12.983361 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 16 04:31:13.066549 kubelet[2744]: E0416 04:31:12.998454 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:31:13.066549 kubelet[2744]: E0416 04:31:13.065859 2744 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 04:31:13.066549 kubelet[2744]: I0416 04:31:13.066543 2744 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 04:31:13.067115 kubelet[2744]: I0416 04:31:13.066562 2744 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 04:31:13.067115 kubelet[2744]: I0416 04:31:13.066743 2744 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 04:31:13.077189 kubelet[2744]: I0416 04:31:13.077000 2744 policy_none.go:49] "None policy: Start"
Apr 16 04:31:13.084446 kubelet[2744]: I0416 04:31:13.080980 2744 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 04:31:13.084446 kubelet[2744]: I0416 04:31:13.081859 2744 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 04:31:13.084446 kubelet[2744]: E0416 04:31:13.083871 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.098377 kubelet[2744]: I0416 04:31:13.098118 2744 policy_none.go:47] "Start" Apr 16 04:31:13.144305 systemd-tmpfiles[2778]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 04:31:13.147940 systemd-tmpfiles[2778]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 04:31:13.148954 systemd-tmpfiles[2778]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 04:31:13.150895 systemd-tmpfiles[2778]: ACLs are not supported, ignoring. Apr 16 04:31:13.150981 systemd-tmpfiles[2778]: ACLs are not supported, ignoring. Apr 16 04:31:13.155093 systemd-tmpfiles[2778]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 16 04:31:13.155123 systemd-tmpfiles[2778]: Skipping /boot Apr 16 04:31:13.194389 kubelet[2744]: E0416 04:31:13.190460 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.253074 kubelet[2744]: E0416 04:31:13.252616 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:13.253074 kubelet[2744]: E0416 04:31:13.252876 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Apr 16 04:31:13.252658 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 16 04:31:13.253597 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 16 04:31:13.267058 kubelet[2744]: E0416 04:31:13.266219 2744 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:31:13.280833 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 16 04:31:13.306817 kubelet[2744]: E0416 04:31:13.306340 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.439157 kubelet[2744]: E0416 04:31:13.438166 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.553922 kubelet[2744]: E0416 04:31:13.552779 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.589136 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 04:31:13.665411 kubelet[2744]: E0416 04:31:13.660416 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.696021 kubelet[2744]: E0416 04:31:13.667233 2744 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:31:13.696021 kubelet[2744]: E0416 04:31:13.670545 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:31:13.738395 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 04:31:13.768515 kubelet[2744]: E0416 04:31:13.767919 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.890872 kubelet[2744]: E0416 04:31:13.809210 2744 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:31:13.892535 kubelet[2744]: E0416 04:31:13.892137 2744 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:31:13.892607 kubelet[2744]: E0416 04:31:13.892567 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:31:13.897340 kubelet[2744]: I0416 04:31:13.896815 2744 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:31:13.913843 kubelet[2744]: I0416 04:31:13.910644 2744 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:31:13.920982 kubelet[2744]: I0416 04:31:13.920920 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:31:13.922571 kubelet[2744]: E0416 04:31:13.922363 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:13.967648 kubelet[2744]: E0416 04:31:13.967459 2744 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 04:31:13.972426 kubelet[2744]: E0416 04:31:13.972320 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:31:14.107800 kubelet[2744]: E0416 04:31:14.107340 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Apr 16 04:31:14.115443 kubelet[2744]: I0416 04:31:14.115387 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:14.120121 kubelet[2744]: E0416 04:31:14.119609 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 04:31:14.405599 kubelet[2744]: I0416 04:31:14.405190 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:14.410293 kubelet[2744]: E0416 04:31:14.408344 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:14.410542 kubelet[2744]: E0416 04:31:14.410156 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 04:31:14.539897 kubelet[2744]: I0416 04:31:14.538961 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cc8263309e5e610f62f7f401f49f55d-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"5cc8263309e5e610f62f7f401f49f55d\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:31:14.539897 kubelet[2744]: I0416 04:31:14.539577 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cc8263309e5e610f62f7f401f49f55d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5cc8263309e5e610f62f7f401f49f55d\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:31:14.539897 kubelet[2744]: I0416 04:31:14.540072 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cc8263309e5e610f62f7f401f49f55d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5cc8263309e5e610f62f7f401f49f55d\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:31:14.658670 kubelet[2744]: I0416 04:31:14.655785 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:14.658670 kubelet[2744]: I0416 04:31:14.657317 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:14.658670 kubelet[2744]: I0416 04:31:14.657559 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:14.658670 kubelet[2744]: I0416 04:31:14.658312 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:14.658670 kubelet[2744]: I0416 04:31:14.658337 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:31:14.836308 kubelet[2744]: I0416 04:31:14.836123 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:14.839337 kubelet[2744]: E0416 04:31:14.836890 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 04:31:14.902316 kubelet[2744]: I0416 04:31:14.901663 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:31:14.916361 kubelet[2744]: E0416 04:31:14.916027 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:14.917244 systemd[1]: Created slice kubepods-burstable-pod5cc8263309e5e610f62f7f401f49f55d.slice - libcontainer container kubepods-burstable-pod5cc8263309e5e610f62f7f401f49f55d.slice. Apr 16 04:31:15.097639 kubelet[2744]: E0416 04:31:15.097348 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:15.127611 kubelet[2744]: E0416 04:31:15.125623 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:15.171348 containerd[1480]: time="2026-04-16T04:31:15.162973631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5cc8263309e5e610f62f7f401f49f55d,Namespace:kube-system,Attempt:0,}" Apr 16 04:31:15.173095 systemd[1]: Created slice kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice - libcontainer container kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice. Apr 16 04:31:15.263268 kubelet[2744]: E0416 04:31:15.262718 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:15.273910 systemd[1]: Created slice kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice - libcontainer container kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice. 
Apr 16 04:31:15.332015 kubelet[2744]: E0416 04:31:15.331354 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:15.334063 kubelet[2744]: E0416 04:31:15.333901 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:15.342846 containerd[1480]: time="2026-04-16T04:31:15.341339666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:82faa9ca0765979bc0118d46e6420ed8,Namespace:kube-system,Attempt:0,}" Apr 16 04:31:15.345849 kubelet[2744]: E0416 04:31:15.345757 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:15.350552 containerd[1480]: time="2026-04-16T04:31:15.350258092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:66a243c17a59d09458bf3b09d66260f5,Namespace:kube-system,Attempt:0,}" Apr 16 04:31:15.807868 kubelet[2744]: E0416 04:31:15.807268 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="3.2s" Apr 16 04:31:16.099055 kubelet[2744]: I0416 04:31:16.098693 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:16.121027 kubelet[2744]: E0416 04:31:16.119255 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 04:31:16.121616 kubelet[2744]: E0416 04:31:16.121296 2744 reflector.go:205] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:31:16.504957 kubelet[2744]: E0416 04:31:16.499340 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:17.598303 kubelet[2744]: E0416 04:31:17.597721 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:17.598303 kubelet[2744]: E0416 04:31:17.597677 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bc0e3f6a9835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,LastTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:31:17.795355 kubelet[2744]: I0416 04:31:17.794923 2744 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:17.799277 kubelet[2744]: E0416 04:31:17.796349 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 04:31:17.977971 kubelet[2744]: E0416 04:31:17.968596 2744 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:31:18.514132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595043691.mount: Deactivated successfully. Apr 16 04:31:18.591830 containerd[1480]: time="2026-04-16T04:31:18.586318108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:31:18.610924 containerd[1480]: time="2026-04-16T04:31:18.610788770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 04:31:18.668761 containerd[1480]: time="2026-04-16T04:31:18.665428332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 04:31:18.668761 containerd[1480]: time="2026-04-16T04:31:18.668640811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:31:18.684975 containerd[1480]: time="2026-04-16T04:31:18.684772856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:31:18.691061 containerd[1480]: time="2026-04-16T04:31:18.690712261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 04:31:18.885555 containerd[1480]: time="2026-04-16T04:31:18.884271476Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:31:19.126062 kubelet[2744]: E0416 04:31:19.113412 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="6.4s" Apr 16 04:31:19.150651 containerd[1480]: time="2026-04-16T04:31:19.139330869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:31:19.359099 containerd[1480]: time="2026-04-16T04:31:19.357987070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.007279496s" Apr 16 04:31:19.493842 kubelet[2744]: E0416 04:31:19.491188 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 
04:31:19.506597 kubelet[2744]: E0416 04:31:19.498795 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:19.521052 containerd[1480]: time="2026-04-16T04:31:19.520710784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.356174869s" Apr 16 04:31:19.867621 containerd[1480]: time="2026-04-16T04:31:19.867174536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.525454438s" Apr 16 04:31:20.346974 containerd[1480]: time="2026-04-16T04:31:20.339870869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:31:20.346974 containerd[1480]: time="2026-04-16T04:31:20.340009772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:31:20.346974 containerd[1480]: time="2026-04-16T04:31:20.340034152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:31:20.346974 containerd[1480]: time="2026-04-16T04:31:20.340219210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:31:20.366839 containerd[1480]: time="2026-04-16T04:31:20.361329327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:31:20.366839 containerd[1480]: time="2026-04-16T04:31:20.361557204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:31:20.366839 containerd[1480]: time="2026-04-16T04:31:20.361574002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:31:20.366839 containerd[1480]: time="2026-04-16T04:31:20.361664324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:31:20.497284 containerd[1480]: time="2026-04-16T04:31:20.496328611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 04:31:20.497284 containerd[1480]: time="2026-04-16T04:31:20.497217303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 04:31:20.497284 containerd[1480]: time="2026-04-16T04:31:20.497318813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:31:20.503135 containerd[1480]: time="2026-04-16T04:31:20.497900414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 04:31:20.814033 systemd[1]: Started cri-containerd-088626ec656781b4f4cd0e095fbe8c9c3aef97c5d3b2fed8398328e71e392cff.scope - libcontainer container 088626ec656781b4f4cd0e095fbe8c9c3aef97c5d3b2fed8398328e71e392cff. 
Apr 16 04:31:20.884364 systemd[1]: Started cri-containerd-644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914.scope - libcontainer container 644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914. Apr 16 04:31:21.052122 systemd[1]: Started cri-containerd-e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7.scope - libcontainer container e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7. Apr 16 04:31:21.271367 kubelet[2744]: I0416 04:31:21.213617 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:21.279427 kubelet[2744]: E0416 04:31:21.279386 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 04:31:21.284962 kubelet[2744]: E0416 04:31:21.284154 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:21.359882 kubelet[2744]: E0416 04:31:21.359695 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:21.548382 containerd[1480]: time="2026-04-16T04:31:21.547104452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5cc8263309e5e610f62f7f401f49f55d,Namespace:kube-system,Attempt:0,} returns sandbox id \"088626ec656781b4f4cd0e095fbe8c9c3aef97c5d3b2fed8398328e71e392cff\"" Apr 16 04:31:21.548382 containerd[1480]: 
time="2026-04-16T04:31:21.547282774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:82faa9ca0765979bc0118d46e6420ed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\"" Apr 16 04:31:21.548382 containerd[1480]: time="2026-04-16T04:31:21.547301971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:66a243c17a59d09458bf3b09d66260f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7\"" Apr 16 04:31:21.568896 kubelet[2744]: E0416 04:31:21.567211 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:21.699113 kubelet[2744]: E0416 04:31:21.697372 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:21.735647 kubelet[2744]: E0416 04:31:21.734971 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:22.253175 containerd[1480]: time="2026-04-16T04:31:22.252024439Z" level=info msg="CreateContainer within sandbox \"e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 04:31:22.268249 containerd[1480]: time="2026-04-16T04:31:22.260917047Z" level=info msg="CreateContainer within sandbox \"088626ec656781b4f4cd0e095fbe8c9c3aef97c5d3b2fed8398328e71e392cff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 04:31:22.346316 containerd[1480]: time="2026-04-16T04:31:22.344932440Z" level=info msg="CreateContainer within sandbox 
\"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 04:31:22.767925 containerd[1480]: time="2026-04-16T04:31:22.767790614Z" level=info msg="CreateContainer within sandbox \"e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\"" Apr 16 04:31:22.770822 containerd[1480]: time="2026-04-16T04:31:22.770760015Z" level=info msg="StartContainer for \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\"" Apr 16 04:31:22.772690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831268690.mount: Deactivated successfully. Apr 16 04:31:22.875010 containerd[1480]: time="2026-04-16T04:31:22.874804496Z" level=info msg="CreateContainer within sandbox \"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\"" Apr 16 04:31:22.875010 containerd[1480]: time="2026-04-16T04:31:22.874826756Z" level=info msg="CreateContainer within sandbox \"088626ec656781b4f4cd0e095fbe8c9c3aef97c5d3b2fed8398328e71e392cff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c09a95f34c0313027d68a9b00c67981b9bf301254cd329093300d7ec4ed0e08\"" Apr 16 04:31:22.925731 containerd[1480]: time="2026-04-16T04:31:22.918213644Z" level=info msg="StartContainer for \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\"" Apr 16 04:31:22.950154 containerd[1480]: time="2026-04-16T04:31:22.947954383Z" level=info msg="StartContainer for \"2c09a95f34c0313027d68a9b00c67981b9bf301254cd329093300d7ec4ed0e08\"" Apr 16 04:31:23.002968 systemd[1]: Started cri-containerd-4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7.scope - libcontainer 
container 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7. Apr 16 04:31:23.326391 systemd[1]: Started cri-containerd-062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51.scope - libcontainer container 062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51. Apr 16 04:31:23.490289 systemd[1]: Started cri-containerd-2c09a95f34c0313027d68a9b00c67981b9bf301254cd329093300d7ec4ed0e08.scope - libcontainer container 2c09a95f34c0313027d68a9b00c67981b9bf301254cd329093300d7ec4ed0e08. Apr 16 04:31:23.986226 kubelet[2744]: E0416 04:31:23.985370 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:31:24.396020 containerd[1480]: time="2026-04-16T04:31:24.385739743Z" level=info msg="StartContainer for \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" returns successfully" Apr 16 04:31:24.704440 containerd[1480]: time="2026-04-16T04:31:24.675850728Z" level=info msg="StartContainer for \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" returns successfully" Apr 16 04:31:25.354367 containerd[1480]: time="2026-04-16T04:31:25.351752663Z" level=info msg="StartContainer for \"2c09a95f34c0313027d68a9b00c67981b9bf301254cd329093300d7ec4ed0e08\" returns successfully" Apr 16 04:31:26.004758 kubelet[2744]: E0416 04:31:25.999622 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="7s" Apr 16 04:31:27.053450 kubelet[2744]: E0416 04:31:27.053098 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:27.155517 kubelet[2744]: E0416 04:31:27.146297 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:27.761621 kubelet[2744]: E0416 04:31:27.758375 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:28.137719 kubelet[2744]: E0416 04:31:27.954754 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:28.345302 kubelet[2744]: I0416 04:31:28.344432 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:30.533453 kubelet[2744]: E0416 04:31:30.532994 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:30.553614 kubelet[2744]: E0416 04:31:30.553294 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:31.462713 kubelet[2744]: E0416 04:31:31.462368 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:31.462713 kubelet[2744]: E0416 04:31:31.462896 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:32.667267 kubelet[2744]: E0416 04:31:32.666871 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:32.771081 kubelet[2744]: E0416 04:31:32.748331 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:33.973619 kubelet[2744]: E0416 04:31:33.973290 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:34.044195 kubelet[2744]: E0416 04:31:33.985672 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:34.067822 kubelet[2744]: E0416 04:31:34.067365 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:31:34.100358 kubelet[2744]: E0416 04:31:34.096677 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:34.100358 kubelet[2744]: E0416 04:31:34.097053 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:34.163714 kubelet[2744]: E0416 04:31:34.161405 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:34.163714 kubelet[2744]: E0416 04:31:34.161789 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:35.950404 kubelet[2744]: E0416 04:31:35.946768 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:36.031537 kubelet[2744]: E0416 04:31:36.030936 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:36.386861 kubelet[2744]: E0416 04:31:36.383296 2744 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:31:37.499159 kubelet[2744]: E0416 04:31:37.497056 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:31:37.715661 kubelet[2744]: E0416 04:31:37.714911 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc0e3f6a9835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,LastTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:31:38.462195 kubelet[2744]: E0416 04:31:38.453742 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 04:31:39.455830 kubelet[2744]: E0416 04:31:39.455322 2744 
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:31:39.875394 kubelet[2744]: E0416 04:31:39.871746 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:31:43.012329 kubelet[2744]: E0416 04:31:43.011936 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:31:43.243150 kubelet[2744]: E0416 04:31:43.241014 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:31:44.239754 kubelet[2744]: E0416 04:31:44.238121 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:31:44.617547 kubelet[2744]: E0416 04:31:44.612107 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:44.617547 kubelet[2744]: E0416 04:31:44.612951 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 
04:31:45.758734 kubelet[2744]: I0416 04:31:45.757688 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:31:49.578831 kubelet[2744]: E0416 04:31:49.576923 2744 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:31:49.578831 kubelet[2744]: E0416 04:31:49.578435 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:31:54.269123 kubelet[2744]: E0416 04:31:54.268582 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:31:56.995140 kubelet[2744]: E0416 04:31:56.268186 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 04:31:58.910905 kubelet[2744]: E0416 04:31:58.062000 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc0e3f6a9835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,LastTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:32:00.656266 kubelet[2744]: E0416 04:32:00.654967 2744 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:32:03.970296 kubelet[2744]: E0416 04:32:03.968539 2744 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:32:03.970296 kubelet[2744]: E0416 04:32:03.969800 2744 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:32:04.307364 kubelet[2744]: E0416 04:32:04.300361 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:32:05.260205 kubelet[2744]: I0416 04:32:05.259831 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:32:09.157624 kubelet[2744]: E0416 04:32:09.157193 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:32:09.247659 kubelet[2744]: E0416 04:32:09.244390 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:32:12.676292 kubelet[2744]: E0416 04:32:12.673048 2744 reflector.go:205] "Failed 
to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:32:14.350432 kubelet[2744]: E0416 04:32:14.350098 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:32:14.929046 kubelet[2744]: E0416 04:32:14.928250 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:32:15.355398 kubelet[2744]: E0416 04:32:15.353151 2744 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 04:32:18.706920 kubelet[2744]: E0416 04:32:18.706452 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: TLS handshake timeout (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:32:19.155760 kubelet[2744]: E0416 04:32:19.145123 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc0e3f6a9835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,LastTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:32:22.818911 kubelet[2744]: I0416 04:32:22.813100 2744 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:32:24.374932 kubelet[2744]: E0416 04:32:24.370996 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:32:34.985524 kubelet[2744]: E0416 04:32:34.967060 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:32:36.217571 kubelet[2744]: E0416 04:32:36.215042 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:32:43.584841 kubelet[2744]: I0416 04:32:43.584324 2744 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:32:43.584841 kubelet[2744]: E0416 04:32:43.585217 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 04:32:44.388599 kubelet[2744]: E0416 04:32:44.220573 2744 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6bc0e3f6a9835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,LastTimestamp:2026-04-16 04:31:12.423753781 +0000 UTC m=+4.588104694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:32:45.045930 kubelet[2744]: E0416 04:32:45.045393 2744 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:32:48.658095 kubelet[2744]: E0416 04:32:48.558232 2744 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6bc0e52b04ec5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.747089605 +0000 UTC m=+4.911440500,LastTimestamp:2026-04-16 04:31:12.747089605 +0000 UTC m=+4.911440500,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:32:49.908224 kubelet[2744]: E0416 04:32:49.897425 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:50.112454 kubelet[2744]: E0416 04:32:50.039290 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:50.263252 kubelet[2744]: E0416 04:32:50.260936 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:50.443561 kubelet[2744]: E0416 04:32:50.433926 2744 kubelet_node_status.go:404] "Error getting the 
current node from lister" err="node \"localhost\" not found" Apr 16 04:32:50.654063 kubelet[2744]: E0416 04:32:50.560874 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:50.681710 kubelet[2744]: E0416 04:32:50.681110 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:50.990291 kubelet[2744]: E0416 04:32:50.989173 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:51.157401 kubelet[2744]: E0416 04:32:51.157267 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:51.357369 kubelet[2744]: E0416 04:32:51.313206 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:51.571969 kubelet[2744]: E0416 04:32:51.552056 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:51.571969 kubelet[2744]: E0416 04:32:51.451319 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="7s" Apr 16 04:32:51.713045 kubelet[2744]: E0416 04:32:51.708757 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:52.074151 kubelet[2744]: E0416 04:32:52.000154 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:52.307006 kubelet[2744]: E0416 04:32:52.269406 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:52.426964 kubelet[2744]: E0416 04:32:52.425300 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 
04:32:52.643046 kubelet[2744]: E0416 04:32:52.642304 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:52.790900 kubelet[2744]: E0416 04:32:52.752276 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:53.008595 kubelet[2744]: E0416 04:32:53.006511 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:53.496344 kubelet[2744]: E0416 04:32:53.494961 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:53.496344 kubelet[2744]: E0416 04:32:53.494939 2744 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12.966524674 +0000 UTC m=+5.130875582,LastTimestamp:2026-04-16 04:31:12.966524674 +0000 UTC m=+5.130875582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:32:53.768802 kubelet[2744]: E0416 04:32:53.751315 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:53.909229 kubelet[2744]: E0416 04:32:53.908307 2744 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:32:54.253312 kubelet[2744]: I0416 04:32:54.201297 2744 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Apr 16 04:32:54.512166 kubelet[2744]: I0416 04:32:54.506923 2744 apiserver.go:52] "Watching apiserver" Apr 16 04:32:54.963755 kubelet[2744]: I0416 04:32:54.963220 2744 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 04:32:55.262573 kubelet[2744]: I0416 04:32:55.235825 2744 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:32:56.052631 kubelet[2744]: I0416 04:32:56.049052 2744 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 04:32:57.034702 kubelet[2744]: E0416 04:32:57.033607 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.037s" Apr 16 04:32:59.151851 kubelet[2744]: E0416 04:32:59.151723 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.118s" Apr 16 04:32:59.340262 kubelet[2744]: I0416 04:32:59.336851 2744 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:33:00.284375 kubelet[2744]: E0416 04:33:00.284246 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.132s" Apr 16 04:33:03.656078 kubelet[2744]: E0416 04:33:03.655788 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:33:05.336443 kubelet[2744]: E0416 04:33:05.324964 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:33:05.862868 kubelet[2744]: E0416 04:33:05.862596 2744 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:33:06.368219 kubelet[2744]: E0416 04:33:06.156402 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:33:06.910095 kubelet[2744]: E0416 04:33:06.453065 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.701s" Apr 16 04:33:08.933205 kubelet[2744]: E0416 04:33:08.928042 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.436s" Apr 16 04:33:10.355240 kubelet[2744]: E0416 04:33:10.351727 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.412s" Apr 16 04:33:10.583710 kubelet[2744]: E0416 04:33:10.581566 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:33:12.035984 kubelet[2744]: I0416 04:33:11.998873 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=17.996633921 podStartE2EDuration="17.996633921s" podCreationTimestamp="2026-04-16 04:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:33:10.759755783 +0000 UTC m=+122.924106697" watchObservedRunningTime="2026-04-16 04:33:11.996633921 +0000 UTC m=+124.160984820" Apr 16 04:33:16.811145 kubelet[2744]: E0416 04:33:16.808389 2744 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 16 04:33:17.720218 kubelet[2744]: E0416 04:33:17.713409 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="4.175s" Apr 16 04:33:19.264632 kubelet[2744]: I0416 04:33:19.263968 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=24.263910916 podStartE2EDuration="24.263910916s" podCreationTimestamp="2026-04-16 04:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:33:12.487817618 +0000 UTC m=+124.652168518" watchObservedRunningTime="2026-04-16 04:33:19.263910916 +0000 UTC m=+131.428261825" Apr 16 04:33:23.865810 kubelet[2744]: E0416 04:33:23.849150 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:33:28.646332 kubelet[2744]: E0416 04:33:28.645821 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.735s" Apr 16 04:33:30.834525 kubelet[2744]: E0416 04:33:30.829000 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:33:33.246875 kubelet[2744]: E0416 04:33:33.239966 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.593s" Apr 16 04:33:39.674913 kubelet[2744]: E0416 04:33:39.461257 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:33:53.609442 kubelet[2744]: E0416 04:33:53.336392 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:33:57.671913 
kubelet[2744]: E0416 04:33:57.664262 2744 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:34:10.812917 kubelet[2744]: E0416 04:34:09.967560 2744 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 16 04:34:12.054124 kubelet[2744]: E0416 04:34:11.151553 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:34:12.877436 kubelet[2744]: E0416 04:34:11.346847 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="38.107s" Apr 16 04:34:25.375041 kubelet[2744]: E0416 04:34:24.766521 2744 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 16 04:34:31.969794 kubelet[2744]: E0416 04:34:31.964275 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:34:38.670285 kubelet[2744]: E0416 04:34:38.664163 2744 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 16 04:34:48.162049 kubelet[2744]: E0416 04:34:46.051383 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Apr 16 04:34:52.891937 kubelet[2744]: E0416 04:34:52.081369 2744 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 16 04:34:54.978193 kubelet[2744]: I0416 04:34:54.962197 2744 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 16 04:35:00.085360 kubelet[2744]: E0416 04:35:00.072821 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="47.741s" Apr 16 04:35:01.757168 kubelet[2744]: E0416 04:35:01.086443 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:35:04.098405 kubelet[2744]: E0416 04:35:04.097167 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:35:05.468266 kubelet[2744]: E0416 04:35:04.760417 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:35:31.524107 kubelet[2744]: E0416 04:35:29.237296 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 16 04:35:37.292796 kubelet[2744]: E0416 04:35:37.289847 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:35:44.696436 kubelet[2744]: E0416 04:35:44.683564 2744 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:35:52.472159 kubelet[2744]: E0416 04:35:52.469972 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 16 04:35:55.296274 kubelet[2744]: E0416 04:35:55.280440 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:36:12.603376 kubelet[2744]: E0416 04:36:12.597918 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:36:18.766522 kubelet[2744]: E0416 04:36:16.795166 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="800ms" Apr 16 04:36:37.084387 kubelet[2744]: E0416 04:36:35.093234 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:36:42.795076 kubelet[2744]: E0416 04:36:41.180221 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Apr 16 04:36:51.759620 kubelet[2744]: E0416 04:36:51.648437 2744 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a 
response in the time allotted, but may still be processing the request (get pods kube-scheduler-localhost)" podUID="66a243c17a59d09458bf3b09d66260f5" pod="kube-system/kube-scheduler-localhost" Apr 16 04:36:55.314924 kubelet[2744]: E0416 04:36:54.853351 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="3.2s" Apr 16 04:36:56.612182 kubelet[2744]: E0416 04:36:56.164370 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:37:04.545071 kubelet[2744]: E0416 04:37:04.356421 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2m1.208s" Apr 16 04:37:09.068159 kubelet[2744]: E0416 04:37:08.014263 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:37:12.910973 kubelet[2744]: E0416 04:37:12.865084 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Apr 16 04:37:18.048221 kubelet[2744]: E0416 04:37:16.609964 2744 event.go:359] "Server rejected event (will not retry!)" err="the server was unable to return a response in the time allotted, but may still be processing the request (patch events localhost.18a6bc0e5fc49f02)" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 115 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.657862467 +0000 UTC m=+6.822213373,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:37:35.404360 kubelet[2744]: E0416 04:37:34.030080 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:37:41.736184 kubelet[2744]: E0416 04:37:41.170133 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:38:00.000986 kubelet[2744]: E0416 04:37:58.260791 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:38:06.593382 kubelet[2744]: E0416 04:38:06.579457 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:38:16.485429 kubelet[2744]: E0416 04:38:16.270977 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:38:26.441016 kubelet[2744]: E0416 04:38:25.233132 2744 status_manager.go:1018] 
"Failed to get status for pod" err="stream error when reading response body, may be caused by closed connection. Please retry. Original error: stream error: stream ID 161; INTERNAL_ERROR; received from peer" podUID="66a243c17a59d09458bf3b09d66260f5" pod="kube-system/kube-scheduler-localhost" Apr 16 04:38:29.969363 kubelet[2744]: E0416 04:38:29.291990 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m23.851s" Apr 16 04:38:32.092082 kubelet[2744]: E0416 04:38:31.899564 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:38:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:38:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:38:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:38:15Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.7:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:38:33.642358 kubelet[2744]: E0416 04:38:32.003279 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:38:38.859742 kubelet[2744]: E0416 04:38:38.840361 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:38:44.657603 kubelet[2744]: E0416 04:38:44.060673 2744 
event.go:359] "Server rejected event (will not retry!)" err="the server was unable to return a response in the time allotted, but may still be processing the request (patch events localhost.18a6bc0e5fc4d9e9)" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc4d9e9 default 118 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.657966865 +0000 UTC m=+6.822317772,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:38:48.486197 kubelet[2744]: E0416 04:38:47.722085 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 16 04:38:54.887117 kubelet[2744]: E0416 04:38:54.885909 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:39:01.453374 kubelet[2744]: E0416 04:39:01.442790 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:39:04.423052 kubelet[2744]: E0416 04:39:03.283329 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" Apr 16 04:39:15.189019 kubelet[2744]: E0416 04:39:12.423289 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:39:26.394800 kubelet[2744]: E0416 04:39:24.887434 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:39:44.352075 kubelet[2744]: E0416 04:39:44.340289 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:39:47.777939 kubelet[2744]: E0416 04:39:47.759820 2744 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 16 04:39:52.615000 kubelet[2744]: E0416 04:39:52.608952 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:39:57.205191 kubelet[2744]: E0416 04:39:57.089337 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m27.055s" Apr 16 04:39:58.281101 kubelet[2744]: E0416 04:39:57.464425 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:39:59.791774 kubelet[2744]: E0416 04:39:59.789325 2744 kubelet.go:2452] "Skipping pod 
synchronization" err="container runtime is down" Apr 16 04:40:08.068109 kubelet[2744]: E0416 04:40:08.060573 2744 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-scheduler-localhost)" podUID="66a243c17a59d09458bf3b09d66260f5" pod="kube-system/kube-scheduler-localhost" Apr 16 04:40:10.012283 kubelet[2744]: E0416 04:40:10.003302 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:40:11.849202 kubelet[2744]: E0416 04:40:11.846401 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:40:17.078373 kubelet[2744]: E0416 04:40:15.394388 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:40:26.514320 kubelet[2744]: E0416 04:40:26.399024 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:40:26.940146 kubelet[2744]: E0416 04:40:26.569372 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:40:27.815350 kubelet[2744]: E0416 04:40:27.814453 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:40:37.514586 kubelet[2744]: E0416 04:40:37.465433 2744 kubelet_node_status.go:486] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:40:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:40:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:40:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:40:16Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.7:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:40:39.364640 kubelet[2744]: E0416 04:40:39.357369 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:40:42.308491 kubelet[2744]: E0416 04:40:40.479444 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:40:47.891721 kubelet[2744]: E0416 04:40:47.832036 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="42.761s" Apr 16 04:40:50.783362 kubelet[2744]: E0416 04:40:50.768133 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:40:55.266054 kubelet[2744]: E0416 04:40:54.288315 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node 
\"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:40:59.343222 kubelet[2744]: E0416 04:40:55.569356 2744 event.go:359] "Server rejected event (will not retry!)" err="the server was unable to return a response in the time allotted, but may still be processing the request (patch events localhost.18a6bc0e5fc4ecaf)" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc4ecaf default 123 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.657989704 +0000 UTC m=+6.822340617,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:41:05.812288 kubelet[2744]: E0416 04:41:05.611455 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 04:41:07.767020 kubelet[2744]: E0416 04:41:07.762449 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 16 04:41:12.400279 kubelet[2744]: E0416 04:41:12.314161 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 
04:41:20.742557 kubelet[2744]: E0416 04:41:20.731153 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 16 04:41:25.959738 kubelet[2744]: E0416 04:41:25.943868 2744 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-scheduler-localhost)" podUID="66a243c17a59d09458bf3b09d66260f5" pod="kube-system/kube-scheduler-localhost" Apr 16 04:41:33.701762 kubelet[2744]: E0416 04:41:31.493132 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:41:35.680992 kubelet[2744]: E0416 04:41:33.695858 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:41:49.595059 kubelet[2744]: E0416 04:41:49.566092 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 16 04:41:52.195631 kubelet[2744]: E0416 04:41:49.687173 2744 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 16 04:42:07.782297 kubelet[2744]: E0416 04:42:04.089210 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:42:09.346190 kubelet[2744]: E0416 04:42:09.100461 2744 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:42:30.676758 kubelet[2744]: E0416 04:42:19.279752 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.7:6443/api/v1/namespaces/default/events/localhost.18a6bc0e5fc49f02\": stream error: stream ID 211; INTERNAL_ERROR; received from peer" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 115 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.659211229 +0000 UTC m=+6.823562135,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:42:38.045268 kubelet[2744]: E0416 04:42:37.020761 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:42:40.582295 kubelet[2744]: E0416 04:42:40.265328 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:43:05.952133 kubelet[2744]: E0416 04:43:04.191584 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 
16 04:43:15.543387 kubelet[2744]: E0416 04:43:13.816348 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:43:18.477186 kubelet[2744]: E0416 04:43:18.467608 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2m29.617s" Apr 16 04:43:20.152458 kubelet[2744]: E0416 04:43:19.990418 2744 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 16 04:43:25.662107 kubelet[2744]: E0416 04:43:25.545139 2744 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 16 04:43:40.096372 kubelet[2744]: E0416 04:43:38.318082 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:42:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:42:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:42:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:42:46Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.7:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 16 04:43:42.460339 kubelet[2744]: E0416 04:43:41.384993 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:43:43.482757 kubelet[2744]: E0416 04:43:43.398281 2744 kubelet.go:3012] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:43:56.465353 kubelet[2744]: E0416 04:43:56.454253 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:43:56.465353 kubelet[2744]: I0416 04:43:56.465397 2744 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 16 04:43:58.616667 kubelet[2744]: I0416 04:43:57.359223 2744 reflector.go:571] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 16 04:43:58.616667 kubelet[2744]: E0416 04:43:55.495604 2744 reflector.go:205] "Failed to watch" err="Get \"https://10.0.0.7:6443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dlocalhost&resourceVersion=124&timeout=9m25s&timeoutSeconds=565&watch=true\": http2: client connection lost" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:43:58.616667 kubelet[2744]: I0416 04:43:57.776362 2744 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 16 04:44:00.772244 kubelet[2744]: E0416 04:44:00.711321 2744 reflector.go:205] "Failed to watch" err="Get 
\"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=124&timeout=9m35s&timeoutSeconds=575&watch=true\": http2: client connection lost" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:44:06.202679 kubelet[2744]: E0416 04:44:00.293401 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.7:6443/api/v1/namespaces/default/events/localhost.18a6bc0e5fc49f02\": http2: client connection lost" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 115 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.659211229 +0000 UTC m=+6.823562135,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:44:08.335433 kubelet[2744]: E0416 04:44:08.171865 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 04:44:11.365092 kubelet[2744]: E0416 04:44:11.283432 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:44:14.604089 containerd[1480]: time="2026-04-16T04:44:14.406418266Z" level=info msg="StopContainer for \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" with timeout 30 (s)" Apr 16 04:44:17.156687 containerd[1480]: 
time="2026-04-16T04:44:17.086731419Z" level=info msg="Stop container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" with signal terminated" Apr 16 04:44:21.579292 kubelet[2744]: E0416 04:44:19.354434 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 16 04:44:25.163259 kubelet[2744]: E0416 04:44:23.766028 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:44:29.902633 systemd[1]: cri-containerd-4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7.scope: Deactivated successfully. Apr 16 04:44:30.082906 systemd[1]: cri-containerd-4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7.scope: Consumed 2min 51.790s CPU time. Apr 16 04:44:36.507734 kubelet[2744]: E0416 04:44:32.871295 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:44:38.572630 kubelet[2744]: E0416 04:44:38.554988 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=125\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:44:43.871304 kubelet[2744]: E0416 04:44:43.865079 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:44:45.250100 kubelet[2744]: E0416 04:44:44.594162 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 16 04:44:46.811271 containerd[1480]: time="2026-04-16T04:44:43.707902580Z" level=error msg="ttrpc: received message on inactive stream" stream=31
Apr 16 04:44:46.811271 containerd[1480]: time="2026-04-16T04:44:44.318101872Z" level=error msg="ttrpc: received message on inactive stream" stream=35
Apr 16 04:44:50.583680 containerd[1480]: time="2026-04-16T04:44:50.464568506Z" level=error msg="failed to handle container TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}" error="failed to stop container: context deadline exceeded: unknown"
Apr 16 04:44:53.697252 containerd[1480]: time="2026-04-16T04:44:53.186170237Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}"
Apr 16 04:45:01.177016 containerd[1480]: time="2026-04-16T04:45:00.451931584Z" level=info msg="Kill container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\""
Apr 16 04:45:02.544089 kubelet[2744]: E0416 04:45:02.257268 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:45:04.845918 kubelet[2744]: E0416 04:45:03.411286 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod"
Apr 16 04:45:04.941257 containerd[1480]: time="2026-04-16T04:45:04.662911256Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 16 04:45:05.260396 containerd[1480]: time="2026-04-16T04:45:04.901224081Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 16 04:45:07.271664 containerd[1480]: time="2026-04-16T04:45:07.008264314Z" level=error msg="Failed to handle backOff event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730} for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:45:09.158451 kubelet[2744]: E0416 04:45:09.152428 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:45:13.146246 containerd[1480]: time="2026-04-16T04:45:13.143220225Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}"
Apr 16 04:45:17.525461 kubelet[2744]: E0416 04:45:15.629693 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 16 04:45:18.503199 kubelet[2744]: E0416 04:45:16.847212 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.7:6443/api/v1/namespaces/default/events/localhost.18a6bc0e5fc49f02\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 115 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.659211229 +0000 UTC m=+6.823562135,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:45:20.065263 containerd[1480]: time="2026-04-16T04:45:17.536802501Z" level=info msg="StopContainer for \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" with timeout 30 (s)"
Apr 16 04:45:21.262996 containerd[1480]: time="2026-04-16T04:45:17.435442190Z" level=error msg="get state for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="context deadline exceeded: unknown"
Apr 16 04:45:22.448168 kubelet[2744]: E0416 04:45:17.885694 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:45:23.765965 kubelet[2744]: E0416 04:45:22.969136 2744 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.7:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:45:25.064791 containerd[1480]: time="2026-04-16T04:45:21.918237731Z" level=warning msg="unknown status" status=0
Apr 16 04:45:26.182255 kubelet[2744]: E0416 04:45:26.097241 2744 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
Apr 16 04:45:29.553803 containerd[1480]: time="2026-04-16T04:45:29.553123492Z" level=error msg="get state for 062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51" error="context deadline exceeded: unknown"
Apr 16 04:45:31.110107 containerd[1480]: time="2026-04-16T04:45:30.660373935Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Apr 16 04:45:31.110107 containerd[1480]: time="2026-04-16T04:45:30.750651424Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 16 04:45:32.001055 containerd[1480]: time="2026-04-16T04:45:30.149144749Z" level=error msg="ttrpc: received message on inactive stream" stream=25
Apr 16 04:45:32.585321 containerd[1480]: time="2026-04-16T04:45:32.583904149Z" level=warning msg="unknown status" status=0
Apr 16 04:45:33.926713 kubelet[2744]: E0416 04:45:33.213328 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:45:33.926713 kubelet[2744]: E0416 04:45:33.765954 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=125\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:45:34.945113 containerd[1480]: time="2026-04-16T04:45:32.584066083Z" level=error msg="get state for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="context deadline exceeded: unknown"
Apr 16 04:45:35.714043 containerd[1480]: time="2026-04-16T04:45:35.343634910Z" level=warning msg="unknown status" status=0
Apr 16 04:45:35.714043 containerd[1480]: time="2026-04-16T04:45:34.043635788Z" level=info msg="Stop container \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" with signal terminated"
Apr 16 04:45:37.643608 kubelet[2744]: E0416 04:45:37.018453 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:45:40.493296 containerd[1480]: time="2026-04-16T04:45:40.486080981Z" level=error msg="Failed to handle backOff event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730} for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 16 04:45:41.058067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7-rootfs.mount: Deactivated successfully.
Apr 16 04:45:42.548305 containerd[1480]: time="2026-04-16T04:45:42.337366350Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 16 04:45:44.195442 kubelet[2744]: E0416 04:45:39.419417 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:45:45.913664 containerd[1480]: time="2026-04-16T04:45:45.883183228Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}"
Apr 16 04:45:46.911625 kubelet[2744]: E0416 04:45:46.050982 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 16 04:45:47.711324 kubelet[2744]: E0416 04:45:46.945308 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod"
Apr 16 04:45:56.048667 kubelet[2744]: E0416 04:45:46.957619 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.7:6443/api/v1/namespaces/default/events/localhost.18a6bc0e5fc49f02\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 115 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.659211229 +0000 UTC m=+6.823562135,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:45:57.312761 containerd[1480]: time="2026-04-16T04:45:56.874350414Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Apr 16 04:45:57.979449 containerd[1480]: time="2026-04-16T04:45:57.250030220Z" level=error msg="ttrpc: received message on inactive stream" stream=59
Apr 16 04:45:59.034392 containerd[1480]: time="2026-04-16T04:45:57.834196155Z" level=error msg="Failed to handle backOff event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730} for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:46:02.155740 kubelet[2744]: E0416 04:46:02.153361 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:46:02.155740 kubelet[2744]: E0416 04:46:02.153394 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:46:04.339735 kubelet[2744]: E0416 04:46:04.303684 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=125\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:46:07.178248 systemd[1]: cri-containerd-062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51.scope: Deactivated successfully.
Apr 16 04:46:07.244351 systemd[1]: cri-containerd-062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51.scope: Consumed 1min 2.527s CPU time.
Apr 16 04:46:09.885908 containerd[1480]: time="2026-04-16T04:46:09.869582562Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}"
Apr 16 04:46:11.397004 kubelet[2744]: E0416 04:46:11.382091 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:46:12.310039 kubelet[2744]: E0416 04:46:11.815572 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:46:18.783852 kubelet[2744]: E0416 04:46:18.674055 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 16 04:46:20.862437 containerd[1480]: time="2026-04-16T04:46:20.019347625Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 16 04:46:20.862437 containerd[1480]: time="2026-04-16T04:46:20.849393031Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 16 04:46:21.906877 containerd[1480]: time="2026-04-16T04:46:20.870094842Z" level=error msg="ttrpc: received message on inactive stream" stream=33
Apr 16 04:46:21.906877 containerd[1480]: time="2026-04-16T04:46:20.364278786Z" level=error msg="Failed to handle backOff event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730} for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:46:21.906877 containerd[1480]: time="2026-04-16T04:46:21.097148289Z" level=error msg="ttrpc: received message on inactive stream" stream=35
Apr 16 04:46:21.906877 containerd[1480]: time="2026-04-16T04:46:21.115396323Z" level=error msg="failed to handle container TaskExit event container_id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" pid:2975 exit_status:2 exited_at:{seconds:1776314768 nanos:342729367}" error="failed to stop container: context deadline exceeded: unknown"
Apr 16 04:46:25.032512 containerd[1480]: time="2026-04-16T04:46:25.003818562Z" level=info msg="TaskExit event container_id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" pid:2975 exit_status:2 exited_at:{seconds:1776314768 nanos:342729367}"
Apr 16 04:46:33.382087 kubelet[2744]: E0416 04:46:33.334150 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:46:35.210259 containerd[1480]: time="2026-04-16T04:46:34.686165509Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 16 04:46:36.262276 containerd[1480]: time="2026-04-16T04:46:35.346432441Z" level=error msg="Failed to handle backOff event container_id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" pid:2975 exit_status:2 exited_at:{seconds:1776314768 nanos:342729367} for 062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:46:36.262276 containerd[1480]: time="2026-04-16T04:46:35.381432056Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 16 04:46:38.715773 kubelet[2744]: E0416 04:46:35.512885 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod"
Apr 16 04:46:40.110460 containerd[1480]: time="2026-04-16T04:46:38.886263550Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}"
Apr 16 04:46:40.110460 containerd[1480]: time="2026-04-16T04:46:39.536018721Z" level=info msg="Kill container \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\""
Apr 16 04:46:42.424427 containerd[1480]: time="2026-04-16T04:46:42.263803240Z" level=error msg="StopContainer for \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" to be killed: wait container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\": context deadline exceeded"
Apr 16 04:46:43.695418 kubelet[2744]: E0416 04:46:40.542438 2744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 16 04:46:45.253692 kubelet[2744]: E0416 04:46:44.015941 2744 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7"
Apr 16 04:46:49.073368 kubelet[2744]: E0416 04:46:49.072899 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 04:46:50.105686 kubelet[2744]: E0416 04:46:48.167251 2744 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.7:6443/api/v1/namespaces/default/events/localhost.18a6bc0e5fc49f02\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6bc0e5fc49f02 default 115 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:31:12 +0000 UTC,LastTimestamp:2026-04-16 04:31:14.659211229 +0000 UTC m=+6.823562135,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 04:46:51.289282 containerd[1480]: time="2026-04-16T04:46:51.263884361Z" level=error msg="ttrpc: received message on inactive stream" stream=71
Apr 16 04:46:51.289282 containerd[1480]: time="2026-04-16T04:46:51.264675635Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 16 04:46:51.289282 containerd[1480]: time="2026-04-16T04:46:51.270788281Z" level=error msg="Failed to handle backOff event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730} for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:46:52.071840 kubelet[2744]: E0416 04:46:51.266020 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 04:46:52.071840 kubelet[2744]: E0416 04:46:49.944647 2744 kuberuntime_container.go:871] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="66a243c17a59d09458bf3b09d66260f5" containerName="kube-scheduler" containerID="containerd://4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" gracePeriod=30
Apr 16 04:46:52.071840 kubelet[2744]: E0416 04:46:52.042768 2744 kuberuntime_manager.go:1248] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7"} pod="kube-system/kube-scheduler-localhost"
Apr 16 04:46:52.071840 kubelet[2744]: E0416 04:46:52.044316 2744 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="66a243c17a59d09458bf3b09d66260f5"
Apr 16 04:46:52.071840 kubelet[2744]: E0416 04:46:52.046038 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=124\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 04:46:55.053685 containerd[1480]: time="2026-04-16T04:46:51.393646691Z" level=info msg="TaskExit event container_id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" id:\"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" pid:2975 exit_status:2 exited_at:{seconds:1776314768 nanos:342729367}"
Apr 16 04:46:55.053685 containerd[1480]: time="2026-04-16T04:46:53.477285394Z" level=error msg="get state for 062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51" error="context deadline exceeded: unknown"
Apr 16 04:46:55.053685 containerd[1480]: time="2026-04-16T04:46:53.916717562Z" level=warning msg="unknown status" status=0
Apr 16 04:46:57.159821 kubelet[2744]: E0416 04:46:57.156034 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:46:57.761830 kubelet[2744]: E0416 04:46:56.718948 2744 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=125\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 04:46:58.004661 kubelet[2744]: E0416 04:46:57.998394 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3m22.985s"
Apr 16 04:46:58.461447 containerd[1480]: time="2026-04-16T04:46:58.460905550Z" level=error msg="get state for 062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51" error="context deadline exceeded: unknown"
Apr 16 04:46:58.461447 containerd[1480]: time="2026-04-16T04:46:58.461409327Z" level=warning msg="unknown status" status=0
Apr 16 04:46:59.465559 containerd[1480]: time="2026-04-16T04:46:59.439319684Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Apr 16 04:46:59.538987 containerd[1480]: time="2026-04-16T04:46:59.509521863Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 16 04:47:00.118411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51-rootfs.mount: Deactivated successfully.
Apr 16 04:47:00.362644 containerd[1480]: time="2026-04-16T04:47:00.361505145Z" level=info msg="shim disconnected" id=062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51 namespace=k8s.io
Apr 16 04:47:00.362644 containerd[1480]: time="2026-04-16T04:47:00.361817877Z" level=warning msg="cleaning up after shim disconnected" id=062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51 namespace=k8s.io
Apr 16 04:47:00.362644 containerd[1480]: time="2026-04-16T04:47:00.361828935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:47:00.364922 kubelet[2744]: I0416 04:47:00.362496 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=841.362482068 podStartE2EDuration="14m1.362482068s" podCreationTimestamp="2026-04-16 04:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:33:20.135381323 +0000 UTC m=+132.299732237" watchObservedRunningTime="2026-04-16 04:47:00.362482068 +0000 UTC m=+952.526832975"
Apr 16 04:47:00.364922 kubelet[2744]: E0416 04:47:00.362577 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.72s"
Apr 16 04:47:00.509845 kubelet[2744]: E0416 04:47:00.508487 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:00.598381 containerd[1480]: time="2026-04-16T04:47:00.594796066Z" level=info msg="StopContainer for \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" with timeout 30 (s)"
Apr 16 04:47:00.598381 containerd[1480]: time="2026-04-16T04:47:00.596229401Z" level=info msg="Skipping the sending of signal terminated to container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" because a prior stop with timeout>0 request already sent the signal"
Apr 16 04:47:00.607115 containerd[1480]: time="2026-04-16T04:47:00.602420813Z" level=warning msg="cleanup warnings time=\"2026-04-16T04:47:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 16 04:47:00.635895 containerd[1480]: time="2026-04-16T04:47:00.635634865Z" level=info msg="StopContainer for \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" returns successfully"
Apr 16 04:47:00.656222 kubelet[2744]: E0416 04:47:00.656010 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:00.745299 containerd[1480]: time="2026-04-16T04:47:00.743991438Z" level=info msg="CreateContainer within sandbox \"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 04:47:01.003721 containerd[1480]: time="2026-04-16T04:47:00.997232955Z" level=info msg="CreateContainer within sandbox \"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8\""
Apr 16 04:47:01.018120 containerd[1480]: time="2026-04-16T04:47:01.016762022Z" level=info msg="StartContainer for \"bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8\""
Apr 16 04:47:01.292097 systemd[1]: Started cri-containerd-bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8.scope - libcontainer container bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8.
Apr 16 04:47:01.601953 containerd[1480]: time="2026-04-16T04:47:01.600142931Z" level=info msg="StartContainer for \"bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8\" returns successfully"
Apr 16 04:47:02.804169 kubelet[2744]: E0416 04:47:02.800806 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:02.999536 kubelet[2744]: E0416 04:47:02.999129 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:04.268050 kubelet[2744]: E0416 04:47:04.262073 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:05.768227 kubelet[2744]: E0416 04:47:05.766964 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:07.837405 kubelet[2744]: E0416 04:47:07.836786 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:13.166336 kubelet[2744]: E0416 04:47:13.155591 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:14.294778 kubelet[2744]: E0416 04:47:14.292135 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:18.185754 kubelet[2744]: E0416 04:47:18.184900 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:18.366593 systemd[1]: cri-containerd-bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8.scope: Deactivated successfully.
Apr 16 04:47:18.367186 systemd[1]: cri-containerd-bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8.scope: Consumed 6.388s CPU time.
Apr 16 04:47:20.846343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8-rootfs.mount: Deactivated successfully.
Apr 16 04:47:20.891946 kubelet[2744]: E0416 04:47:20.889786 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.902s"
Apr 16 04:47:20.894703 containerd[1480]: time="2026-04-16T04:47:20.890801453Z" level=info msg="shim disconnected" id=bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8 namespace=k8s.io
Apr 16 04:47:20.894703 containerd[1480]: time="2026-04-16T04:47:20.890940305Z" level=warning msg="cleaning up after shim disconnected" id=bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8 namespace=k8s.io
Apr 16 04:47:20.894703 containerd[1480]: time="2026-04-16T04:47:20.890947450Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 04:47:21.755350 kubelet[2744]: I0416 04:47:21.755279 2744 scope.go:117] "RemoveContainer" containerID="062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51"
Apr 16 04:47:21.755350 kubelet[2744]: I0416 04:47:21.755561 2744 scope.go:117] "RemoveContainer" containerID="bcb69c2b7b3bf1ac4623a79fa68fe0fe9ce43678e9c0ae6563a430d6c2209de8"
Apr 16 04:47:21.777976 kubelet[2744]: E0416 04:47:21.755656 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:22.093359 containerd[1480]: time="2026-04-16T04:47:21.990074020Z" level=info msg="RemoveContainer for \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\""
Apr 16 04:47:22.216020 containerd[1480]: time="2026-04-16T04:47:22.215790827Z" level=info msg="CreateContainer within sandbox \"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 16 04:47:22.242120 containerd[1480]: time="2026-04-16T04:47:22.240077119Z" level=info msg="RemoveContainer for \"062cbc5686a451a617a9b42cac5acdf2cec03e5c8356235fe76e44367c15ae51\" returns successfully"
Apr 16 04:47:22.782208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287057836.mount: Deactivated successfully.
Apr 16 04:47:22.969139 containerd[1480]: time="2026-04-16T04:47:22.963890623Z" level=info msg="CreateContainer within sandbox \"644349821040e95ea568f6e1759fa4c4b8a74a20d79aaf0f1dce324ce64e5914\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d\""
Apr 16 04:47:23.194035 containerd[1480]: time="2026-04-16T04:47:22.969115004Z" level=info msg="StartContainer for \"9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d\""
Apr 16 04:47:24.267908 containerd[1480]: time="2026-04-16T04:47:24.261313785Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}"
Apr 16 04:47:28.213723 kubelet[2744]: E0416 04:47:28.010072 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:33.579413 containerd[1480]: time="2026-04-16T04:47:32.432160516Z" level=info msg="Kill container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\""
Apr 16 04:47:37.511576 containerd[1480]: time="2026-04-16T04:47:36.994060471Z" level=error msg="ttrpc: received message on inactive stream" stream=89
Apr 16 04:47:41.962138 containerd[1480]: time="2026-04-16T04:47:37.558304440Z" level=error msg="ttrpc: received message on inactive stream" stream=91
Apr 16 04:47:41.962138 containerd[1480]: time="2026-04-16T04:47:41.829598239Z" level=error msg="Failed to handle backOff event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730} for 4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 16 04:47:44.283427 kubelet[2744]: E0416 04:47:44.278622 2744 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 04:47:44.507459 kubelet[2744]: E0416 04:47:44.291275 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:45.152363 kubelet[2744]: E0416 04:47:45.145715 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="22.092s"
Apr 16 04:47:49.253501 kubelet[2744]: E0416 04:47:49.252320 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.035s"
Apr 16 04:47:49.445601 kubelet[2744]: E0416 04:47:49.441291 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:47:49.616295 systemd[1]: run-containerd-runc-k8s.io-9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d-runc.W7s9VT.mount: Deactivated successfully.
Apr 16 04:47:49.656686 systemd[1]: Started cri-containerd-9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d.scope - libcontainer container 9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d.
Apr 16 04:47:50.243397 kubelet[2744]: E0416 04:47:50.243038 2744 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d.scope\": RecentStats: unable to find data in memory cache]"
Apr 16 04:47:50.293574 containerd[1480]: time="2026-04-16T04:47:50.293292200Z" level=info msg="StartContainer for \"9418b510d156aec10b1abb327ded63e85b33741d0acb0b5e9a57ee418b16425d\" returns successfully"
Apr 16 04:47:50.749281 kubelet[2744]: E0416 04:47:50.746206 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:52.095351 kubelet[2744]: E0416 04:47:52.095053 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.122s"
Apr 16 04:47:53.012342 kubelet[2744]: E0416 04:47:53.009852 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:54.299209 kubelet[2744]: E0416 04:47:54.244232 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:47:54.575179 kubelet[2744]: E0416 04:47:54.574672 2744 kubelet.go:3012]
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:47:58.544247 kubelet[2744]: E0416 04:47:58.543603 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:00.227412 kubelet[2744]: E0416 04:48:00.227045 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:48:00.272441 kubelet[2744]: E0416 04:48:00.239290 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:04.563649 kubelet[2744]: E0416 04:48:04.563182 2744 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:05.948397 kubelet[2744]: E0416 04:48:05.877350 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:48:06.372688 kubelet[2744]: E0416 04:48:05.991590 2744 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.009s" Apr 16 04:48:09.835626 systemd[1]: Reloading requested from client PID 3279 ('systemctl') (unit session-7.scope)... Apr 16 04:48:09.835663 systemd[1]: Reloading... Apr 16 04:48:10.660001 zram_generator::config[3315]: No configuration found. 
Apr 16 04:48:11.072605 kubelet[2744]: E0416 04:48:10.948778 2744 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:48:12.339819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 04:48:13.019236 systemd[1]: Reloading finished in 3178 ms. Apr 16 04:48:13.142812 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:48:13.186312 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 04:48:13.246694 containerd[1480]: time="2026-04-16T04:48:13.186288620Z" level=error msg="StopContainer for \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" to be killed: wait container \"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\": context canceled" Apr 16 04:48:13.186726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:48:13.186960 systemd[1]: kubelet.service: Consumed 7min 49.252s CPU time, 130.0M memory peak, 0B memory swap peak. Apr 16 04:48:13.298094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:48:15.815305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:48:15.904249 (kubelet)[3366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 04:48:16.534099 kubelet[3366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 16 04:48:16.534099 kubelet[3366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 04:48:16.534099 kubelet[3366]: I0416 04:48:16.533982 3366 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 04:48:16.808592 kubelet[3366]: I0416 04:48:16.797645 3366 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 04:48:16.808592 kubelet[3366]: I0416 04:48:16.798730 3366 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 04:48:16.870700 kubelet[3366]: I0416 04:48:16.808754 3366 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 04:48:16.870700 kubelet[3366]: I0416 04:48:16.809233 3366 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 04:48:16.892140 kubelet[3366]: I0416 04:48:16.877368 3366 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 04:48:16.975776 kubelet[3366]: I0416 04:48:16.975405 3366 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 04:48:16.989460 kubelet[3366]: I0416 04:48:16.984535 3366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:48:17.140808 kubelet[3366]: E0416 04:48:17.138051 3366 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 04:48:17.140808 kubelet[3366]: I0416 04:48:17.139771 3366 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 16 04:48:17.198551 kubelet[3366]: I0416 04:48:17.195112 3366 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 16 04:48:17.226792 kubelet[3366]: I0416 04:48:17.225735 3366 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 04:48:17.226991 kubelet[3366]: I0416 04:48:17.226644 3366 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 
04:48:17.227208 kubelet[3366]: I0416 04:48:17.227024 3366 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 04:48:17.227208 kubelet[3366]: I0416 04:48:17.227033 3366 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 04:48:17.227208 kubelet[3366]: I0416 04:48:17.227170 3366 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 04:48:17.237038 kubelet[3366]: I0416 04:48:17.236540 3366 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:48:17.244566 kubelet[3366]: I0416 04:48:17.237977 3366 kubelet.go:475] "Attempting to sync node with API server" Apr 16 04:48:17.244566 kubelet[3366]: I0416 04:48:17.238012 3366 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:48:17.244566 kubelet[3366]: I0416 04:48:17.241660 3366 kubelet.go:387] "Adding apiserver pod source" Apr 16 04:48:17.244566 kubelet[3366]: I0416 04:48:17.242306 3366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:48:17.264984 kubelet[3366]: I0416 04:48:17.264861 3366 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 04:48:17.266399 kubelet[3366]: I0416 04:48:17.266193 3366 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:48:17.266399 kubelet[3366]: I0416 04:48:17.266279 3366 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 04:48:17.610397 kubelet[3366]: I0416 04:48:17.609406 3366 server.go:1262] "Started kubelet" Apr 16 04:48:17.665117 kubelet[3366]: I0416 04:48:17.643807 3366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:48:17.735255 kubelet[3366]: I0416 04:48:17.734426 3366 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Apr 16 04:48:17.778785 kubelet[3366]: I0416 04:48:17.756361 3366 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:48:17.894556 kubelet[3366]: I0416 04:48:17.797809 3366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:48:17.895163 kubelet[3366]: I0416 04:48:17.801627 3366 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 04:48:17.916349 kubelet[3366]: I0416 04:48:17.801689 3366 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 04:48:17.942299 kubelet[3366]: E0416 04:48:17.801973 3366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:48:17.952400 kubelet[3366]: I0416 04:48:17.950187 3366 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 04:48:18.014413 kubelet[3366]: I0416 04:48:18.014104 3366 server.go:310] "Adding debug handlers to kubelet server" Apr 16 04:48:18.161585 kubelet[3366]: E0416 04:48:18.160649 3366 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:48:18.214170 kubelet[3366]: I0416 04:48:18.213990 3366 reconciler.go:29] "Reconciler: start to sync state" Apr 16 04:48:18.345613 kubelet[3366]: E0416 04:48:18.298228 3366 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:48:18.351021 kubelet[3366]: I0416 04:48:18.350762 3366 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:48:18.395163 kubelet[3366]: I0416 04:48:18.395042 3366 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:48:18.397394 kubelet[3366]: I0416 04:48:18.396600 3366 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:48:18.624018 kubelet[3366]: I0416 04:48:18.604888 3366 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:48:18.629530 kubelet[3366]: I0416 04:48:18.625011 3366 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 04:48:18.667404 kubelet[3366]: I0416 04:48:18.666875 3366 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 04:48:18.678366 kubelet[3366]: I0416 04:48:18.678026 3366 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 04:48:18.723079 kubelet[3366]: I0416 04:48:18.716565 3366 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 04:48:18.736017 kubelet[3366]: E0416 04:48:18.717460 3366 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:48:18.864873 kubelet[3366]: E0416 04:48:18.864429 3366 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:48:19.078404 kubelet[3366]: E0416 04:48:19.077948 3366 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:48:19.297767 kubelet[3366]: I0416 04:48:19.293301 3366 apiserver.go:52] "Watching apiserver" Apr 16 04:48:19.497148 kubelet[3366]: E0416 04:48:19.481540 3366 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:48:19.973306 kubelet[3366]: I0416 04:48:19.971333 3366 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:48:19.982337 kubelet[3366]: I0416 04:48:19.978901 3366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:48:19.982337 kubelet[3366]: I0416 04:48:19.980436 3366 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:48:20.015414 kubelet[3366]: I0416 04:48:20.013950 3366 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 04:48:20.015414 kubelet[3366]: I0416 04:48:20.014557 3366 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 04:48:20.015414 kubelet[3366]: I0416 04:48:20.014886 3366 policy_none.go:49] "None policy: Start" Apr 16 04:48:20.015414 kubelet[3366]: I0416 04:48:20.015019 3366 memory_manager.go:187] "Starting 
memorymanager" policy="None" Apr 16 04:48:20.015414 kubelet[3366]: I0416 04:48:20.015293 3366 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 04:48:20.139525 kubelet[3366]: I0416 04:48:20.016622 3366 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 04:48:20.139525 kubelet[3366]: I0416 04:48:20.016638 3366 policy_none.go:47] "Start" Apr 16 04:48:20.317943 kubelet[3366]: E0416 04:48:20.307253 3366 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:48:20.605581 kubelet[3366]: E0416 04:48:20.605336 3366 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:48:20.606967 kubelet[3366]: I0416 04:48:20.606396 3366 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:48:20.606967 kubelet[3366]: I0416 04:48:20.606410 3366 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:48:20.607671 kubelet[3366]: I0416 04:48:20.607310 3366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:48:20.980750 kubelet[3366]: E0416 04:48:20.978433 3366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 04:48:21.405811 kubelet[3366]: I0416 04:48:21.405719 3366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:48:21.671823 kubelet[3366]: I0416 04:48:21.670748 3366 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 04:48:21.671823 kubelet[3366]: I0416 04:48:21.671694 3366 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:48:22.009644 kubelet[3366]: I0416 04:48:22.007384 3366 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:48:22.009644 kubelet[3366]: I0416 04:48:22.009538 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cc8263309e5e610f62f7f401f49f55d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5cc8263309e5e610f62f7f401f49f55d\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:48:22.009644 kubelet[3366]: I0416 04:48:22.009565 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cc8263309e5e610f62f7f401f49f55d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5cc8263309e5e610f62f7f401f49f55d\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:48:22.009644 kubelet[3366]: I0416 04:48:22.009654 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cc8263309e5e610f62f7f401f49f55d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5cc8263309e5e610f62f7f401f49f55d\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:48:22.041265 kubelet[3366]: I0416 04:48:22.010110 3366 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 
04:48:22.115739 kubelet[3366]: I0416 04:48:22.113205 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:48:22.177419 kubelet[3366]: I0416 04:48:22.174631 3366 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 04:48:22.190131 kubelet[3366]: I0416 04:48:22.189982 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:48:22.224724 kubelet[3366]: I0416 04:48:22.190183 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:48:22.224724 kubelet[3366]: I0416 04:48:22.190236 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:48:22.224724 kubelet[3366]: I0416 04:48:22.190415 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:48:22.224724 kubelet[3366]: I0416 04:48:22.190433 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:48:22.356187 kubelet[3366]: E0416 04:48:22.351047 3366 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:48:22.370513 kubelet[3366]: E0416 04:48:22.366985 3366 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 04:48:22.393707 kubelet[3366]: E0416 04:48:22.373684 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:22.488057 kubelet[3366]: E0416 04:48:22.487076 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:22.519630 kubelet[3366]: E0416 04:48:22.518627 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:23.438900 kubelet[3366]: E0416 04:48:23.427993 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:23.438900 kubelet[3366]: E0416 04:48:23.428219 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:23.507203 kubelet[3366]: E0416 04:48:23.450445 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:24.864702 kubelet[3366]: E0416 04:48:24.851892 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:25.040650 kubelet[3366]: E0416 04:48:25.037331 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:26.479266 kubelet[3366]: E0416 04:48:26.476131 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:30.285059 kubelet[3366]: E0416 04:48:30.284807 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:32.549667 kubelet[3366]: E0416 04:48:32.545137 3366 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.803s" Apr 16 04:48:32.746611 kubelet[3366]: E0416 04:48:32.745769 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:33.337027 kubelet[3366]: E0416 04:48:33.336126 3366 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:45.476950 kubelet[3366]: I0416 04:48:45.476546 3366 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 04:48:45.505324 containerd[1480]: time="2026-04-16T04:48:45.504675255Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 04:48:45.556644 kubelet[3366]: I0416 04:48:45.550441 3366 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 04:48:46.010662 containerd[1480]: time="2026-04-16T04:48:46.010318782Z" level=info msg="TaskExit event container_id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" id:\"4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7\" pid:2956 exited_at:{seconds:1776314670 nanos:879116730}" Apr 16 04:48:47.269886 containerd[1480]: time="2026-04-16T04:48:47.269393008Z" level=info msg="shim disconnected" id=4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7 namespace=k8s.io Apr 16 04:48:47.269886 containerd[1480]: time="2026-04-16T04:48:47.269906972Z" level=warning msg="cleaning up after shim disconnected" id=4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7 namespace=k8s.io Apr 16 04:48:47.269886 containerd[1480]: time="2026-04-16T04:48:47.270000903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:48:49.287755 kubelet[3366]: I0416 04:48:49.281360 3366 scope.go:117] "RemoveContainer" containerID="4c156d6a49e78b3ec26800168ff04cd026a1be81a59e72255f07c9af989f65f7" Apr 16 04:48:49.287755 kubelet[3366]: E0416 04:48:49.281783 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:48:50.214326 containerd[1480]: 
time="2026-04-16T04:48:50.214119615Z" level=info msg="CreateContainer within sandbox \"e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 16 04:48:51.341344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974536411.mount: Deactivated successfully. Apr 16 04:48:51.666264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037319657.mount: Deactivated successfully. Apr 16 04:48:52.010280 containerd[1480]: time="2026-04-16T04:48:51.950808794Z" level=info msg="CreateContainer within sandbox \"e5ac11065ebeed995688ed5c666d4c4cb7a23481c846cf92360ae9219484fcb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9bf335d6967b3aaec92630a071061bdfe7b948d58c6fb8c070479c23f3cabca8\"" Apr 16 04:48:52.181520 containerd[1480]: time="2026-04-16T04:48:52.181238589Z" level=info msg="StartContainer for \"9bf335d6967b3aaec92630a071061bdfe7b948d58c6fb8c070479c23f3cabca8\"" Apr 16 04:48:52.617204 systemd[1]: Started cri-containerd-9bf335d6967b3aaec92630a071061bdfe7b948d58c6fb8c070479c23f3cabca8.scope - libcontainer container 9bf335d6967b3aaec92630a071061bdfe7b948d58c6fb8c070479c23f3cabca8. 
Apr 16 04:48:52.926258 containerd[1480]: time="2026-04-16T04:48:52.925397425Z" level=info msg="StartContainer for \"9bf335d6967b3aaec92630a071061bdfe7b948d58c6fb8c070479c23f3cabca8\" returns successfully"
Apr 16 04:48:53.726019 kubelet[3366]: E0416 04:48:53.722669 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:55.378732 kubelet[3366]: E0416 04:48:55.369967 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:48:56.638667 kubelet[3366]: E0416 04:48:56.636005 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:02.295011 kubelet[3366]: E0416 04:49:02.294550 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:04.815445 kubelet[3366]: E0416 04:49:04.815184 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:07.673124 kubelet[3366]: E0416 04:49:07.672209 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:08.040383 kubelet[3366]: I0416 04:49:08.037546 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8263d935-eecf-46c4-aefb-0c78c93cf9de-kube-proxy\") pod \"kube-proxy-xnhnz\" (UID: \"8263d935-eecf-46c4-aefb-0c78c93cf9de\") " pod="kube-system/kube-proxy-xnhnz"
Apr 16 04:49:08.040383 kubelet[3366]: I0416 04:49:08.037814 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8263d935-eecf-46c4-aefb-0c78c93cf9de-xtables-lock\") pod \"kube-proxy-xnhnz\" (UID: \"8263d935-eecf-46c4-aefb-0c78c93cf9de\") " pod="kube-system/kube-proxy-xnhnz"
Apr 16 04:49:08.040383 kubelet[3366]: I0416 04:49:08.037831 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8263d935-eecf-46c4-aefb-0c78c93cf9de-lib-modules\") pod \"kube-proxy-xnhnz\" (UID: \"8263d935-eecf-46c4-aefb-0c78c93cf9de\") " pod="kube-system/kube-proxy-xnhnz"
Apr 16 04:49:08.040383 kubelet[3366]: I0416 04:49:08.037846 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jffg\" (UniqueName: \"kubernetes.io/projected/8263d935-eecf-46c4-aefb-0c78c93cf9de-kube-api-access-8jffg\") pod \"kube-proxy-xnhnz\" (UID: \"8263d935-eecf-46c4-aefb-0c78c93cf9de\") " pod="kube-system/kube-proxy-xnhnz"
Apr 16 04:49:08.265699 systemd[1]: Created slice kubepods-besteffort-pod8263d935_eecf_46c4_aefb_0c78c93cf9de.slice - libcontainer container kubepods-besteffort-pod8263d935_eecf_46c4_aefb_0c78c93cf9de.slice.
Apr 16 04:49:09.088288 kubelet[3366]: E0416 04:49:09.087984 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:09.106535 containerd[1480]: time="2026-04-16T04:49:09.106251493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnhnz,Uid:8263d935-eecf-46c4-aefb-0c78c93cf9de,Namespace:kube-system,Attempt:0,}"
Apr 16 04:49:09.459944 kubelet[3366]: I0416 04:49:09.441061 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86c47a9b-d513-4d05-8695-8b8254c62c02-var-lib-calico\") pod \"tigera-operator-5588576f44-x2gzb\" (UID: \"86c47a9b-d513-4d05-8695-8b8254c62c02\") " pod="tigera-operator/tigera-operator-5588576f44-x2gzb"
Apr 16 04:49:09.472564 kubelet[3366]: I0416 04:49:09.462449 3366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9mhr\" (UniqueName: \"kubernetes.io/projected/86c47a9b-d513-4d05-8695-8b8254c62c02-kube-api-access-s9mhr\") pod \"tigera-operator-5588576f44-x2gzb\" (UID: \"86c47a9b-d513-4d05-8695-8b8254c62c02\") " pod="tigera-operator/tigera-operator-5588576f44-x2gzb"
Apr 16 04:49:09.503308 systemd[1]: Created slice kubepods-besteffort-pod86c47a9b_d513_4d05_8695_8b8254c62c02.slice - libcontainer container kubepods-besteffort-pod86c47a9b_d513_4d05_8695_8b8254c62c02.slice.
Apr 16 04:49:09.682710 containerd[1480]: time="2026-04-16T04:49:09.679610820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:49:09.734543 containerd[1480]: time="2026-04-16T04:49:09.689785374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:49:09.734543 containerd[1480]: time="2026-04-16T04:49:09.689863224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:49:09.734543 containerd[1480]: time="2026-04-16T04:49:09.691368626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:49:09.799190 systemd[1]: Started cri-containerd-5c4164ce765ed473b7bb649a47da2e0bddfe4476f15b2b7531ce84e9886feb62.scope - libcontainer container 5c4164ce765ed473b7bb649a47da2e0bddfe4476f15b2b7531ce84e9886feb62.
Apr 16 04:49:10.036139 containerd[1480]: time="2026-04-16T04:49:10.032422507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-x2gzb,Uid:86c47a9b-d513-4d05-8695-8b8254c62c02,Namespace:tigera-operator,Attempt:0,}"
Apr 16 04:49:10.270210 containerd[1480]: time="2026-04-16T04:49:10.268013424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnhnz,Uid:8263d935-eecf-46c4-aefb-0c78c93cf9de,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c4164ce765ed473b7bb649a47da2e0bddfe4476f15b2b7531ce84e9886feb62\""
Apr 16 04:49:10.311691 kubelet[3366]: E0416 04:49:10.302425 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:10.397946 containerd[1480]: time="2026-04-16T04:49:10.393013547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 04:49:10.397946 containerd[1480]: time="2026-04-16T04:49:10.393345657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 04:49:10.397946 containerd[1480]: time="2026-04-16T04:49:10.393355439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:49:10.408094 containerd[1480]: time="2026-04-16T04:49:10.407449169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 04:49:10.561531 containerd[1480]: time="2026-04-16T04:49:10.559974418Z" level=info msg="CreateContainer within sandbox \"5c4164ce765ed473b7bb649a47da2e0bddfe4476f15b2b7531ce84e9886feb62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 04:49:10.710139 systemd[1]: Started cri-containerd-6f3b4afb2b67ec2608552c46336f53c3981ea5195285732f600275dd24262b55.scope - libcontainer container 6f3b4afb2b67ec2608552c46336f53c3981ea5195285732f600275dd24262b55.
Apr 16 04:49:11.220405 containerd[1480]: time="2026-04-16T04:49:11.220232743Z" level=info msg="CreateContainer within sandbox \"5c4164ce765ed473b7bb649a47da2e0bddfe4476f15b2b7531ce84e9886feb62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58f0dda47b9362a57955577f29d42a93bc3816e4c51005aaa7b7e2aa717f018d\""
Apr 16 04:49:11.240410 containerd[1480]: time="2026-04-16T04:49:11.237111216Z" level=info msg="StartContainer for \"58f0dda47b9362a57955577f29d42a93bc3816e4c51005aaa7b7e2aa717f018d\""
Apr 16 04:49:11.777611 containerd[1480]: time="2026-04-16T04:49:11.773547332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-x2gzb,Uid:86c47a9b-d513-4d05-8695-8b8254c62c02,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f3b4afb2b67ec2608552c46336f53c3981ea5195285732f600275dd24262b55\""
Apr 16 04:49:11.843280 containerd[1480]: time="2026-04-16T04:49:11.839092516Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 04:49:11.848081 systemd[1]: Started cri-containerd-58f0dda47b9362a57955577f29d42a93bc3816e4c51005aaa7b7e2aa717f018d.scope - libcontainer container 58f0dda47b9362a57955577f29d42a93bc3816e4c51005aaa7b7e2aa717f018d.
Apr 16 04:49:12.355781 containerd[1480]: time="2026-04-16T04:49:12.352083767Z" level=info msg="StartContainer for \"58f0dda47b9362a57955577f29d42a93bc3816e4c51005aaa7b7e2aa717f018d\" returns successfully"
Apr 16 04:49:14.882602 kubelet[3366]: E0416 04:49:14.881998 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:17.182773 kubelet[3366]: E0416 04:49:17.167878 3366 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:49:17.182773 kubelet[3366]: I0416 04:49:17.182650 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xnhnz" podStartSLOduration=29.182393369 podStartE2EDuration="29.182393369s" podCreationTimestamp="2026-04-16 04:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:49:17.082242298 +0000 UTC m=+61.109466157" watchObservedRunningTime="2026-04-16 04:49:17.182393369 +0000 UTC m=+61.209617224"
Apr 16 04:49:18.182404 kubelet[3366]: E0416 04:49:18.179389 3366 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.462s"
Apr 16 04:49:20.049609 kubelet[3366]: E0416 04:49:20.036947 3366 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.319s"
Apr 16 04:49:21.420687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659904992.mount: Deactivated successfully.
Apr 16 04:49:29.285905 containerd[1480]: time="2026-04-16T04:49:29.285566119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:49:29.307556 containerd[1480]: time="2026-04-16T04:49:29.286833179Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 16 04:49:29.325547 containerd[1480]: time="2026-04-16T04:49:29.324906337Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:49:29.351310 containerd[1480]: time="2026-04-16T04:49:29.350623499Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:49:29.406763 containerd[1480]: time="2026-04-16T04:49:29.406542372Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 17.567402342s"
Apr 16 04:49:29.406763 containerd[1480]: time="2026-04-16T04:49:29.406589314Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 16 04:49:29.861325 containerd[1480]: time="2026-04-16T04:49:29.860908817Z" level=info msg="CreateContainer within sandbox \"6f3b4afb2b67ec2608552c46336f53c3981ea5195285732f600275dd24262b55\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 04:49:29.950145 containerd[1480]: time="2026-04-16T04:49:29.949758622Z" level=info msg="CreateContainer within sandbox \"6f3b4afb2b67ec2608552c46336f53c3981ea5195285732f600275dd24262b55\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8059e25422f0341dced962be95d75b94cb2f3ae3e207689b1a1a6362b6596e41\""
Apr 16 04:49:29.969786 containerd[1480]: time="2026-04-16T04:49:29.960606781Z" level=info msg="StartContainer for \"8059e25422f0341dced962be95d75b94cb2f3ae3e207689b1a1a6362b6596e41\""
Apr 16 04:49:30.106206 systemd[1]: Started cri-containerd-8059e25422f0341dced962be95d75b94cb2f3ae3e207689b1a1a6362b6596e41.scope - libcontainer container 8059e25422f0341dced962be95d75b94cb2f3ae3e207689b1a1a6362b6596e41.
Apr 16 04:49:30.657249 containerd[1480]: time="2026-04-16T04:49:30.638288344Z" level=info msg="StartContainer for \"8059e25422f0341dced962be95d75b94cb2f3ae3e207689b1a1a6362b6596e41\" returns successfully"
Apr 16 04:49:32.993332 sudo[1646]: pam_unix(sudo:session): session closed for user root
Apr 16 04:49:33.081173 sshd[1643]: pam_unix(sshd:session): session closed for user core
Apr 16 04:49:33.736279 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:49512.service: Deactivated successfully.
Apr 16 04:49:33.814208 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 04:49:33.817441 kubelet[3366]: E0416 04:49:33.752803 3366 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.016s"
Apr 16 04:49:33.815250 systemd[1]: session-7.scope: Consumed 3min 53.229s CPU time, 163.8M memory peak, 0B memory swap peak.
Apr 16 04:49:33.817407 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit.
Apr 16 04:49:33.969541 systemd-logind[1466]: Removed session 7.