Apr 28 01:12:32.579033 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026 Apr 28 01:12:32.579062 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 01:12:32.579077 kernel: BIOS-provided physical RAM map: Apr 28 01:12:32.579085 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 28 01:12:32.579093 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 28 01:12:32.579101 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 28 01:12:32.579110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 28 01:12:32.579119 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 28 01:12:32.579127 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 28 01:12:32.579137 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 28 01:12:32.579176 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 28 01:12:32.579185 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 28 01:12:32.579193 kernel: NX (Execute Disable) protection: active Apr 28 01:12:32.579202 kernel: APIC: Static calls initialized Apr 28 01:12:32.579213 kernel: SMBIOS 2.8 present. 
Apr 28 01:12:32.579224 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 28 01:12:32.579234 kernel: Hypervisor detected: KVM Apr 28 01:12:32.579242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 28 01:12:32.579251 kernel: kvm-clock: using sched offset of 6982394791 cycles Apr 28 01:12:32.579261 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 28 01:12:32.579269 kernel: tsc: Detected 2793.438 MHz processor Apr 28 01:12:32.579277 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 28 01:12:32.579285 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 28 01:12:32.579293 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 28 01:12:32.579305 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 28 01:12:32.579314 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 28 01:12:32.579322 kernel: Using GB pages for direct mapping Apr 28 01:12:32.579331 kernel: ACPI: Early table checksum verification disabled Apr 28 01:12:32.579340 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 28 01:12:32.579348 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579357 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579366 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579375 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 28 01:12:32.579386 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579395 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579404 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579413 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Apr 28 01:12:32.579422 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 28 01:12:32.579431 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 28 01:12:32.579440 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 28 01:12:32.579453 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 28 01:12:32.579464 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 28 01:12:32.579473 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 28 01:12:32.579483 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 28 01:12:32.579492 kernel: No NUMA configuration found Apr 28 01:12:32.579501 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 28 01:12:32.579510 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 28 01:12:32.579521 kernel: Zone ranges: Apr 28 01:12:32.579531 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 28 01:12:32.579540 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 28 01:12:32.579550 kernel: Normal empty Apr 28 01:12:32.579559 kernel: Movable zone start for each node Apr 28 01:12:32.579568 kernel: Early memory node ranges Apr 28 01:12:32.579577 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 28 01:12:32.579587 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 28 01:12:32.579596 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 28 01:12:32.579622 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 01:12:32.579634 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 28 01:12:32.579644 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 28 01:12:32.579653 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 28 01:12:32.579663 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 28 01:12:32.579672 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 28 01:12:32.579682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 28 01:12:32.579691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 28 01:12:32.579698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 28 01:12:32.579706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 28 01:12:32.579716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 28 01:12:32.579724 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 28 01:12:32.579732 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 28 01:12:32.579740 kernel: TSC deadline timer available Apr 28 01:12:32.579748 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 28 01:12:32.579756 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 28 01:12:32.579766 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 28 01:12:32.579775 kernel: kvm-guest: setup PV sched yield Apr 28 01:12:32.579785 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 28 01:12:32.579796 kernel: Booting paravirtualized kernel on KVM Apr 28 01:12:32.579806 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 28 01:12:32.579970 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 28 01:12:32.580114 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 28 01:12:32.580125 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 28 01:12:32.580134 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 28 01:12:32.580144 kernel: kvm-guest: PV spinlocks enabled Apr 28 01:12:32.580194 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 28 01:12:32.580206 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 01:12:32.580222 kernel: random: crng init done Apr 28 01:12:32.580232 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 28 01:12:32.580241 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 28 01:12:32.580251 kernel: Fallback order for Node 0: 0 Apr 28 01:12:32.580260 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 28 01:12:32.580270 kernel: Policy zone: DMA32 Apr 28 01:12:32.580279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 28 01:12:32.580289 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137900K reserved, 0K cma-reserved) Apr 28 01:12:32.580302 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 28 01:12:32.580311 kernel: ftrace: allocating 37996 entries in 149 pages Apr 28 01:12:32.580321 kernel: ftrace: allocated 149 pages with 4 groups Apr 28 01:12:32.580330 kernel: Dynamic Preempt: voluntary Apr 28 01:12:32.580340 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 28 01:12:32.580354 kernel: rcu: RCU event tracing is enabled. Apr 28 01:12:32.580364 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 28 01:12:32.580374 kernel: Trampoline variant of Tasks RCU enabled. Apr 28 01:12:32.580383 kernel: Rude variant of Tasks RCU enabled. Apr 28 01:12:32.580395 kernel: Tracing variant of Tasks RCU enabled. Apr 28 01:12:32.580405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 28 01:12:32.580414 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 28 01:12:32.580423 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 28 01:12:32.580433 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 28 01:12:32.580442 kernel: Console: colour VGA+ 80x25 Apr 28 01:12:32.580452 kernel: printk: console [ttyS0] enabled Apr 28 01:12:32.580461 kernel: ACPI: Core revision 20230628 Apr 28 01:12:32.580470 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 28 01:12:32.580481 kernel: APIC: Switch to symmetric I/O mode setup Apr 28 01:12:32.580490 kernel: x2apic enabled Apr 28 01:12:32.580500 kernel: APIC: Switched APIC routing to: physical x2apic Apr 28 01:12:32.580509 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 28 01:12:32.580518 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 28 01:12:32.580528 kernel: kvm-guest: setup PV IPIs Apr 28 01:12:32.580537 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 28 01:12:32.580548 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 01:12:32.580568 kernel: Calibrating delay loop (skipped) preset value.. 
5586.87 BogoMIPS (lpj=2793438) Apr 28 01:12:32.580578 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 28 01:12:32.580589 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 28 01:12:32.580598 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 28 01:12:32.580611 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 28 01:12:32.580621 kernel: Spectre V2 : Mitigation: Retpolines Apr 28 01:12:32.580631 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 28 01:12:32.580641 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 28 01:12:32.580654 kernel: RETBleed: Vulnerable Apr 28 01:12:32.580664 kernel: Speculative Store Bypass: Vulnerable Apr 28 01:12:32.580674 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 28 01:12:32.580684 kernel: GDS: Unknown: Dependent on hypervisor status Apr 28 01:12:32.580694 kernel: active return thunk: its_return_thunk Apr 28 01:12:32.580704 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 28 01:12:32.580714 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 28 01:12:32.580724 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 28 01:12:32.580735 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 28 01:12:32.580747 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 28 01:12:32.580757 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 28 01:12:32.580767 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 28 01:12:32.580778 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 28 01:12:32.580788 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 28 01:12:32.580798 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 28 01:12:32.580808 kernel: x86/fpu: 
xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 28 01:12:32.580838 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 28 01:12:32.580849 kernel: Freeing SMP alternatives memory: 32K Apr 28 01:12:32.580861 kernel: pid_max: default: 32768 minimum: 301 Apr 28 01:12:32.580870 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 28 01:12:32.580878 kernel: landlock: Up and running. Apr 28 01:12:32.580886 kernel: SELinux: Initializing. Apr 28 01:12:32.580895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 01:12:32.580903 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 01:12:32.580912 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 28 01:12:32.580921 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 01:12:32.580930 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 01:12:32.580943 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 01:12:32.580953 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 28 01:12:32.580963 kernel: signal: max sigframe size: 3632 Apr 28 01:12:32.580973 kernel: rcu: Hierarchical SRCU implementation. Apr 28 01:12:32.580984 kernel: rcu: Max phase no-delay instances is 400. Apr 28 01:12:32.580994 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 28 01:12:32.581004 kernel: smp: Bringing up secondary CPUs ... Apr 28 01:12:32.581014 kernel: smpboot: x86: Booting SMP configuration: Apr 28 01:12:32.581024 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 28 01:12:32.581037 kernel: smp: Brought up 1 node, 4 CPUs Apr 28 01:12:32.581047 kernel: smpboot: Max logical packages: 1 Apr 28 01:12:32.581057 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 28 01:12:32.581067 kernel: devtmpfs: initialized Apr 28 01:12:32.581077 kernel: x86/mm: Memory block size: 128MB Apr 28 01:12:32.581087 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 28 01:12:32.581097 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 28 01:12:32.581108 kernel: pinctrl core: initialized pinctrl subsystem Apr 28 01:12:32.581118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 28 01:12:32.581130 kernel: audit: initializing netlink subsys (disabled) Apr 28 01:12:32.581140 kernel: audit: type=2000 audit(1777338749.954:1): state=initialized audit_enabled=0 res=1 Apr 28 01:12:32.581272 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 28 01:12:32.581284 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 28 01:12:32.581295 kernel: cpuidle: using governor menu Apr 28 01:12:32.581306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 28 01:12:32.581314 kernel: dca service started, version 1.12.1 Apr 28 01:12:32.581323 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 28 01:12:32.581331 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 28 01:12:32.581343 kernel: PCI: Using configuration type 1 for base access Apr 28 01:12:32.581351 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 28 01:12:32.581360 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 28 01:12:32.581368 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 28 01:12:32.581377 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 28 01:12:32.581386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 28 01:12:32.581396 kernel: ACPI: Added _OSI(Module Device) Apr 28 01:12:32.581406 kernel: ACPI: Added _OSI(Processor Device) Apr 28 01:12:32.581416 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 28 01:12:32.581429 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 28 01:12:32.581439 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 28 01:12:32.581449 kernel: ACPI: Interpreter enabled Apr 28 01:12:32.581459 kernel: ACPI: PM: (supports S0 S3 S5) Apr 28 01:12:32.581469 kernel: ACPI: Using IOAPIC for interrupt routing Apr 28 01:12:32.581480 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 28 01:12:32.581490 kernel: PCI: Using E820 reservations for host bridge windows Apr 28 01:12:32.581500 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 28 01:12:32.581510 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 28 01:12:32.581842 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 28 01:12:32.582129 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 28 01:12:32.582260 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 28 01:12:32.582273 kernel: PCI host bridge to bus 0000:00 Apr 28 01:12:32.582413 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 28 01:12:32.582488 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 28 01:12:32.582572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 28 01:12:32.582651 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Apr 28 01:12:32.582725 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 28 01:12:32.582799 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 28 01:12:32.583043 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 28 01:12:32.583203 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 28 01:12:32.583307 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 28 01:12:32.583396 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 28 01:12:32.584494 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 28 01:12:32.584605 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 28 01:12:32.584701 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 28 01:12:32.584806 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 28 01:12:32.585239 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 28 01:12:32.585326 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 28 01:12:32.585425 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 28 01:12:32.585533 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 28 01:12:32.585628 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 28 01:12:32.585721 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 28 01:12:32.585811 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 28 01:12:32.586416 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 28 01:12:32.586519 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 28 01:12:32.586610 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 28 01:12:32.586701 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 28 01:12:32.586781 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Apr 28 01:12:32.587064 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 28 01:12:32.587206 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 28 01:12:32.587316 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 28 01:12:32.587414 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 28 01:12:32.587508 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 28 01:12:32.587602 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 28 01:12:32.587688 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 28 01:12:32.587701 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 28 01:12:32.587713 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 28 01:12:32.587723 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 28 01:12:32.587734 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 28 01:12:32.587748 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 28 01:12:32.587758 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 28 01:12:32.587769 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 28 01:12:32.587780 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 28 01:12:32.587791 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 28 01:12:32.587801 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 28 01:12:32.587811 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 28 01:12:32.587992 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 28 01:12:32.588003 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 28 01:12:32.588018 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 28 01:12:32.588028 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 28 01:12:32.588038 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 28 01:12:32.588048 
kernel: iommu: Default domain type: Translated Apr 28 01:12:32.588059 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 28 01:12:32.588069 kernel: PCI: Using ACPI for IRQ routing Apr 28 01:12:32.588079 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 28 01:12:32.588090 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 28 01:12:32.588100 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 28 01:12:32.588247 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 28 01:12:32.588334 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 28 01:12:32.588425 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 28 01:12:32.588438 kernel: vgaarb: loaded Apr 28 01:12:32.588449 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 28 01:12:32.588459 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 28 01:12:32.588470 kernel: clocksource: Switched to clocksource kvm-clock Apr 28 01:12:32.588480 kernel: VFS: Disk quotas dquot_6.6.0 Apr 28 01:12:32.588494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 28 01:12:32.588504 kernel: pnp: PnP ACPI init Apr 28 01:12:32.588612 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 28 01:12:32.588627 kernel: pnp: PnP ACPI: found 6 devices Apr 28 01:12:32.588638 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 28 01:12:32.588648 kernel: NET: Registered PF_INET protocol family Apr 28 01:12:32.588658 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 28 01:12:32.588667 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 28 01:12:32.588678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 28 01:12:32.588692 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 28 01:12:32.588703 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 01:12:32.588713 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 01:12:32.588723 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 01:12:32.588734 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 01:12:32.588744 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 01:12:32.588754 kernel: NET: Registered PF_XDP protocol family Apr 28 01:12:32.588853 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 28 01:12:32.588921 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 28 01:12:32.588984 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 28 01:12:32.589046 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 28 01:12:32.589107 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 28 01:12:32.589516 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 28 01:12:32.589537 kernel: PCI: CLS 0 bytes, default 64 Apr 28 01:12:32.589547 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 28 01:12:32.589556 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 01:12:32.589564 kernel: Initialise system trusted keyrings Apr 28 01:12:32.589577 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 01:12:32.589585 kernel: Key type asymmetric registered Apr 28 01:12:32.589593 kernel: Asymmetric key parser 'x509' registered Apr 28 01:12:32.589601 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 28 01:12:32.589608 kernel: io scheduler mq-deadline registered Apr 28 01:12:32.589617 kernel: io scheduler kyber registered Apr 28 01:12:32.589624 kernel: io scheduler bfq registered Apr 28 01:12:32.589632 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Apr 28 01:12:32.589641 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 28 01:12:32.589651 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 28 01:12:32.589659 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 28 01:12:32.589667 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 01:12:32.589675 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 28 01:12:32.589683 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 28 01:12:32.589692 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 28 01:12:32.589700 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 28 01:12:32.589780 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 28 01:12:32.589876 kernel: rtc_cmos 00:04: registered as rtc0 Apr 28 01:12:32.589891 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 28 01:12:32.589967 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T01:12:31 UTC (1777338751) Apr 28 01:12:32.590046 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 28 01:12:32.590060 kernel: intel_pstate: CPU model not supported Apr 28 01:12:32.590070 kernel: NET: Registered PF_INET6 protocol family Apr 28 01:12:32.590081 kernel: Segment Routing with IPv6 Apr 28 01:12:32.590091 kernel: In-situ OAM (IOAM) with IPv6 Apr 28 01:12:32.590102 kernel: NET: Registered PF_PACKET protocol family Apr 28 01:12:32.590116 kernel: Key type dns_resolver registered Apr 28 01:12:32.590124 kernel: IPI shorthand broadcast: enabled Apr 28 01:12:32.590134 kernel: sched_clock: Marking stable (1534095004, 416427093)->(2356086779, -405564682) Apr 28 01:12:32.590176 kernel: registered taskstats version 1 Apr 28 01:12:32.590186 kernel: Loading compiled-in X.509 certificates Apr 28 01:12:32.590196 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18' Apr 28 01:12:32.590205 kernel: Key type .fscrypt 
registered Apr 28 01:12:32.590213 kernel: Key type fscrypt-provisioning registered Apr 28 01:12:32.590221 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 01:12:32.590232 kernel: ima: Allocated hash algorithm: sha1 Apr 28 01:12:32.590240 kernel: ima: No architecture policies found Apr 28 01:12:32.590249 kernel: clk: Disabling unused clocks Apr 28 01:12:32.590257 kernel: Freeing unused kernel image (initmem) memory: 42884K Apr 28 01:12:32.590265 kernel: Write protecting the kernel read-only data: 36864k Apr 28 01:12:32.590273 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 28 01:12:32.590282 kernel: Run /init as init process Apr 28 01:12:32.590289 kernel: with arguments: Apr 28 01:12:32.590298 kernel: /init Apr 28 01:12:32.590307 kernel: with environment: Apr 28 01:12:32.590315 kernel: HOME=/ Apr 28 01:12:32.590323 kernel: TERM=linux Apr 28 01:12:32.590334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 01:12:32.590345 systemd[1]: Detected virtualization kvm. Apr 28 01:12:32.590354 systemd[1]: Detected architecture x86-64. Apr 28 01:12:32.590363 systemd[1]: Running in initrd. Apr 28 01:12:32.590371 systemd[1]: No hostname configured, using default hostname. Apr 28 01:12:32.590382 systemd[1]: Hostname set to . Apr 28 01:12:32.590391 systemd[1]: Initializing machine ID from VM UUID. Apr 28 01:12:32.590400 systemd[1]: Queued start job for default target initrd.target. Apr 28 01:12:32.590408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 01:12:32.590417 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 28 01:12:32.590426 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 28 01:12:32.590435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 01:12:32.590444 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 01:12:32.590454 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 28 01:12:32.590481 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 28 01:12:32.590492 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 28 01:12:32.590502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 01:12:32.590514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 01:12:32.590524 systemd[1]: Reached target paths.target - Path Units. Apr 28 01:12:32.590534 systemd[1]: Reached target slices.target - Slice Units. Apr 28 01:12:32.590545 systemd[1]: Reached target swap.target - Swaps. Apr 28 01:12:32.590555 systemd[1]: Reached target timers.target - Timer Units. Apr 28 01:12:32.590565 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 01:12:32.590575 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 01:12:32.590586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 01:12:32.590596 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 01:12:32.590608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 01:12:32.590618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 01:12:32.590628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 28 01:12:32.590639 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 01:12:32.590650 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 01:12:32.590661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 01:12:32.590671 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 01:12:32.590681 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 01:12:32.590694 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 01:12:32.590706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 01:12:32.590716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:12:32.590727 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 01:12:32.590737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:12:32.590748 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 01:12:32.590787 systemd-journald[193]: Collecting audit messages is disabled.
Apr 28 01:12:32.590996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 01:12:32.591015 systemd-journald[193]: Journal started
Apr 28 01:12:32.591042 systemd-journald[193]: Runtime Journal (/run/log/journal/39ad4c1e380940e581b135f5b87f4caf) is 6.0M, max 48.4M, 42.3M free.
Apr 28 01:12:32.577111 systemd-modules-load[195]: Inserted module 'overlay'
Apr 28 01:12:32.596394 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 01:12:32.597308 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 01:12:32.612954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 01:12:32.841717 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 01:12:32.841746 kernel: Bridge firewalling registered
Apr 28 01:12:32.626224 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 28 01:12:32.854624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 01:12:32.861949 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:12:32.862667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:12:32.871519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:12:32.892843 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:12:32.901679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 01:12:32.931642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:12:32.947582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:12:32.951558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:12:32.982282 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 01:12:32.989788 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 01:12:33.052999 dracut-cmdline[231]: dracut-dracut-053
Apr 28 01:12:33.064572 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 01:12:33.087633 systemd-resolved[236]: Positive Trust Anchors:
Apr 28 01:12:33.087643 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 01:12:33.087682 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 01:12:33.091790 systemd-resolved[236]: Defaulting to hostname 'linux'.
Apr 28 01:12:33.093388 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 01:12:33.102363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:12:33.406259 kernel: SCSI subsystem initialized
Apr 28 01:12:33.461186 kernel: Loading iSCSI transport class v2.0-870.
Apr 28 01:12:33.500732 kernel: iscsi: registered transport (tcp)
Apr 28 01:12:33.531133 kernel: hrtimer: interrupt took 3729376 ns
Apr 28 01:12:33.581345 kernel: iscsi: registered transport (qla4xxx)
Apr 28 01:12:33.581469 kernel: QLogic iSCSI HBA Driver
Apr 28 01:12:33.673611 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 28 01:12:33.693753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 28 01:12:33.805413 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
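[Editor's aside, not part of the log: the bridge warning above means that iptables/ip6tables/arptables filtering of bridged traffic now requires the br_netfilter module to be loaded explicitly, which systemd-modules-load does here on the next line. The usual persistent fix on any systemd host is a modules-load drop-in (`echo br_netfilter > /etc/modules-load.d/br_netfilter.conf` as root, plus `modprobe br_netfilter` to load it immediately). Sketched below against a scratch directory so it runs unprivileged; the real target path is /etc/modules-load.d/.]

```shell
# Sketch: persist br_netfilter loading via a systemd modules-load drop-in.
# A scratch directory stands in for /etc so this runs without root.
etc=$(mktemp -d)
mkdir -p "$etc/modules-load.d"
printf 'br_netfilter\n' > "$etc/modules-load.d/br_netfilter.conf"
# systemd-modules-load.service reads *.conf in modules-load.d at every boot,
# one module name per line:
cat "$etc/modules-load.d/br_netfilter.conf"
```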
Apr 28 01:12:33.805538 kernel: device-mapper: uevent: version 1.0.3
Apr 28 01:12:33.806973 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 28 01:12:33.907530 kernel: raid6: avx512x4 gen() 32925 MB/s
Apr 28 01:12:33.949273 kernel: raid6: avx512x2 gen() 3758 MB/s
Apr 28 01:12:33.965285 kernel: raid6: avx512x1 gen() 29434 MB/s
Apr 28 01:12:33.983562 kernel: raid6: avx2x4 gen() 26761 MB/s
Apr 28 01:12:34.000343 kernel: raid6: avx2x2 gen() 23494 MB/s
Apr 28 01:12:34.030396 kernel: raid6: avx2x1 gen() 12695 MB/s
Apr 28 01:12:34.030516 kernel: raid6: using algorithm avx512x4 gen() 32925 MB/s
Apr 28 01:12:34.048361 kernel: raid6: .... xor() 9375 MB/s, rmw enabled
Apr 28 01:12:34.048537 kernel: raid6: using avx512x2 recovery algorithm
Apr 28 01:12:34.081411 kernel: xor: automatically using best checksumming function avx
Apr 28 01:12:34.469707 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 28 01:12:34.489740 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 01:12:34.545699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:12:34.569860 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Apr 28 01:12:34.575688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:12:34.579970 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 28 01:12:34.639026 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Apr 28 01:12:34.724615 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 01:12:34.736020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 01:12:34.805565 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:12:34.864024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 28 01:12:34.889287 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 28 01:12:34.893583 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 01:12:34.898521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:12:34.899982 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 01:12:34.924403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 28 01:12:34.934277 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 28 01:12:34.940339 kernel: cryptd: max_cpu_qlen set to 1000
Apr 28 01:12:34.952015 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 28 01:12:34.959174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 28 01:12:34.959266 kernel: GPT:9289727 != 19775487
Apr 28 01:12:34.959279 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 28 01:12:34.964465 kernel: GPT:9289727 != 19775487
Apr 28 01:12:34.964524 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 28 01:12:34.964539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:12:34.962734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 01:12:34.963248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:12:34.980641 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:12:34.982993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 01:12:34.984361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:12:34.990469 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:12:35.003215 kernel: libata version 3.00 loaded.
Apr 28 01:12:35.032288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
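[Editor's aside, not part of the log: the "GPT:9289727 != 19775487" messages above mean the backup GPT header sits at LBA 9289727 while the virtual disk actually ends at LBA 19775487, i.e. the disk was grown after the image was written. The arithmetic below checks this against the virtio_blk size reported in the same span; on a real system the backup header is relocated to the last LBA, e.g. with `sgdisk --move-second-header`, which is effectively what the initrd's disk-uuid.service does automatically further down the log.]

```shell
# Sketch: where the backup GPT header *should* be for the disk above.
blocks=19775488                 # 512-byte logical blocks, from the virtio_blk line
last_lba=$((blocks - 1))        # GPT places the backup header on the last LBA
echo "$last_lba"                # matches the 19775487 in the kernel complaint
```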
Apr 28 01:12:35.037554 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 01:12:35.047488 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 28 01:12:35.050313 kernel: AES CTR mode by8 optimization enabled
Apr 28 01:12:35.057279 kernel: ahci 0000:00:1f.2: version 3.0
Apr 28 01:12:35.058217 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 28 01:12:35.062139 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 28 01:12:35.062551 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 28 01:12:35.066187 kernel: scsi host0: ahci
Apr 28 01:12:35.066434 kernel: scsi host1: ahci
Apr 28 01:12:35.067198 kernel: scsi host2: ahci
Apr 28 01:12:35.068214 kernel: scsi host3: ahci
Apr 28 01:12:35.069223 kernel: scsi host4: ahci
Apr 28 01:12:35.070246 kernel: scsi host5: ahci
Apr 28 01:12:35.070344 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 28 01:12:35.070352 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 28 01:12:35.070364 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 28 01:12:35.070371 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 28 01:12:35.070380 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 28 01:12:35.070387 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 28 01:12:35.086464 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (482)
Apr 28 01:12:35.088814 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 28 01:12:35.285813 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Apr 28 01:12:35.292350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:12:35.358950 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 28 01:12:35.385235 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 28 01:12:35.385301 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 28 01:12:35.387475 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 28 01:12:35.387565 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 28 01:12:35.435681 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 28 01:12:35.435711 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 28 01:12:35.435724 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 28 01:12:35.435734 kernel: ata3.00: applying bridge limits
Apr 28 01:12:35.435745 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 28 01:12:35.435756 kernel: ata3.00: configured for UDMA/100
Apr 28 01:12:35.435767 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 28 01:12:35.435686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 28 01:12:35.444698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 01:12:35.462693 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 28 01:12:35.474816 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:12:35.488946 disk-uuid[561]: Primary Header is updated.
Apr 28 01:12:35.488946 disk-uuid[561]: Secondary Entries is updated.
Apr 28 01:12:35.488946 disk-uuid[561]: Secondary Header is updated.
Apr 28 01:12:35.494259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:12:35.506317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:12:35.572699 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 28 01:12:35.573029 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 28 01:12:35.589933 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:12:35.604221 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 28 01:12:36.536766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:12:36.545616 disk-uuid[574]: The operation has completed successfully.
Apr 28 01:12:36.652142 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 28 01:12:36.652308 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 28 01:12:36.706801 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 28 01:12:36.751264 sh[598]: Success
Apr 28 01:12:36.870727 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 28 01:12:36.949925 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 28 01:12:36.964945 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 28 01:12:36.978367 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 28 01:12:37.066240 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93
Apr 28 01:12:37.066344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:12:37.069628 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 28 01:12:37.072466 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 28 01:12:37.072557 kernel: BTRFS info (device dm-0): using free space tree
Apr 28 01:12:37.102600 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 28 01:12:37.133549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 28 01:12:37.159440 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 28 01:12:37.182636 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 28 01:12:37.239827 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:12:37.239947 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:12:37.243530 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:12:37.262304 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:12:37.299027 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 28 01:12:37.306796 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:12:37.368674 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 28 01:12:37.387956 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 28 01:12:37.578346 ignition[696]: Ignition 2.19.0
Apr 28 01:12:37.578368 ignition[696]: Stage: fetch-offline
Apr 28 01:12:37.578414 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:12:37.578423 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:12:37.578542 ignition[696]: parsed url from cmdline: ""
Apr 28 01:12:37.578547 ignition[696]: no config URL provided
Apr 28 01:12:37.578553 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 01:12:37.578561 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Apr 28 01:12:37.578592 ignition[696]: op(1): [started] loading QEMU firmware config module
Apr 28 01:12:37.578597 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 28 01:12:37.617651 ignition[696]: op(1): [finished] loading QEMU firmware config module
Apr 28 01:12:37.641291 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 01:12:37.659328 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 01:12:37.759486 systemd-networkd[787]: lo: Link UP
Apr 28 01:12:37.759511 systemd-networkd[787]: lo: Gained carrier
Apr 28 01:12:37.763732 systemd-networkd[787]: Enumeration completed
Apr 28 01:12:37.764442 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:12:37.764445 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 01:12:37.765693 systemd-networkd[787]: eth0: Link UP
Apr 28 01:12:37.765696 systemd-networkd[787]: eth0: Gained carrier
Apr 28 01:12:37.765704 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:12:37.766375 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 01:12:37.768804 systemd[1]: Reached target network.target - Network.
Apr 28 01:12:37.837526 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 01:12:37.903359 ignition[696]: parsing config with SHA512: 90f68690d9aecbd93ba4280066a9346a33000692f00c6fa1f9b1bc3927dd820633a1195858b2a875da54862f3899f88708be4beefc0300a3f81229bbb5262809
Apr 28 01:12:37.944283 unknown[696]: fetched base config from "system"
Apr 28 01:12:37.945113 unknown[696]: fetched user config from "qemu"
Apr 28 01:12:37.948326 ignition[696]: fetch-offline: fetch-offline passed
Apr 28 01:12:37.948447 ignition[696]: Ignition finished successfully
Apr 28 01:12:37.955308 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 01:12:37.956602 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 28 01:12:37.986910 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 01:12:38.084332 ignition[791]: Ignition 2.19.0
Apr 28 01:12:38.084354 ignition[791]: Stage: kargs
Apr 28 01:12:38.084576 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:12:38.084589 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:12:38.086742 ignition[791]: kargs: kargs passed
Apr 28 01:12:38.086817 ignition[791]: Ignition finished successfully
Apr 28 01:12:38.104514 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 01:12:38.144515 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 01:12:38.196840 ignition[798]: Ignition 2.19.0
Apr 28 01:12:38.197011 ignition[798]: Stage: disks
Apr 28 01:12:38.197592 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:12:38.206272 systemd[1]: Finished ignition-disks.service - Ignition (disks).
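[Editor's aside, not part of the log: the "parsing config with SHA512:" entry above shows Ignition logging a SHA-512 digest of the rendered config before applying it, which is useful for confirming that the config a machine actually consumed matches the one you wrote. The same digest can be reproduced for any local config with sha512sum; the JSON below is a hypothetical stand-in, not the config from this boot.]

```shell
# Sketch: reproduce an Ignition-style config digest with sha512sum.
# The config content here is a made-up minimal example.
digest=$(printf '{"ignition":{"version":"3.0.0"}}' | sha512sum | cut -d' ' -f1)
echo "${#digest}"   # a SHA-512 digest in hex is always 128 characters
```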
Apr 28 01:12:38.197609 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:12:38.240487 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 01:12:38.199231 ignition[798]: disks: disks passed
Apr 28 01:12:38.249338 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 01:12:38.199297 ignition[798]: Ignition finished successfully
Apr 28 01:12:38.253567 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 01:12:38.258469 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 01:12:38.261371 systemd[1]: Reached target basic.target - Basic System.
Apr 28 01:12:38.283539 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 01:12:38.347751 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 28 01:12:38.367341 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 01:12:38.411551 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 01:12:38.834232 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none.
Apr 28 01:12:38.839972 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 01:12:38.846288 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 01:12:38.887336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 01:12:38.898431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 01:12:38.916124 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Apr 28 01:12:38.917771 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 28 01:12:38.958472 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:12:38.958502 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:12:38.918544 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 01:12:38.967245 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:12:38.918686 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 01:12:38.976790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 01:12:38.997191 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:12:39.001961 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 01:12:39.040848 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 01:12:39.172453 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 01:12:39.189507 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Apr 28 01:12:39.197955 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 01:12:39.224262 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 01:12:39.459474 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 28 01:12:39.649831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 01:12:39.669682 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 01:12:39.677772 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 01:12:39.742980 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:12:39.708412 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 01:12:39.918701 ignition[930]: INFO : Ignition 2.19.0
Apr 28 01:12:39.924662 ignition[930]: INFO : Stage: mount
Apr 28 01:12:39.924662 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:12:39.924662 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:12:39.934561 ignition[930]: INFO : mount: mount passed
Apr 28 01:12:39.934561 ignition[930]: INFO : Ignition finished successfully
Apr 28 01:12:39.931364 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 01:12:39.940837 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 01:12:39.957418 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 01:12:39.976597 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 01:12:40.020760 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Apr 28 01:12:40.077745 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:12:40.077855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:12:40.077893 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:12:40.102538 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:12:40.106637 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 01:12:40.152253 ignition[960]: INFO : Ignition 2.19.0
Apr 28 01:12:40.152253 ignition[960]: INFO : Stage: files
Apr 28 01:12:40.158342 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:12:40.158342 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:12:40.158342 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 01:12:40.170430 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 28 01:12:40.170430 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 01:12:40.187133 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 01:12:40.198663 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 28 01:12:40.198663 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 01:12:40.198431 unknown[960]: wrote ssh authorized keys file for user: core
Apr 28 01:12:40.267454 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 28 01:12:40.267454 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 28 01:12:40.267454 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 01:12:40.267454 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 28 01:12:40.373360 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 28 01:12:40.635392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 01:12:40.635392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 01:12:40.645676 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 01:12:40.741935 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:12:40.741935 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:12:40.741935 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:12:40.741935 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 28 01:12:40.827264 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 28 01:12:41.879191 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 28 01:12:41.889922 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 28 01:12:41.893866 ignition[960]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 28 01:12:42.071649 ignition[960]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 01:12:42.081514 ignition[960]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 01:12:42.085988 ignition[960]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 28 01:12:42.085988 ignition[960]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 28 01:12:42.085988 ignition[960]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 01:12:42.085988 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 01:12:42.085988 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 01:12:42.085988 ignition[960]: INFO : files: files passed
Apr 28 01:12:42.085988 ignition[960]: INFO : Ignition finished successfully
Apr 28 01:12:42.106386 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 01:12:42.139978 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 01:12:42.157450 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 01:12:42.174124 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 01:12:42.174300 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 01:12:42.217188 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 01:12:42.245184 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:12:42.250331 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:12:42.250331 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:12:42.251197 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 01:12:42.276488 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 01:12:42.306976 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 01:12:42.428849 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 01:12:42.433258 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 01:12:42.454189 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 01:12:42.456389 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 01:12:42.473865 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 01:12:42.508621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 01:12:42.577313 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 01:12:42.649598 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 01:12:42.691725 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:12:42.703430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:12:42.715840 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 01:12:42.721365 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 01:12:42.721551 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 01:12:42.728658 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 01:12:42.740608 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 01:12:42.758470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 01:12:42.762428 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 01:12:42.762818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 01:12:42.763007 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 01:12:42.771438 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 01:12:42.776401 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 01:12:42.776704 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 01:12:42.776988 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 01:12:42.779127 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 01:12:42.779314 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 01:12:42.805694 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 01:12:42.806508 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 01:12:42.839962 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 01:12:42.840698 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 01:12:42.850332 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 01:12:42.850545 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 01:12:42.865478 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 01:12:42.865674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 01:12:42.866717 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 01:12:42.921600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 01:12:42.924700 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 01:12:42.931782 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 01:12:42.932364 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 01:12:42.936862 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 01:12:42.937014 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 01:12:42.943043 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 01:12:42.943405 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 01:12:42.968709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 01:12:42.969346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 01:12:42.994460 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 01:12:42.994657 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 01:12:43.022606 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 01:12:43.027314 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 01:12:43.027532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:12:43.071359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 01:12:43.080581 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 01:12:43.080828 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:12:43.083520 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 01:12:43.142744 ignition[1014]: INFO : Ignition 2.19.0
Apr 28 01:12:43.142744 ignition[1014]: INFO : Stage: umount
Apr 28 01:12:43.142744 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:12:43.142744 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:12:43.142744 ignition[1014]: INFO : umount: umount passed
Apr 28 01:12:43.142744 ignition[1014]: INFO : Ignition finished successfully
Apr 28 01:12:43.083676 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 01:12:43.133000 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 01:12:43.133439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 01:12:43.147511 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 01:12:43.147639 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 01:12:43.149569 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 01:12:43.150817 systemd[1]: Stopped target network.target - Network.
Apr 28 01:12:43.155856 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 01:12:43.156239 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 01:12:43.160023 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 01:12:43.160067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 01:12:43.164007 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 01:12:43.164090 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 01:12:43.183512 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 01:12:43.183584 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 01:12:43.266036 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 01:12:43.266781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 01:12:43.281770 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 01:12:43.289013 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 01:12:43.300016 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 01:12:43.300086 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 01:12:43.361801 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 01:12:43.361812 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 28 01:12:43.361953 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 01:12:43.373918 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 01:12:43.374005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:12:43.380214 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 01:12:43.380336 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 01:12:43.387223 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 01:12:43.387271 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 01:12:43.408397 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 01:12:43.428738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 01:12:43.428855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 01:12:43.433521 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 01:12:43.433585 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:12:43.441051 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 01:12:43.441117 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:12:43.462960 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:12:43.555517 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 01:12:43.555693 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:12:43.565017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 01:12:43.565096 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 01:12:43.582588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 01:12:43.582645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 01:12:43.609459 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 01:12:43.609576 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 01:12:43.618271 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 01:12:43.618356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 01:12:43.630353 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 01:12:43.630435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:12:43.669819 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 01:12:43.672919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 01:12:43.673351 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:12:43.682373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 01:12:43.682499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:12:43.687371 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 01:12:43.687571 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 01:12:43.742615 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 01:12:43.746276 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 01:12:43.751245 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 01:12:43.770044 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 01:12:43.806338 systemd[1]: Switching root.
Apr 28 01:12:43.909632 systemd-journald[193]: Journal stopped
Apr 28 01:12:46.982900 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 28 01:12:46.983238 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 01:12:46.983288 kernel: SELinux: policy capability open_perms=1
Apr 28 01:12:46.983299 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 01:12:46.983312 kernel: SELinux: policy capability always_check_network=0
Apr 28 01:12:46.983323 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 01:12:46.983334 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 01:12:46.983344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 01:12:46.983357 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 01:12:46.983370 kernel: audit: type=1403 audit(1777338764.481:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 01:12:46.983386 systemd[1]: Successfully loaded SELinux policy in 75.976ms.
Apr 28 01:12:46.983410 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.308ms.
Apr 28 01:12:46.983427 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 01:12:46.983442 systemd[1]: Detected virtualization kvm.
Apr 28 01:12:46.983458 systemd[1]: Detected architecture x86-64.
Apr 28 01:12:46.983471 systemd[1]: Detected first boot.
Apr 28 01:12:46.983485 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 01:12:46.983499 zram_generator::config[1084]: No configuration found.
Apr 28 01:12:46.983514 systemd[1]: Populated /etc with preset unit settings.
Apr 28 01:12:46.983528 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 01:12:46.983541 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 01:12:46.983565 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 01:12:46.983579 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 01:12:46.983592 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 01:12:46.983606 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 01:12:46.983619 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 01:12:46.983633 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 01:12:46.983649 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 01:12:46.983662 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 01:12:46.983679 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 01:12:46.983693 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 01:12:46.983707 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 01:12:46.983722 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 01:12:46.983736 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 01:12:46.983749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 01:12:46.983764 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 01:12:46.983777 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 01:12:46.983791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 01:12:46.983807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:12:46.983820 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 01:12:46.983834 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 01:12:46.983848 systemd[1]: Reached target swap.target - Swaps.
Apr 28 01:12:46.983861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 01:12:46.983873 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 01:12:46.983886 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 01:12:46.983925 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 01:12:46.983943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 01:12:46.983957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 01:12:46.983970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 01:12:46.983981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 01:12:46.983991 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 01:12:46.984003 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 01:12:46.984014 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 01:12:46.984026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:46.984038 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 01:12:46.984053 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 01:12:46.984064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 01:12:46.984075 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 01:12:46.984085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:12:46.984096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 01:12:46.984107 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 01:12:46.984118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:12:46.984131 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 01:12:46.984144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:12:46.987327 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 01:12:46.987409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:12:46.987424 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 01:12:46.987439 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 28 01:12:46.987454 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 28 01:12:46.987466 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 01:12:46.987477 kernel: fuse: init (API version 7.39)
Apr 28 01:12:46.987489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 01:12:46.987527 kernel: loop: module loaded
Apr 28 01:12:46.987543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 01:12:46.987586 systemd-journald[1173]: Collecting audit messages is disabled.
Apr 28 01:12:46.987618 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 01:12:46.987632 systemd-journald[1173]: Journal started
Apr 28 01:12:46.987659 systemd-journald[1173]: Runtime Journal (/run/log/journal/39ad4c1e380940e581b135f5b87f4caf) is 6.0M, max 48.4M, 42.3M free.
Apr 28 01:12:47.005178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 01:12:47.033317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:47.042431 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 01:12:47.055769 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 01:12:47.060430 kernel: ACPI: bus type drm_connector registered
Apr 28 01:12:47.060832 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 01:12:47.065441 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 01:12:47.068512 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 01:12:47.072688 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 01:12:47.078706 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 01:12:47.083827 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 01:12:47.089576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:12:47.096132 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 01:12:47.096662 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 01:12:47.105529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:12:47.106117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:12:47.165883 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 01:12:47.166480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 01:12:47.170646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:12:47.170850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:12:47.175579 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 01:12:47.178555 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 01:12:47.183744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:12:47.184965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:12:47.189097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:12:47.194838 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 01:12:47.198684 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 01:12:47.239576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:12:47.247499 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 01:12:47.269649 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 01:12:47.296853 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 01:12:47.301477 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 01:12:47.346251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 01:12:47.361678 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 01:12:47.365871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 01:12:47.374598 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 01:12:47.383471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 01:12:47.390469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 01:12:47.406035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 01:12:47.436824 systemd-journald[1173]: Time spent on flushing to /var/log/journal/39ad4c1e380940e581b135f5b87f4caf is 29.426ms for 941 entries.
Apr 28 01:12:47.436824 systemd-journald[1173]: System Journal (/var/log/journal/39ad4c1e380940e581b135f5b87f4caf) is 8.0M, max 195.6M, 187.6M free.
Apr 28 01:12:47.482515 systemd-journald[1173]: Received client request to flush runtime journal.
Apr 28 01:12:47.449651 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 01:12:47.459837 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 01:12:47.468435 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 01:12:47.477689 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 01:12:47.489864 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 01:12:47.504715 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 01:12:47.534874 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 28 01:12:47.554707 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:12:47.564978 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Apr 28 01:12:47.564999 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Apr 28 01:12:47.596599 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 01:12:47.654493 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 01:12:47.744266 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 01:12:47.781675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 01:12:47.847011 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Apr 28 01:12:47.847042 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Apr 28 01:12:47.856878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:12:49.308822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 01:12:49.373492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:12:49.463731 systemd-udevd[1250]: Using default interface naming scheme 'v255'.
Apr 28 01:12:49.688548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:12:49.741954 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 01:12:49.779530 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 01:12:49.833664 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 28 01:12:49.933415 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 01:12:49.995255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1268)
Apr 28 01:12:49.995325 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 28 01:12:50.012608 kernel: ACPI: button: Power Button [PWRF]
Apr 28 01:12:50.076361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 01:12:50.102612 systemd-networkd[1254]: lo: Link UP
Apr 28 01:12:50.102625 systemd-networkd[1254]: lo: Gained carrier
Apr 28 01:12:50.108345 systemd-networkd[1254]: Enumeration completed
Apr 28 01:12:50.132516 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 01:12:50.142782 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:12:50.142895 systemd-networkd[1254]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 01:12:50.150851 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 01:12:50.151442 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 28 01:12:50.151577 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 01:12:50.151665 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 28 01:12:50.149131 systemd-networkd[1254]: eth0: Link UP
Apr 28 01:12:50.149281 systemd-networkd[1254]: eth0: Gained carrier
Apr 28 01:12:50.149316 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:12:50.156591 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 01:12:50.169226 systemd-networkd[1254]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 01:12:50.330143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:12:50.739191 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 01:12:51.091071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:12:51.488753 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 28 01:12:51.493454 systemd-networkd[1254]: eth0: Gained IPv6LL
Apr 28 01:12:51.516122 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 28 01:12:51.522751 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 28 01:12:51.552413 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 01:12:51.650807 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 28 01:12:51.657339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 01:12:51.692758 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 28 01:12:51.781017 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 01:12:51.820511 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 01:12:51.826268 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 01:12:51.829848 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 01:12:51.830830 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 01:12:51.834118 systemd[1]: Reached target machines.target - Containers.
Apr 28 01:12:51.839914 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 01:12:51.859623 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 01:12:51.890313 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 01:12:51.893329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:12:51.895437 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 01:12:51.962340 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 01:12:51.973392 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 01:12:51.979594 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 01:12:51.991222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 01:12:51.995210 kernel: loop0: detected capacity change from 0 to 228704
Apr 28 01:12:52.035083 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 01:12:52.040760 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 01:12:52.107141 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 01:12:52.235064 kernel: loop1: detected capacity change from 0 to 142488
Apr 28 01:12:52.476273 kernel: loop2: detected capacity change from 0 to 140768
Apr 28 01:12:52.689257 kernel: loop3: detected capacity change from 0 to 228704
Apr 28 01:12:52.820600 kernel: loop4: detected capacity change from 0 to 142488
Apr 28 01:12:52.978346 kernel: loop5: detected capacity change from 0 to 140768
Apr 28 01:12:53.053268 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 01:12:53.056598 (sd-merge)[1322]: Merged extensions into '/usr'.
Apr 28 01:12:53.067339 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 01:12:53.068709 systemd[1]: Reloading...
Apr 28 01:12:53.194362 zram_generator::config[1347]: No configuration found.
Apr 28 01:12:53.675597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:12:53.830016 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 01:12:53.891714 systemd[1]: Reloading finished in 821 ms.
Apr 28 01:12:53.950830 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 01:12:53.955904 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 01:12:53.980753 systemd[1]: Starting ensure-sysext.service...
Apr 28 01:12:53.995702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 01:12:54.045580 systemd[1]: Reloading requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)...
Apr 28 01:12:54.045649 systemd[1]: Reloading...
Apr 28 01:12:54.105744 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 01:12:54.106714 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 01:12:54.123849 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 01:12:54.124596 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Apr 28 01:12:54.124655 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Apr 28 01:12:54.136775 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 01:12:54.137044 systemd-tmpfiles[1394]: Skipping /boot
Apr 28 01:12:54.156361 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 01:12:54.156374 systemd-tmpfiles[1394]: Skipping /boot
Apr 28 01:12:54.245309 zram_generator::config[1428]: No configuration found.
Apr 28 01:12:54.799601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:12:55.001691 systemd[1]: Reloading finished in 952 ms.
Apr 28 01:12:55.095591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:12:55.129406 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 01:12:55.143759 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 01:12:55.175759 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 01:12:55.276526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 01:12:55.294755 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 01:12:55.318513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:55.318698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:12:55.335725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:12:55.343947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:12:55.358664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:12:55.364474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:12:55.364744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:55.368667 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 01:12:55.374438 augenrules[1492]: No rules
Apr 28 01:12:55.375079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:12:55.375394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:12:55.383755 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 01:12:55.388417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:12:55.389543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:12:55.395022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:12:55.395928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:12:55.465751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 01:12:55.466102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 01:12:55.474865 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 01:12:55.480670 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 01:12:55.500763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:55.501107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:12:55.520140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:12:55.529833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:12:55.538893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:12:55.539137 systemd-resolved[1476]: Positive Trust Anchors:
Apr 28 01:12:55.539441 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 01:12:55.539489 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 01:12:55.542847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:12:55.543624 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 01:12:55.543714 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:55.545833 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 01:12:55.549588 systemd-resolved[1476]: Defaulting to hostname 'linux'.
Apr 28 01:12:55.552687 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 01:12:55.557709 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 01:12:55.562769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:12:55.564530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:12:55.572592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:12:55.572916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:12:55.583856 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:12:55.584889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:12:55.657355 systemd[1]: Reached target network.target - Network.
Apr 28 01:12:55.661796 systemd[1]: Reached target network-online.target - Network is Online.
Apr 28 01:12:55.666362 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:12:55.675098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:55.676178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:12:55.701240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:12:55.706826 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 01:12:55.775709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:12:55.785065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:12:55.788510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:12:55.788699 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 01:12:55.788774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:12:55.794937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:12:55.795453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:12:55.800578 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 01:12:55.800749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 01:12:55.805537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:12:55.805707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:12:55.817858 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:12:55.819602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:12:55.826712 systemd[1]: Finished ensure-sysext.service.
Apr 28 01:12:55.840130 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 01:12:55.840614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 01:12:55.880582 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 01:12:56.162627 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 01:12:56.165868 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 01:12:56.682796 systemd-resolved[1476]: Clock change detected. Flushing caches.
Apr 28 01:12:56.682829 systemd-timesyncd[1538]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 01:12:56.682870 systemd-timesyncd[1538]: Initial clock synchronization to Tue 2026-04-28 01:12:56.682689 UTC.
Apr 28 01:12:56.687869 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 01:12:56.691092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 01:12:56.694584 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 01:12:56.698760 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 01:12:56.698829 systemd[1]: Reached target paths.target - Path Units.
Apr 28 01:12:56.701202 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 01:12:56.706969 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 01:12:56.710907 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 01:12:56.718335 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 01:12:56.736493 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 01:12:56.743372 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 01:12:56.754787 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 01:12:56.759112 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 01:12:56.762986 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 01:12:56.771189 systemd[1]: Reached target basic.target - Basic System.
Apr 28 01:12:56.776805 systemd[1]: System is tainted: cgroupsv1
Apr 28 01:12:56.779117 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 01:12:56.779732 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 01:12:56.814727 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 01:12:56.820950 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 28 01:12:56.827787 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 01:12:56.833106 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 01:12:56.843037 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 01:12:56.844784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 01:12:56.847976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:12:56.856910 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 01:12:56.859736 jq[1546]: false
Apr 28 01:12:56.892103 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 28 01:12:56.903801 extend-filesystems[1548]: Found loop3
Apr 28 01:12:56.904799 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found loop4
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found loop5
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found sr0
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda1
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda2
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda3
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found usr
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda4
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda6
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda7
Apr 28 01:12:56.911380 extend-filesystems[1548]: Found vda9
Apr 28 01:12:56.911380 extend-filesystems[1548]: Checking size of /dev/vda9
Apr 28 01:12:56.961193 dbus-daemon[1544]: [system] SELinux support is enabled
Apr 28 01:12:56.927651 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 01:12:56.995048 extend-filesystems[1548]: Resized partition /dev/vda9
Apr 28 01:12:56.963660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 01:12:56.975221 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 01:12:56.983192 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 01:12:56.990898 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 01:12:57.001519 extend-filesystems[1579]: resize2fs 1.47.1 (20-May-2024)
Apr 28 01:12:57.010473 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 28 01:12:57.014740 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 01:12:57.024759 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 01:12:57.070177 jq[1582]: true
Apr 28 01:12:57.083209 update_engine[1577]: I20260428 01:12:57.082384 1577 main.cc:92] Flatcar Update Engine starting
Apr 28 01:12:57.085105 update_engine[1577]: I20260428 01:12:57.084956 1577 update_check_scheduler.cc:74] Next update check in 3m44s
Apr 28 01:12:57.086664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 01:12:57.086884 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 01:12:57.110778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1585)
Apr 28 01:12:57.098184 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 01:12:57.101395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 01:12:57.105060 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 28 01:12:57.122212 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 01:12:57.143346 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 01:12:57.154739 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 28 01:12:57.196797 (ntainerd)[1599]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 01:12:57.204945 jq[1597]: true
Apr 28 01:12:57.218195 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 28 01:12:57.218195 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 28 01:12:57.218195 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 28 01:12:57.237871 extend-filesystems[1548]: Resized filesystem in /dev/vda9
Apr 28 01:12:57.233011 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 01:12:57.233289 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 01:12:57.247218 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 28 01:12:57.247727 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 28 01:12:57.254273 systemd-logind[1574]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 28 01:12:57.254296 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 01:12:57.256579 systemd-logind[1574]: New seat seat0.
Apr 28 01:12:57.302826 tar[1596]: linux-amd64/LICENSE
Apr 28 01:12:57.305991 tar[1596]: linux-amd64/helm
Apr 28 01:12:57.363216 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 01:12:57.368814 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 01:12:57.373839 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 28 01:12:57.411724 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 01:12:57.374030 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 01:12:57.374055 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 01:12:57.378139 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 01:12:57.378166 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 01:12:57.386855 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 01:12:57.399866 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 01:12:57.404229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 01:12:57.415277 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 28 01:12:57.596651 locksmithd[1638]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 01:12:57.784972 containerd[1599]: time="2026-04-28T01:12:57.783075603Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 28 01:12:57.860048 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 01:12:57.909193 containerd[1599]: time="2026-04-28T01:12:57.908773477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.918816 containerd[1599]: time="2026-04-28T01:12:57.918584677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:12:57.918816 containerd[1599]: time="2026-04-28T01:12:57.918635679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 28 01:12:57.918816 containerd[1599]: time="2026-04-28T01:12:57.918656354Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 28 01:12:57.918816 containerd[1599]: time="2026-04-28T01:12:57.918825322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 28 01:12:57.919021 containerd[1599]: time="2026-04-28T01:12:57.918841957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919021 containerd[1599]: time="2026-04-28T01:12:57.918897704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919021 containerd[1599]: time="2026-04-28T01:12:57.918910619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919303 containerd[1599]: time="2026-04-28T01:12:57.919134479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919303 containerd[1599]: time="2026-04-28T01:12:57.919155205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919303 containerd[1599]: time="2026-04-28T01:12:57.919169444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919303 containerd[1599]: time="2026-04-28T01:12:57.919180681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.919640 containerd[1599]: time="2026-04-28T01:12:57.919557248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.921268 containerd[1599]: time="2026-04-28T01:12:57.919879058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:12:57.921268 containerd[1599]: time="2026-04-28T01:12:57.920313820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:12:57.921268 containerd[1599]: time="2026-04-28T01:12:57.920657574Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 28 01:12:57.921268 containerd[1599]: time="2026-04-28T01:12:57.920778622Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 28 01:12:57.921268 containerd[1599]: time="2026-04-28T01:12:57.920831463Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 01:12:57.937515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 01:12:57.947661 containerd[1599]: time="2026-04-28T01:12:57.946622567Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 28 01:12:57.947661 containerd[1599]: time="2026-04-28T01:12:57.946806810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 28 01:12:57.947661 containerd[1599]: time="2026-04-28T01:12:57.946896332Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 28 01:12:57.947661 containerd[1599]: time="2026-04-28T01:12:57.946916893Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 28 01:12:57.947661 containerd[1599]: time="2026-04-28T01:12:57.946934676Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 28 01:12:57.947661 containerd[1599]: time="2026-04-28T01:12:57.947111096Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 28 01:12:57.948019 containerd[1599]: time="2026-04-28T01:12:57.947880973Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948078872Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948099197Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948114211Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948129461Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948143388Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948160998Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948175864Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948191887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948212970Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948229203Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948271587Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948295510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948313226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.948376 containerd[1599]: time="2026-04-28T01:12:57.948328938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.948342515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.948356691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.948372859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.948388257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.948403205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.948952919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.949014446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.949032365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.949048646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949065 containerd[1599]: time="2026-04-28T01:12:57.949065872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949087317Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949118354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949132709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949144227Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949204006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949232320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949267569Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949282802Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 28 01:12:57.949297 containerd[1599]: time="2026-04-28T01:12:57.949293552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.949884 containerd[1599]: time="2026-04-28T01:12:57.949309593Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 28 01:12:57.949884 containerd[1599]: time="2026-04-28T01:12:57.949324636Z" level=info msg="NRI interface is disabled by configuration."
Apr 28 01:12:57.949884 containerd[1599]: time="2026-04-28T01:12:57.949337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 28 01:12:57.952812 containerd[1599]: time="2026-04-28T01:12:57.950193187Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 28 01:12:57.952812 containerd[1599]: time="2026-04-28T01:12:57.950302931Z" level=info msg="Connect containerd service"
Apr 28 01:12:57.952812 containerd[1599]: time="2026-04-28T01:12:57.950344615Z" level=info msg="using legacy CRI server"
Apr 28 01:12:57.952812 containerd[1599]: time="2026-04-28T01:12:57.950352189Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 28 01:12:57.952812 containerd[1599]: time="2026-04-28T01:12:57.950800373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 28 01:12:57.957930 containerd[1599]: time="2026-04-28T01:12:57.957854056Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized:
failed to load cni config" Apr 28 01:12:57.964221 containerd[1599]: time="2026-04-28T01:12:57.959966222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 01:12:57.964221 containerd[1599]: time="2026-04-28T01:12:57.960019597Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 01:12:57.969166 containerd[1599]: time="2026-04-28T01:12:57.968900168Z" level=info msg="Start subscribing containerd event" Apr 28 01:12:57.969754 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 01:12:57.969853 containerd[1599]: time="2026-04-28T01:12:57.969788703Z" level=info msg="Start recovering state" Apr 28 01:12:57.969988 containerd[1599]: time="2026-04-28T01:12:57.969957199Z" level=info msg="Start event monitor" Apr 28 01:12:57.970016 containerd[1599]: time="2026-04-28T01:12:57.969990109Z" level=info msg="Start snapshots syncer" Apr 28 01:12:57.970016 containerd[1599]: time="2026-04-28T01:12:57.970005254Z" level=info msg="Start cni network conf syncer for default" Apr 28 01:12:57.970016 containerd[1599]: time="2026-04-28T01:12:57.970013435Z" level=info msg="Start streaming server" Apr 28 01:12:57.970127 containerd[1599]: time="2026-04-28T01:12:57.970098829Z" level=info msg="containerd successfully booted in 0.204955s" Apr 28 01:12:57.990087 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 01:12:58.006935 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 01:12:58.007197 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 01:12:58.042963 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 01:12:58.067983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 01:12:58.094000 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 01:12:58.115235 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 01:12:58.118861 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 28 01:12:59.017410 tar[1596]: linux-amd64/README.md Apr 28 01:12:59.117198 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 01:13:00.923505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:13:00.942092 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 01:13:00.948778 systemd[1]: Startup finished in 14.308s (kernel) + 16.018s (userspace) = 30.326s. Apr 28 01:13:01.030222 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:13:01.285830 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 01:13:01.306230 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:46020.service - OpenSSH per-connection server daemon (10.0.0.1:46020). Apr 28 01:13:01.614524 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 46020 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:01.623354 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:01.704879 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 01:13:01.742635 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 01:13:01.749841 systemd-logind[1574]: New session 1 of user core. Apr 28 01:13:01.772594 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 01:13:01.795326 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 01:13:01.824797 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 01:13:02.562664 systemd[1704]: Queued start job for default target default.target. Apr 28 01:13:02.566230 systemd[1704]: Created slice app.slice - User Application Slice. Apr 28 01:13:02.566828 systemd[1704]: Reached target paths.target - Paths. 
Apr 28 01:13:02.566856 systemd[1704]: Reached target timers.target - Timers. Apr 28 01:13:02.579733 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 01:13:02.663388 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 01:13:02.666112 systemd[1704]: Reached target sockets.target - Sockets. Apr 28 01:13:02.667059 systemd[1704]: Reached target basic.target - Basic System. Apr 28 01:13:02.667124 systemd[1704]: Reached target default.target - Main User Target. Apr 28 01:13:02.667159 systemd[1704]: Startup finished in 793ms. Apr 28 01:13:02.668118 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 01:13:02.691674 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 01:13:02.871995 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:46022.service - OpenSSH per-connection server daemon (10.0.0.1:46022). Apr 28 01:13:03.072869 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 46022 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:03.092856 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:03.122266 systemd-logind[1574]: New session 2 of user core. Apr 28 01:13:03.198369 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 01:13:03.414413 sshd[1717]: pam_unix(sshd:session): session closed for user core Apr 28 01:13:03.435660 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:46028.service - OpenSSH per-connection server daemon (10.0.0.1:46028). Apr 28 01:13:03.442744 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:46022.service: Deactivated successfully. Apr 28 01:13:03.447851 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Apr 28 01:13:03.450721 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 01:13:03.454262 systemd-logind[1574]: Removed session 2. 
Apr 28 01:13:03.677618 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 46028 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:03.686107 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:03.720063 systemd-logind[1574]: New session 3 of user core. Apr 28 01:13:03.741365 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 01:13:03.898174 sshd[1722]: pam_unix(sshd:session): session closed for user core Apr 28 01:13:03.923262 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:46044.service - OpenSSH per-connection server daemon (10.0.0.1:46044). Apr 28 01:13:03.953834 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:46028.service: Deactivated successfully. Apr 28 01:13:03.962103 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 01:13:03.967008 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Apr 28 01:13:03.973322 systemd-logind[1574]: Removed session 3. Apr 28 01:13:04.197557 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 46044 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:04.208704 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:04.273231 systemd-logind[1574]: New session 4 of user core. Apr 28 01:13:04.296517 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 01:13:04.499484 sshd[1730]: pam_unix(sshd:session): session closed for user core Apr 28 01:13:04.514190 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:46056.service - OpenSSH per-connection server daemon (10.0.0.1:46056). Apr 28 01:13:04.521196 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:46044.service: Deactivated successfully. Apr 28 01:13:04.549197 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 01:13:04.685704 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Apr 28 01:13:04.848106 systemd-logind[1574]: Removed session 4. 
Apr 28 01:13:05.120260 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 46056 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:05.141390 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:05.204921 systemd-logind[1574]: New session 5 of user core. Apr 28 01:13:05.237568 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 01:13:05.421691 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 28 01:13:05.422486 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:13:05.502233 sudo[1745]: pam_unix(sudo:session): session closed for user root Apr 28 01:13:05.540173 sshd[1738]: pam_unix(sshd:session): session closed for user core Apr 28 01:13:05.556614 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Apr 28 01:13:05.558322 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:46056.service: Deactivated successfully. Apr 28 01:13:05.568905 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 01:13:05.589961 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Apr 28 01:13:05.605928 systemd-logind[1574]: Removed session 5. 
Apr 28 01:13:05.696003 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:05.699128 kubelet[1688]: E0428 01:13:05.699038 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:13:05.710884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:13:05.713278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:13:05.714966 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:05.800843 systemd-logind[1574]: New session 6 of user core. Apr 28 01:13:05.815223 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 01:13:06.029722 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 28 01:13:06.031478 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:13:06.059228 sudo[1757]: pam_unix(sudo:session): session closed for user root Apr 28 01:13:06.335241 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 28 01:13:06.349516 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:13:06.523564 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 28 01:13:06.601585 auditctl[1760]: No rules Apr 28 01:13:06.606099 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 01:13:06.616916 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
Apr 28 01:13:06.672794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 01:13:06.917871 augenrules[1779]: No rules Apr 28 01:13:06.946070 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 01:13:06.967895 sudo[1756]: pam_unix(sudo:session): session closed for user root Apr 28 01:13:06.983703 sshd[1747]: pam_unix(sshd:session): session closed for user core Apr 28 01:13:07.000990 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:46072.service - OpenSSH per-connection server daemon (10.0.0.1:46072). Apr 28 01:13:07.003252 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:46070.service: Deactivated successfully. Apr 28 01:13:07.018919 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 01:13:07.025762 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Apr 28 01:13:07.052265 systemd-logind[1574]: Removed session 6. Apr 28 01:13:07.516516 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 46072 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:13:07.590199 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:07.721112 systemd-logind[1574]: New session 7 of user core. Apr 28 01:13:07.751065 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 01:13:07.923074 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 01:13:07.927363 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 01:13:13.713742 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 01:13:13.769024 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 01:13:15.968821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 28 01:13:16.006105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:13:17.634220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:13:17.668708 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:13:19.689544 dockerd[1810]: time="2026-04-28T01:13:19.686259603Z" level=info msg="Starting up" Apr 28 01:13:19.723609 kubelet[1827]: E0428 01:13:19.723131 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:13:19.754950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:13:19.756098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:13:21.910189 systemd[1]: var-lib-docker-metacopy\x2dcheck2968125661-merged.mount: Deactivated successfully. Apr 28 01:13:22.190108 dockerd[1810]: time="2026-04-28T01:13:22.189357031Z" level=info msg="Loading containers: start." Apr 28 01:13:24.156262 kernel: Initializing XFRM netlink socket Apr 28 01:13:25.172864 systemd-networkd[1254]: docker0: Link UP Apr 28 01:13:25.709532 dockerd[1810]: time="2026-04-28T01:13:25.709230282Z" level=info msg="Loading containers: done." 
Apr 28 01:13:25.893790 dockerd[1810]: time="2026-04-28T01:13:25.892276534Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 01:13:25.897926 dockerd[1810]: time="2026-04-28T01:13:25.897793004Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 01:13:25.902017 dockerd[1810]: time="2026-04-28T01:13:25.899361467Z" level=info msg="Daemon has completed initialization" Apr 28 01:13:26.759159 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 01:13:26.770786 dockerd[1810]: time="2026-04-28T01:13:26.757862132Z" level=info msg="API listen on /run/docker.sock" Apr 28 01:13:29.922335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 01:13:29.984830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:13:30.585984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:13:30.596116 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:13:31.678066 kubelet[1989]: E0428 01:13:31.677256 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:13:31.685338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:13:31.686156 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:13:34.429791 containerd[1599]: time="2026-04-28T01:13:34.429697004Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 28 01:13:35.960559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351343435.mount: Deactivated successfully. Apr 28 01:13:41.941176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 28 01:13:41.973672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:13:42.645241 update_engine[1577]: I20260428 01:13:42.644035 1577 update_attempter.cc:509] Updating boot flags... Apr 28 01:13:42.707911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:13:42.761187 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:13:42.831513 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2078) Apr 28 01:13:43.079494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2078) Apr 28 01:13:43.525754 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2078) Apr 28 01:13:44.802557 kubelet[2073]: E0428 01:13:44.802013 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:13:44.953323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:13:44.955716 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:13:46.588169 containerd[1599]: time="2026-04-28T01:13:46.585478271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:13:46.594562 containerd[1599]: time="2026-04-28T01:13:46.593951400Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 28 01:13:46.647030 containerd[1599]: time="2026-04-28T01:13:46.646759866Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:13:46.721173 containerd[1599]: time="2026-04-28T01:13:46.721019305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:13:46.772789 containerd[1599]: time="2026-04-28T01:13:46.771952948Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 12.341588196s" Apr 28 01:13:46.772789 containerd[1599]: time="2026-04-28T01:13:46.772130959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 28 01:13:46.813539 containerd[1599]: time="2026-04-28T01:13:46.811690596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 28 01:13:55.005114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Apr 28 01:13:55.075687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:13:56.170357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:13:56.188594 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:13:56.364771 containerd[1599]: time="2026-04-28T01:13:56.364596583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:13:56.383778 containerd[1599]: time="2026-04-28T01:13:56.366536987Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 28 01:13:56.424567 containerd[1599]: time="2026-04-28T01:13:56.424158518Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:13:56.576754 containerd[1599]: time="2026-04-28T01:13:56.576614579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:13:56.608514 containerd[1599]: time="2026-04-28T01:13:56.608356242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 9.795948335s" Apr 28 01:13:56.608514 containerd[1599]: time="2026-04-28T01:13:56.608476994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns 
image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 28 01:13:56.662512 containerd[1599]: time="2026-04-28T01:13:56.661324291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 28 01:13:57.692372 kubelet[2109]: E0428 01:13:57.691473 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:13:57.701374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:13:57.702202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:14:05.481410 containerd[1599]: time="2026-04-28T01:14:05.480252251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:05.487359 containerd[1599]: time="2026-04-28T01:14:05.485336742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 28 01:14:05.511691 containerd[1599]: time="2026-04-28T01:14:05.511214917Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:05.885952 containerd[1599]: time="2026-04-28T01:14:05.884162280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:05.963115 containerd[1599]: time="2026-04-28T01:14:05.958393988Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id 
\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 9.296995364s" Apr 28 01:14:05.963115 containerd[1599]: time="2026-04-28T01:14:05.958498729Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 28 01:14:05.988062 containerd[1599]: time="2026-04-28T01:14:05.987694463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 28 01:14:07.995697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 28 01:14:08.118665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:14:09.003522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:14:09.029541 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:14:10.884896 kubelet[2140]: E0428 01:14:10.884845 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:14:10.894803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:14:10.895338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:14:20.927921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 28 01:14:20.962231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 01:14:21.253332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382792177.mount: Deactivated successfully. Apr 28 01:14:21.733236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:14:21.747729 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:14:23.578005 kubelet[2165]: E0428 01:14:23.577680 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:14:23.588048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:14:23.589609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:14:26.747750 containerd[1599]: time="2026-04-28T01:14:26.742706685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:26.855477 containerd[1599]: time="2026-04-28T01:14:26.848316735Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 28 01:14:27.092695 containerd[1599]: time="2026-04-28T01:14:27.091042747Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:27.278556 containerd[1599]: time="2026-04-28T01:14:27.276868838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:27.384525 containerd[1599]: time="2026-04-28T01:14:27.326901546Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 21.337387534s" Apr 28 01:14:27.385827 containerd[1599]: time="2026-04-28T01:14:27.384962931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 28 01:14:27.410200 containerd[1599]: time="2026-04-28T01:14:27.409923476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 28 01:14:30.008004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870506952.mount: Deactivated successfully. Apr 28 01:14:33.690984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 28 01:14:33.772130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:14:34.743013 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:14:34.743856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:14:36.612282 kubelet[2200]: E0428 01:14:36.609412 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:14:36.679680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:14:36.680409 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:14:46.952800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 28 01:14:47.079199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:14:47.832677 containerd[1599]: time="2026-04-28T01:14:47.832104674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:47.840229 containerd[1599]: time="2026-04-28T01:14:47.840040934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 28 01:14:47.849172 containerd[1599]: time="2026-04-28T01:14:47.849115886Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:48.125063 containerd[1599]: time="2026-04-28T01:14:48.124093303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:48.148371 containerd[1599]: time="2026-04-28T01:14:48.148053186Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 20.735911966s" Apr 28 01:14:48.148371 containerd[1599]: time="2026-04-28T01:14:48.148136277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 28 01:14:48.166973 containerd[1599]: time="2026-04-28T01:14:48.166339011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" 
Apr 28 01:14:48.207297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:14:48.255999 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:14:50.172773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620915365.mount: Deactivated successfully. Apr 28 01:14:50.221392 containerd[1599]: time="2026-04-28T01:14:50.219127148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:50.225443 containerd[1599]: time="2026-04-28T01:14:50.222943440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 28 01:14:50.292638 containerd[1599]: time="2026-04-28T01:14:50.291254307Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:50.369755 containerd[1599]: time="2026-04-28T01:14:50.367246561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:50.392717 containerd[1599]: time="2026-04-28T01:14:50.392043617Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.224213629s" Apr 28 01:14:50.392717 containerd[1599]: time="2026-04-28T01:14:50.392211758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 28 01:14:50.452946 
containerd[1599]: time="2026-04-28T01:14:50.451933189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 28 01:14:51.017060 kubelet[2262]: E0428 01:14:51.016745 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:14:51.026154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:14:51.048621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:14:52.857797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269149670.mount: Deactivated successfully. Apr 28 01:15:01.213255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 28 01:15:01.258094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:15:02.301900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:15:02.395265 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:15:04.409739 kubelet[2301]: E0428 01:15:04.408693 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:15:04.420559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:15:04.422386 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 01:15:13.399080 containerd[1599]: time="2026-04-28T01:15:13.398735119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:13.406288 containerd[1599]: time="2026-04-28T01:15:13.406123406Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 28 01:15:13.426524 containerd[1599]: time="2026-04-28T01:15:13.424410871Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:13.543742 containerd[1599]: time="2026-04-28T01:15:13.543473812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:13.584120 containerd[1599]: time="2026-04-28T01:15:13.583966667Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 23.129592227s" Apr 28 01:15:13.584120 containerd[1599]: time="2026-04-28T01:15:13.584070386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 28 01:15:14.691202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 28 01:15:14.725762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:15:15.632253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 01:15:15.654321 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:15:16.813943 kubelet[2380]: E0428 01:15:16.813333 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:15:16.821249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:15:16.821966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:15:26.960973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 28 01:15:27.012359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:15:28.009882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:15:28.086974 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:15:29.576279 kubelet[2418]: E0428 01:15:29.575622 2418 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:15:29.586288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:15:29.586543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:15:39.693503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 28 01:15:39.765005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 01:15:40.877516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:15:40.908049 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 01:15:42.593343 kubelet[2438]: E0428 01:15:42.593034 2438 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 01:15:42.600018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 01:15:42.600770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 01:15:52.600084 systemd[1]: kubelet.service: Stop job pending for unit, skipping automatic restart. Apr 28 01:15:52.600902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:15:52.656640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:15:53.175902 systemd[1]: Reloading requested from client PID 2457 ('systemctl') (unit session-7.scope)... Apr 28 01:15:53.176571 systemd[1]: Reloading... Apr 28 01:15:55.579537 zram_generator::config[2497]: No configuration found. Apr 28 01:15:59.895183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 01:16:02.412985 systemd[1]: Reloading finished in 9234 ms. Apr 28 01:16:03.511047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 01:16:03.520796 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 01:16:03.678297 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:16:03.710033 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 01:16:03.726876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:16:03.910160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:16:06.194055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:16:06.252496 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 01:16:08.597759 kubelet[2564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:16:08.598920 kubelet[2564]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 01:16:08.598920 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 28 01:16:08.598920 kubelet[2564]: I0428 01:16:08.598681 2564 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 01:16:14.317716 kubelet[2564]: I0428 01:16:14.317528 2564 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 01:16:14.321824 kubelet[2564]: I0428 01:16:14.317781 2564 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 01:16:14.377520 kubelet[2564]: I0428 01:16:14.376348 2564 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 01:16:15.041599 kubelet[2564]: E0428 01:16:15.041320 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:16:15.120817 kubelet[2564]: I0428 01:16:15.120186 2564 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 01:16:15.509628 kubelet[2564]: E0428 01:16:15.505303 2564 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 01:16:15.515100 kubelet[2564]: I0428 01:16:15.512768 2564 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 01:16:15.852825 kubelet[2564]: I0428 01:16:15.851281 2564 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 28 01:16:15.925210 kubelet[2564]: I0428 01:16:15.894771 2564 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 01:16:15.988604 kubelet[2564]: I0428 01:16:15.924973 2564 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 28 01:16:15.990795 kubelet[2564]: I0428 01:16:15.989682 2564 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 01:16:15.990795 
kubelet[2564]: I0428 01:16:15.990065 2564 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 01:16:16.003071 kubelet[2564]: I0428 01:16:16.001410 2564 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:16:16.115691 kubelet[2564]: I0428 01:16:16.114531 2564 kubelet.go:480] "Attempting to sync node with API server" Apr 28 01:16:16.120104 kubelet[2564]: I0428 01:16:16.119838 2564 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 01:16:16.123374 kubelet[2564]: I0428 01:16:16.122634 2564 kubelet.go:386] "Adding apiserver pod source" Apr 28 01:16:16.123374 kubelet[2564]: I0428 01:16:16.123055 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 01:16:16.133309 kubelet[2564]: E0428 01:16:16.133041 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:16:16.143884 kubelet[2564]: E0428 01:16:16.143316 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:16:16.189679 kubelet[2564]: I0428 01:16:16.187094 2564 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 01:16:16.281308 kubelet[2564]: I0428 01:16:16.281196 2564 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 01:16:16.287911 kubelet[2564]: W0428 
01:16:16.285885 2564 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 01:16:16.407105 kubelet[2564]: I0428 01:16:16.406240 2564 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 01:16:16.411843 kubelet[2564]: I0428 01:16:16.409442 2564 server.go:1289] "Started kubelet" Apr 28 01:16:16.413976 kubelet[2564]: I0428 01:16:16.413881 2564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 01:16:16.462642 kubelet[2564]: I0428 01:16:16.461012 2564 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 01:16:16.463894 kubelet[2564]: I0428 01:16:16.458969 2564 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 01:16:16.475075 kubelet[2564]: E0428 01:16:16.466748 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:16:16.484294 kubelet[2564]: I0428 01:16:16.484214 2564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 01:16:16.497469 kubelet[2564]: E0428 01:16:16.488963 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 28 01:16:16.497469 kubelet[2564]: I0428 01:16:16.495595 2564 server.go:317] "Adding debug handlers to kubelet server" Apr 28 01:16:16.497469 kubelet[2564]: I0428 01:16:16.495931 2564 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 01:16:16.501352 kubelet[2564]: I0428 01:16:16.499072 2564 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 01:16:16.501839 kubelet[2564]: I0428 01:16:16.501823 2564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 01:16:16.502272 kubelet[2564]: I0428 01:16:16.501890 2564 reconciler.go:26] "Reconciler: start to sync state" Apr 28 01:16:16.514930 kubelet[2564]: E0428 01:16:16.509028 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:16:16.525463 kubelet[2564]: E0428 01:16:16.523216 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Apr 28 01:16:16.590504 kubelet[2564]: I0428 01:16:16.583192 2564 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 01:16:16.591568 kubelet[2564]: E0428 01:16:16.591155 2564 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 01:16:16.596225 kubelet[2564]: E0428 01:16:16.596091 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:16:16.633512 kubelet[2564]: I0428 01:16:16.628103 2564 factory.go:223] Registration of the containerd container factory successfully Apr 28 01:16:16.633512 kubelet[2564]: I0428 01:16:16.628246 2564 factory.go:223] Registration of the systemd container factory successfully Apr 28 01:16:16.705216 kubelet[2564]: E0428 01:16:16.703561 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:16:16.754528 kubelet[2564]: E0428 01:16:16.753019 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Apr 28 01:16:16.787969 kubelet[2564]: I0428 01:16:16.787694 2564 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 01:16:16.800208 kubelet[2564]: I0428 01:16:16.799827 2564 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 01:16:16.802850 kubelet[2564]: I0428 01:16:16.801930 2564 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 01:16:16.806381 kubelet[2564]: I0428 01:16:16.805085 2564 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 01:16:16.806381 kubelet[2564]: I0428 01:16:16.806075 2564 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 01:16:16.806381 kubelet[2564]: E0428 01:16:16.806344 2564 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 01:16:16.818475 kubelet[2564]: E0428 01:16:16.818314 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:16:16.848533 kubelet[2564]: E0428 01:16:16.848456 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:16:16.922530 kubelet[2564]: E0428 01:16:16.920968 2564 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 01:16:16.923292 kubelet[2564]: E0428 01:16:16.923258 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:16:17.056497 kubelet[2564]: I0428 01:16:17.055935 2564 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 01:16:17.057057 kubelet[2564]: I0428 01:16:17.056983 2564 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 01:16:17.057536 kubelet[2564]: I0428 01:16:17.057166 2564 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:16:17.058001 kubelet[2564]: E0428 01:16:17.057561 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:16:17.076502 kubelet[2564]: I0428 01:16:17.076248 2564 policy_none.go:49] "None policy: Start" Apr 28 01:16:17.076502 kubelet[2564]: I0428 01:16:17.076401 2564 memory_manager.go:186] 
"Starting memorymanager" policy="None" Apr 28 01:16:17.076502 kubelet[2564]: I0428 01:16:17.076504 2564 state_mem.go:35] "Initializing new in-memory state store" Apr 28 01:16:17.082514 kubelet[2564]: E0428 01:16:17.081073 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:16:17.164080 kubelet[2564]: E0428 01:16:17.128092 2564 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 01:16:17.165481 kubelet[2564]: E0428 01:16:17.164545 2564 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:16:17.179354 kubelet[2564]: E0428 01:16:17.177026 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Apr 28 01:16:17.216600 kubelet[2564]: E0428 01:16:17.213617 2564 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 01:16:17.224252 kubelet[2564]: I0428 01:16:17.224015 2564 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 01:16:17.225407 kubelet[2564]: I0428 01:16:17.224576 2564 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 01:16:17.225407 kubelet[2564]: I0428 01:16:17.225269 2564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 01:16:17.269188 kubelet[2564]: E0428 01:16:17.268175 2564 eviction_manager.go:267] "eviction manager: failed to check if we 
have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 01:16:17.311930 kubelet[2564]: E0428 01:16:17.302975 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:16:17.419502 kubelet[2564]: I0428 01:16:17.415769 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:17.419502 kubelet[2564]: E0428 01:16:17.411099 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:16:17.506098 kubelet[2564]: E0428 01:16:17.505512 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Apr 28 01:16:17.753483 kubelet[2564]: I0428 01:16:17.741908 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/280ac78f0c06d5a5825d6eaa7709189b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"280ac78f0c06d5a5825d6eaa7709189b\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:16:17.754506 kubelet[2564]: I0428 01:16:17.754148 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/280ac78f0c06d5a5825d6eaa7709189b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"280ac78f0c06d5a5825d6eaa7709189b\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:16:17.754506 kubelet[2564]: I0428 01:16:17.754217 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/280ac78f0c06d5a5825d6eaa7709189b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"280ac78f0c06d5a5825d6eaa7709189b\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:16:17.761057 kubelet[2564]: E0428 01:16:17.760224 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:16:17.801520 kubelet[2564]: I0428 01:16:17.777732 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:17.820856 kubelet[2564]: E0428 01:16:17.820615 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Apr 28 01:16:17.824874 kubelet[2564]: E0428 01:16:17.822316 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:16:17.951831 kubelet[2564]: E0428 01:16:17.951328 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:17.981906 kubelet[2564]: I0428 01:16:17.977678 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 28 01:16:17.981906 kubelet[2564]: I0428 01:16:17.978351 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:16:17.981906 kubelet[2564]: I0428 01:16:17.978517 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:16:17.981906 kubelet[2564]: I0428 01:16:17.978571 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:16:17.981906 kubelet[2564]: I0428 01:16:17.978593 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:16:17.994728 kubelet[2564]: E0428 01:16:17.979992 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:17.994728 kubelet[2564]: E0428 01:16:17.980698 2564 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Apr 28 01:16:18.015488 kubelet[2564]: E0428 01:16:18.014086 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:18.062986 containerd[1599]: time="2026-04-28T01:16:18.061919703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:280ac78f0c06d5a5825d6eaa7709189b,Namespace:kube-system,Attempt:0,}" Apr 28 01:16:18.089704 kubelet[2564]: I0428 01:16:18.088290 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 01:16:18.089704 kubelet[2564]: E0428 01:16:18.088835 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:16:18.107575 kubelet[2564]: E0428 01:16:18.105948 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:18.369998 kubelet[2564]: E0428 01:16:18.369462 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:18.375895 kubelet[2564]: I0428 01:16:18.372373 2564 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:18.379714 kubelet[2564]: E0428 01:16:18.378395 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Apr 28 01:16:18.386243 containerd[1599]: time="2026-04-28T01:16:18.385890934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 28 01:16:18.519313 kubelet[2564]: E0428 01:16:18.516542 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:18.566337 containerd[1599]: time="2026-04-28T01:16:18.566204440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 28 01:16:19.254207 kubelet[2564]: I0428 01:16:19.251260 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:19.263211 kubelet[2564]: E0428 01:16:19.261230 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Apr 28 01:16:19.508733 kubelet[2564]: E0428 01:16:19.499992 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:16:19.644233 kubelet[2564]: E0428 01:16:19.642700 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="3.2s" Apr 28 01:16:20.069050 kubelet[2564]: E0428 01:16:20.069003 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:16:20.224072 kubelet[2564]: E0428 01:16:20.224014 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:16:20.313361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181821144.mount: Deactivated successfully. 
Apr 28 01:16:20.398627 kubelet[2564]: E0428 01:16:20.398384 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:16:20.496365 containerd[1599]: time="2026-04-28T01:16:20.496112999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:16:20.503403 containerd[1599]: time="2026-04-28T01:16:20.503307131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 01:16:20.515857 containerd[1599]: time="2026-04-28T01:16:20.515211145Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:16:20.533840 containerd[1599]: time="2026-04-28T01:16:20.533594430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 01:16:20.540992 containerd[1599]: time="2026-04-28T01:16:20.540877984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:16:20.541722 containerd[1599]: time="2026-04-28T01:16:20.541497036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 01:16:20.543333 containerd[1599]: time="2026-04-28T01:16:20.541954448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:16:20.906279 containerd[1599]: time="2026-04-28T01:16:20.902947237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:16:20.971569 containerd[1599]: time="2026-04-28T01:16:20.970104612Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.401880883s" Apr 28 01:16:20.981180 kubelet[2564]: I0428 01:16:20.980279 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:20.998060 containerd[1599]: time="2026-04-28T01:16:20.997220429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.929141002s" Apr 28 01:16:21.017030 kubelet[2564]: E0428 01:16:21.014959 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Apr 28 01:16:21.049177 containerd[1599]: time="2026-04-28T01:16:21.048910435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"311286\" in 2.632335116s" Apr 28 01:16:21.212653 kubelet[2564]: E0428 01:16:21.211832 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:16:22.622553 containerd[1599]: time="2026-04-28T01:16:22.621928549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:16:22.622553 containerd[1599]: time="2026-04-28T01:16:22.622124461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:16:22.622553 containerd[1599]: time="2026-04-28T01:16:22.622157485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:16:22.641648 containerd[1599]: time="2026-04-28T01:16:22.625118662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:16:22.641648 containerd[1599]: time="2026-04-28T01:16:22.625188093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:16:22.641648 containerd[1599]: time="2026-04-28T01:16:22.625201408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:16:22.641648 containerd[1599]: time="2026-04-28T01:16:22.625377210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:16:22.644644 containerd[1599]: time="2026-04-28T01:16:22.627459081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:16:22.646708 containerd[1599]: time="2026-04-28T01:16:22.645306649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:16:22.646708 containerd[1599]: time="2026-04-28T01:16:22.646298566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:16:22.646708 containerd[1599]: time="2026-04-28T01:16:22.646321133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:16:22.646708 containerd[1599]: time="2026-04-28T01:16:22.646659533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:16:22.913178 kubelet[2564]: E0428 01:16:22.896234 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="6.4s" Apr 28 01:16:23.594488 containerd[1599]: time="2026-04-28T01:16:23.593306467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\"" Apr 28 01:16:23.738229 containerd[1599]: time="2026-04-28T01:16:23.737793729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:280ac78f0c06d5a5825d6eaa7709189b,Namespace:kube-system,Attempt:0,} returns sandbox id \"99e7e50414f6b3e24d501b6fb1366d4032f90d00852d19522b655b406f201c6a\"" Apr 28 01:16:23.744824 containerd[1599]: time="2026-04-28T01:16:23.744181805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fe08e91e6fcb82650641b6964a31edf99b1d8f93d9caa876740f2a339c6908a\"" Apr 28 01:16:23.748269 kubelet[2564]: E0428 01:16:23.745957 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:23.777174 kubelet[2564]: E0428 01:16:23.776994 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:23.803065 kubelet[2564]: E0428 01:16:23.802735 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:23.960690 containerd[1599]: time="2026-04-28T01:16:23.959852828Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 01:16:23.961035 kubelet[2564]: E0428 01:16:23.960229 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:16:23.986012 containerd[1599]: time="2026-04-28T01:16:23.985537872Z" level=info msg="CreateContainer within sandbox \"5fe08e91e6fcb82650641b6964a31edf99b1d8f93d9caa876740f2a339c6908a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 01:16:24.070037 containerd[1599]: time="2026-04-28T01:16:24.069962860Z" level=info msg="CreateContainer within sandbox \"99e7e50414f6b3e24d501b6fb1366d4032f90d00852d19522b655b406f201c6a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 01:16:24.300533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028923610.mount: Deactivated successfully. Apr 28 01:16:24.314518 containerd[1599]: time="2026-04-28T01:16:24.310027270Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588\"" Apr 28 01:16:24.348472 kubelet[2564]: I0428 01:16:24.344644 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:24.351726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052760323.mount: Deactivated successfully. 
Apr 28 01:16:24.357358 kubelet[2564]: E0428 01:16:24.357233 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Apr 28 01:16:24.368520 kubelet[2564]: E0428 01:16:24.365036 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:16:24.415510 containerd[1599]: time="2026-04-28T01:16:24.411132224Z" level=info msg="StartContainer for \"f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588\"" Apr 28 01:16:24.449244 containerd[1599]: time="2026-04-28T01:16:24.446806635Z" level=info msg="CreateContainer within sandbox \"5fe08e91e6fcb82650641b6964a31edf99b1d8f93d9caa876740f2a339c6908a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5de345f0dd6e00b70eaf0c77f3ae0e0d08e90c28f96e6c77dac2ac8f5dc26adf\"" Apr 28 01:16:24.478141 containerd[1599]: time="2026-04-28T01:16:24.476200255Z" level=info msg="CreateContainer within sandbox \"99e7e50414f6b3e24d501b6fb1366d4032f90d00852d19522b655b406f201c6a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"65adc3813cabf7bca457c3ee10ef4ef879485bf78ec0a805fa8ce42bc3ef8cd6\"" Apr 28 01:16:24.525682 containerd[1599]: time="2026-04-28T01:16:24.525525178Z" level=info msg="StartContainer for \"5de345f0dd6e00b70eaf0c77f3ae0e0d08e90c28f96e6c77dac2ac8f5dc26adf\"" Apr 28 01:16:24.594700 containerd[1599]: time="2026-04-28T01:16:24.594143982Z" level=info msg="StartContainer for \"65adc3813cabf7bca457c3ee10ef4ef879485bf78ec0a805fa8ce42bc3ef8cd6\"" Apr 28 01:16:25.479968 kubelet[2564]: E0428 01:16:25.479848 2564 reflector.go:200] "Failed to watch" err="failed to 
list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:16:25.486525 kubelet[2564]: E0428 01:16:25.486126 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:16:25.809020 containerd[1599]: time="2026-04-28T01:16:25.787216607Z" level=info msg="StartContainer for \"5de345f0dd6e00b70eaf0c77f3ae0e0d08e90c28f96e6c77dac2ac8f5dc26adf\" returns successfully" Apr 28 01:16:25.813516 containerd[1599]: time="2026-04-28T01:16:25.812204563Z" level=info msg="StartContainer for \"f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588\" returns successfully" Apr 28 01:16:25.971210 kubelet[2564]: E0428 01:16:25.966013 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:16:26.222766 
containerd[1599]: time="2026-04-28T01:16:26.219847354Z" level=info msg="StartContainer for \"65adc3813cabf7bca457c3ee10ef4ef879485bf78ec0a805fa8ce42bc3ef8cd6\" returns successfully" Apr 28 01:16:27.356639 kubelet[2564]: E0428 01:16:27.352717 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:16:28.577974 kubelet[2564]: E0428 01:16:28.545888 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:28.785296 kubelet[2564]: E0428 01:16:28.727252 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:29.214693 kubelet[2564]: E0428 01:16:29.214466 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:29.269730 kubelet[2564]: E0428 01:16:29.269456 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:29.660985 kubelet[2564]: E0428 01:16:29.653734 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:29.678394 kubelet[2564]: E0428 01:16:29.678129 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:30.702555 kubelet[2564]: E0428 01:16:30.699218 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:30.702555 kubelet[2564]: E0428 
01:16:30.700168 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:30.703618 kubelet[2564]: E0428 01:16:30.703596 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:30.704010 kubelet[2564]: E0428 01:16:30.703765 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:30.706665 kubelet[2564]: E0428 01:16:30.706641 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:30.710854 kubelet[2564]: E0428 01:16:30.706630 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:30.920533 kubelet[2564]: I0428 01:16:30.918774 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:16:32.032515 kubelet[2564]: E0428 01:16:32.032111 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:32.035357 kubelet[2564]: E0428 01:16:32.034205 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:32.035357 kubelet[2564]: E0428 01:16:32.034279 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:32.048137 kubelet[2564]: E0428 01:16:32.039918 2564 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:33.406908 kubelet[2564]: E0428 01:16:33.404279 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:33.467793 kubelet[2564]: E0428 01:16:33.465619 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:37.410250 kubelet[2564]: E0428 01:16:37.407711 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:16:37.978611 kubelet[2564]: E0428 01:16:37.973949 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:16:38.007453 kubelet[2564]: E0428 01:16:38.007017 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:39.417754 kubelet[2564]: E0428 01:16:39.414329 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:16:40.141902 kubelet[2564]: E0428 01:16:40.141696 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:16:41.017576 kubelet[2564]: E0428 01:16:41.014584 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 01:16:41.787643 update_engine[1577]: I20260428 01:16:41.727236 1577 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 01:16:41.787643 update_engine[1577]: I20260428 01:16:41.727392 1577 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 01:16:41.787643 update_engine[1577]: I20260428 01:16:41.780241 1577 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 01:16:41.787643 update_engine[1577]: I20260428 01:16:41.787228 1577 omaha_request_params.cc:62] Current group set to lts Apr 28 01:16:41.787643 update_engine[1577]: I20260428 01:16:41.787393 1577 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 01:16:41.787643 update_engine[1577]: I20260428 01:16:41.787402 1577 update_attempter.cc:643] Scheduling an action processor start. 
Apr 28 01:16:41.797513 update_engine[1577]: I20260428 01:16:41.791623 1577 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 01:16:41.797513 update_engine[1577]: I20260428 01:16:41.791846 1577 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 01:16:41.797513 update_engine[1577]: I20260428 01:16:41.792043 1577 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 01:16:41.797513 update_engine[1577]: I20260428 01:16:41.792052 1577 omaha_request_action.cc:272] Request: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: Apr 28 01:16:41.797513 update_engine[1577]: I20260428 01:16:41.792058 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:16:41.803450 update_engine[1577]: I20260428 01:16:41.802243 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:16:41.804535 update_engine[1577]: I20260428 01:16:41.804440 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 01:16:41.805210 locksmithd[1638]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 28 01:16:41.812751 update_engine[1577]: E20260428 01:16:41.812579 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 28 01:16:41.812751 update_engine[1577]: I20260428 01:16:41.812716 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 28 01:16:43.103475 kubelet[2564]: E0428 01:16:43.103036 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 01:16:43.519681 kubelet[2564]: E0428 01:16:43.517792 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 01:16:43.705492 kubelet[2564]: E0428 01:16:43.702926 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:16:43.770804 kubelet[2564]: E0428 01:16:43.764480 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:16:44.989264 kubelet[2564]: E0428 01:16:44.987025 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:16:45.520120 kubelet[2564]: E0428 01:16:45.514343 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:16:46.654664 kubelet[2564]: E0428 01:16:46.652387 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 01:16:47.421460 kubelet[2564]: E0428 01:16:47.421158 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:16:48.423155 kubelet[2564]: I0428 01:16:48.422937 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:16:50.590269 kubelet[2564]: E0428 01:16:50.590199 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:16:50.625927 kubelet[2564]: E0428 01:16:50.621572 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:16:51.647540 update_engine[1577]: I20260428 01:16:51.647347 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 28 01:16:51.649520 update_engine[1577]: I20260428 01:16:51.649490 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 28 01:16:51.650608 update_engine[1577]: I20260428 01:16:51.650542 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 28 01:16:51.661748 update_engine[1577]: E20260428 01:16:51.661538 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 28 01:16:51.663050 update_engine[1577]: I20260428 01:16:51.662953 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 28 01:16:56.505531 kubelet[2564]: E0428 01:16:56.502766 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 01:16:57.440982 kubelet[2564]: E0428 01:16:57.426152 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:16:58.780729 kubelet[2564]: E0428 01:16:58.772305 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:17:01.636257 update_engine[1577]: I20260428 01:17:01.636094 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 28 01:17:01.637868 update_engine[1577]: I20260428 01:17:01.636579 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 28 01:17:01.641924 update_engine[1577]: I20260428 01:17:01.637959 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 28 01:17:01.663588 update_engine[1577]: E20260428 01:17:01.662943 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 28 01:17:01.663588 update_engine[1577]: I20260428 01:17:01.663134 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 28 01:17:05.577193 kubelet[2564]: E0428 01:17:05.569335 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:17:06.021775 kubelet[2564]: I0428 01:17:06.021670 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:17:07.501841 kubelet[2564]: E0428 01:17:07.500447 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:17:08.011740 kubelet[2564]: E0428 01:17:07.983368 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 01:17:08.043899 kubelet[2564]: E0428 01:17:08.027968 2564 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 01:17:09.998611 kubelet[2564]: E0428 01:17:09.996101 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:17:11.653202 update_engine[1577]: I20260428 01:17:11.650923 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 28 01:17:11.654207 update_engine[1577]: I20260428 01:17:11.653501 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 28 01:17:11.654207 update_engine[1577]: I20260428 01:17:11.653795 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 28 01:17:11.667112 update_engine[1577]: E20260428 01:17:11.665920 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 28 01:17:11.675344 update_engine[1577]: I20260428 01:17:11.672473 1577 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 28 01:17:11.675344 update_engine[1577]: I20260428 01:17:11.672699 1577 omaha_request_action.cc:617] Omaha request response:
Apr 28 01:17:11.675344 update_engine[1577]: E20260428 01:17:11.674377 1577 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 28 01:17:11.677042 update_engine[1577]: I20260428 01:17:11.676219 1577 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.681125 1577 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.681339 1577 update_attempter.cc:306] Processing Done.
Apr 28 01:17:11.683620 update_engine[1577]: E20260428 01:17:11.681874 1577 update_attempter.cc:619] Update failed.
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682036 1577 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682044 1577 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682051 1577 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682718 1577 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682976 1577 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682983 1577 omaha_request_action.cc:272] Request:
Apr 28 01:17:11.683620 update_engine[1577]:
Apr 28 01:17:11.683620 update_engine[1577]:
Apr 28 01:17:11.683620 update_engine[1577]:
Apr 28 01:17:11.683620 update_engine[1577]:
Apr 28 01:17:11.683620 update_engine[1577]:
Apr 28 01:17:11.683620 update_engine[1577]:
Apr 28 01:17:11.683620 update_engine[1577]: I20260428 01:17:11.682991 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 28 01:17:11.685798 locksmithd[1638]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 28 01:17:11.689687 update_engine[1577]: I20260428 01:17:11.688585 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 28 01:17:11.692543 update_engine[1577]: I20260428 01:17:11.691377 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 28 01:17:11.702773 update_engine[1577]: E20260428 01:17:11.702637 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.707868 1577 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.707991 1577 omaha_request_action.cc:617] Omaha request response:
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.708003 1577 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.708011 1577 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.708016 1577 update_attempter.cc:306] Processing Done.
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.708024 1577 update_attempter.cc:310] Error event sent.
Apr 28 01:17:11.709489 update_engine[1577]: I20260428 01:17:11.708052 1577 update_check_scheduler.cc:74] Next update check in 41m49s
Apr 28 01:17:11.718051 locksmithd[1638]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 28 01:17:13.593472 kubelet[2564]: E0428 01:17:13.589010 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 01:17:15.600912 kubelet[2564]: E0428 01:17:15.598945 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 01:17:16.083310 kubelet[2564]: E0428 01:17:16.083102 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:17:17.340714 kubelet[2564]: E0428 01:17:17.331127 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 01:17:17.517775 kubelet[2564]: E0428 01:17:17.515963 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:17:20.671484 kubelet[2564]: E0428 01:17:20.659406 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 01:17:23.206588 kubelet[2564]: I0428 01:17:23.203967 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:17:25.677879 kubelet[2564]: E0428 01:17:25.666913 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:17:27.589735 kubelet[2564]: E0428 01:17:27.581329 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:17:30.811729 kubelet[2564]: E0428 01:17:30.809163 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 01:17:33.305053 kubelet[2564]: E0428 01:17:33.304105 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:17:37.603111 kubelet[2564]: E0428 01:17:37.598465 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:17:40.530054 kubelet[2564]: I0428 01:17:40.524481 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:17:45.763536 kubelet[2564]: E0428 01:17:45.703253 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:17:47.693855 kubelet[2564]: E0428 01:17:47.691895 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:17:47.866170 kubelet[2564]: E0428 01:17:47.866026 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 01:17:50.195471 kubelet[2564]: E0428 01:17:50.194850 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 01:17:50.247791 kubelet[2564]: E0428 01:17:50.243139 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:17:50.263829 kubelet[2564]: E0428 01:17:50.262101 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:17:50.574011 kubelet[2564]: E0428 01:17:50.569221 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:17:57.698248 kubelet[2564]: E0428 01:17:57.697285 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:17:57.723893 kubelet[2564]: I0428 01:17:57.721096 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:17:59.168368 kubelet[2564]: E0428 01:17:59.167268 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:17:59.173764 kubelet[2564]: E0428 01:17:59.173632 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:18:04.891883 kubelet[2564]: E0428 01:18:04.889871 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 01:18:05.818748 kubelet[2564]: E0428 01:18:05.812172 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:18:07.680617 kubelet[2564]: E0428 01:18:07.675330 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:18:07.764948 kubelet[2564]: E0428 01:18:07.728954 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:18:07.800230 kubelet[2564]: E0428 01:18:07.798363 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:18:08.583454 kubelet[2564]: E0428 01:18:08.583182 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 01:18:13.926523 kubelet[2564]: E0428 01:18:13.925484 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:18:13.972805 kubelet[2564]: E0428 01:18:13.966457 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 01:18:13.979549 kubelet[2564]: E0428 01:18:13.977394 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:18:14.981087 kubelet[2564]: I0428 01:18:14.976358 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:18:17.867993 kubelet[2564]: E0428 01:18:17.855912 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:18:19.904811 kubelet[2564]: E0428 01:18:19.903763 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 01:18:21.980076 kubelet[2564]: E0428 01:18:21.972937 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 01:18:22.279802 kubelet[2564]: E0428 01:18:22.261910 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 01:18:25.148844 kubelet[2564]: E0428 01:18:25.147111 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:18:25.954054 kubelet[2564]: E0428 01:18:25.915356 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:18:27.914831 kubelet[2564]: E0428 01:18:27.912354 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:18:32.202805 kubelet[2564]: I0428 01:18:32.202330 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:18:37.969749 kubelet[2564]: E0428 01:18:37.968621 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:18:39.249411 kubelet[2564]: E0428 01:18:39.242673 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 01:18:42.230637 kubelet[2564]: E0428 01:18:42.230202 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:18:46.060069 kubelet[2564]: E0428 01:18:46.059763 2564 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:18:48.077150 kubelet[2564]: E0428 01:18:48.045193 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:18:49.421988 kubelet[2564]: I0428 01:18:49.421732 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:18:54.305765 kubelet[2564]: E0428 01:18:54.305539 2564 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 01:18:56.187929 kubelet[2564]: E0428 01:18:56.184995 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:18:56.274757 kubelet[2564]: E0428 01:18:56.274526 2564 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 01:18:58.082995 kubelet[2564]: E0428 01:18:58.078765 2564 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 01:18:59.509266 kubelet[2564]: E0428 01:18:59.498927 2564 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 01:18:59.510891 kubelet[2564]: E0428 01:18:59.509088 2564 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 01:19:00.159809 kubelet[2564]: E0428 01:19:00.156660 2564 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 01:19:00.198818 kubelet[2564]: E0428 01:19:00.194151 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:19:04.012024 kubelet[2564]: E0428 01:19:04.007682 2564 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa6061de449849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,LastTimestamp:2026-04-28 01:16:16.408213577 +0000 UTC m=+10.105372856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:19:04.634403 kubelet[2564]: E0428 01:19:04.633609 2564 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 28 01:19:05.092122 kubelet[2564]: E0428 01:19:05.091112 2564 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa6061e2cd86b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:16:16.484296377 +0000 UTC m=+10.181455656,LastTimestamp:2026-04-28 01:16:16.484296377 +0000 UTC m=+10.181455656,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:19:06.669130 kubelet[2564]: I0428 01:19:06.637078 2564 apiserver.go:52] "Watching apiserver"
Apr 28 01:19:06.995493 kubelet[2564]: I0428 01:19:06.988358 2564 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:19:07.356315 kubelet[2564]: I0428 01:19:07.323089 2564 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 28 01:19:07.365135 kubelet[2564]: E0428 01:19:07.363502 2564 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Apr 28 01:19:07.680004 kubelet[2564]: I0428 01:19:07.678833 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 01:19:07.903358 kubelet[2564]: I0428 01:19:07.877143 2564 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 01:19:09.384590 kubelet[2564]: I0428 01:19:09.380281 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:19:09.905543 kubelet[2564]: E0428 01:19:09.901832 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.081s"
Apr 28 01:19:10.086128 kubelet[2564]: E0428 01:19:10.086005 2564 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 28 01:19:10.352544 kubelet[2564]: E0428 01:19:10.290252 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:19:11.291814 kubelet[2564]: E0428 01:19:11.284205 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:11.647210 kubelet[2564]: E0428 01:19:11.644968 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:19:11.979513 kubelet[2564]: I0428 01:19:11.970401 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 01:19:14.245505 kubelet[2564]: E0428 01:19:14.242269 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:19:16.451720 kubelet[2564]: E0428 01:19:16.442362 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:19.657666 kubelet[2564]: I0428 01:19:19.657217 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.656756036 podStartE2EDuration="7.656756036s" podCreationTimestamp="2026-04-28 01:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:19:19.619927019 +0000 UTC m=+193.317086320" watchObservedRunningTime="2026-04-28 01:19:19.656756036 +0000 UTC m=+193.353915322"
Apr 28 01:19:21.674285 kubelet[2564]: E0428 01:19:21.633999 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:23.082107 kubelet[2564]: I0428 01:19:23.077020 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=13.076786244000001 podStartE2EDuration="13.076786244s" podCreationTimestamp="2026-04-28 01:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:19:22.909110925 +0000 UTC m=+196.606270218" watchObservedRunningTime="2026-04-28 01:19:23.076786244 +0000 UTC m=+196.773945522"
Apr 28 01:19:23.093006 kubelet[2564]: I0428 01:19:23.086887 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=15.081688281 podStartE2EDuration="15.081688281s" podCreationTimestamp="2026-04-28 01:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:19:21.420905526 +0000 UTC m=+195.118064810" watchObservedRunningTime="2026-04-28 01:19:23.081688281 +0000 UTC m=+196.778847562"
Apr 28 01:19:26.978157 kubelet[2564]: E0428 01:19:26.962284 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:32.120082 kubelet[2564]: E0428 01:19:32.109923 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:37.253366 kubelet[2564]: E0428 01:19:37.237606 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:42.375218 kubelet[2564]: E0428 01:19:42.372789 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:47.480022 kubelet[2564]: E0428 01:19:47.478858 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:19:52.662715 kubelet[2564]: E0428 01:19:52.660578 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28
01:19:58.184561 kubelet[2564]: E0428 01:19:58.180020 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:03.325847 kubelet[2564]: E0428 01:20:03.293062 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:08.394482 kubelet[2564]: E0428 01:20:08.389276 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:13.505162 kubelet[2564]: E0428 01:20:13.501409 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:18.604841 kubelet[2564]: E0428 01:20:18.603205 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:23.824796 kubelet[2564]: E0428 01:20:23.794045 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:29.016773 kubelet[2564]: E0428 01:20:29.016290 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:30.919590 kubelet[2564]: E0428 01:20:30.918947 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:32.057470 
kubelet[2564]: E0428 01:20:32.054866 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:33.163412 kubelet[2564]: E0428 01:20:33.162911 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:34.292827 kubelet[2564]: E0428 01:20:34.268015 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:39.491185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588-rootfs.mount: Deactivated successfully. Apr 28 01:20:39.630475 containerd[1599]: time="2026-04-28T01:20:39.619008782Z" level=info msg="shim disconnected" id=f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588 namespace=k8s.io Apr 28 01:20:39.630475 containerd[1599]: time="2026-04-28T01:20:39.621184882Z" level=warning msg="cleaning up after shim disconnected" id=f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588 namespace=k8s.io Apr 28 01:20:39.630475 containerd[1599]: time="2026-04-28T01:20:39.621444574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:20:39.651669 kubelet[2564]: E0428 01:20:39.647688 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:40.115389 containerd[1599]: time="2026-04-28T01:20:40.115210238Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:20:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" 
namespace=k8s.io Apr 28 01:20:41.560040 kubelet[2564]: I0428 01:20:41.559651 2564 scope.go:117] "RemoveContainer" containerID="f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588" Apr 28 01:20:41.579738 kubelet[2564]: E0428 01:20:41.568179 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:41.828995 containerd[1599]: time="2026-04-28T01:20:41.826203748Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 28 01:20:42.495492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658145152.mount: Deactivated successfully. Apr 28 01:20:42.696597 containerd[1599]: time="2026-04-28T01:20:42.675236939Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec\"" Apr 28 01:20:42.921023 containerd[1599]: time="2026-04-28T01:20:42.919131528Z" level=info msg="StartContainer for \"b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec\"" Apr 28 01:20:44.207450 kubelet[2564]: E0428 01:20:44.161027 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s" Apr 28 01:20:44.215197 systemd[1]: run-containerd-runc-k8s.io-b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec-runc.lNQK2M.mount: Deactivated successfully. 
Apr 28 01:20:45.043709 kubelet[2564]: E0428 01:20:45.039318 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:45.995886 containerd[1599]: time="2026-04-28T01:20:45.994839960Z" level=info msg="StartContainer for \"b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec\" returns successfully" Apr 28 01:20:48.968508 kubelet[2564]: E0428 01:20:48.966831 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:50.193501 kubelet[2564]: E0428 01:20:50.192606 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:20:50.474092 kubelet[2564]: E0428 01:20:50.467343 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:51.949668 kubelet[2564]: E0428 01:20:51.946615 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s" Apr 28 01:20:53.395743 kubelet[2564]: E0428 01:20:53.388305 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:20:55.412952 kubelet[2564]: E0428 01:20:55.405403 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:00.459875 kubelet[2564]: E0428 01:21:00.459754 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:03.566875 kubelet[2564]: E0428 01:21:03.566231 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:05.915724 kubelet[2564]: E0428 01:21:05.899282 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:08.658624 kubelet[2564]: E0428 01:21:08.657884 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.603s" Apr 28 01:21:12.094302 kubelet[2564]: E0428 01:21:12.024547 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:12.094302 kubelet[2564]: E0428 01:21:12.071294 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.984s" Apr 28 01:21:14.377717 kubelet[2564]: E0428 01:21:14.365738 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.089s" Apr 28 01:21:20.315913 kubelet[2564]: E0428 01:21:20.314795 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:21.303646 kubelet[2564]: E0428 01:21:21.303568 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.913s" Apr 28 01:21:22.581536 kubelet[2564]: E0428 01:21:22.579727 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="1.268s" Apr 28 01:21:23.214268 kubelet[2564]: E0428 01:21:23.213057 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:25.022751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec-rootfs.mount: Deactivated successfully. Apr 28 01:21:25.053046 containerd[1599]: time="2026-04-28T01:21:25.029212616Z" level=info msg="shim disconnected" id=b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec namespace=k8s.io Apr 28 01:21:25.053046 containerd[1599]: time="2026-04-28T01:21:25.029922996Z" level=warning msg="cleaning up after shim disconnected" id=b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec namespace=k8s.io Apr 28 01:21:25.053046 containerd[1599]: time="2026-04-28T01:21:25.029938436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:21:25.624597 kubelet[2564]: E0428 01:21:25.597978 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:27.209556 kubelet[2564]: I0428 01:21:27.203950 2564 scope.go:117] "RemoveContainer" containerID="f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588" Apr 28 01:21:27.222479 kubelet[2564]: I0428 01:21:27.220827 2564 scope.go:117] "RemoveContainer" containerID="b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec" Apr 28 01:21:27.222479 kubelet[2564]: E0428 01:21:27.222152 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:27.228699 kubelet[2564]: E0428 01:21:27.223823 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 28 01:21:27.340727 containerd[1599]: time="2026-04-28T01:21:27.339194773Z" level=info msg="RemoveContainer for \"f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588\"" Apr 28 01:21:27.403933 containerd[1599]: time="2026-04-28T01:21:27.403158333Z" level=info msg="RemoveContainer for \"f7792d99ccc3f3e44cf7349cd0996d0c4bc160ca4eb527931910d4ef37803588\" returns successfully" Apr 28 01:21:28.773410 kubelet[2564]: I0428 01:21:28.773227 2564 scope.go:117] "RemoveContainer" containerID="b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec" Apr 28 01:21:28.825844 kubelet[2564]: E0428 01:21:28.825676 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:28.930359 kubelet[2564]: E0428 01:21:28.929770 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 28 01:21:30.723319 kubelet[2564]: E0428 01:21:30.718395 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:35.814360 kubelet[2564]: E0428 01:21:35.811308 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:38.120024 kubelet[2564]: E0428 01:21:38.119381 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:40.948954 kubelet[2564]: E0428 01:21:40.947753 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:41.964616 kubelet[2564]: I0428 01:21:41.950726 2564 scope.go:117] "RemoveContainer" containerID="b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec" Apr 28 01:21:41.964616 kubelet[2564]: E0428 01:21:41.957781 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:42.552656 containerd[1599]: time="2026-04-28T01:21:42.552156736Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 28 01:21:43.027549 containerd[1599]: time="2026-04-28T01:21:43.026289373Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a\"" Apr 28 01:21:43.085940 containerd[1599]: time="2026-04-28T01:21:43.080345049Z" level=info msg="StartContainer for \"8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a\"" Apr 28 01:21:45.976821 kubelet[2564]: E0428 01:21:45.975267 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Apr 28 01:21:46.411651 containerd[1599]: time="2026-04-28T01:21:46.409754580Z" level=info msg="StartContainer for \"8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a\" returns successfully" Apr 28 01:21:48.131830 kubelet[2564]: E0428 01:21:48.130270 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:50.175675 kubelet[2564]: E0428 01:21:50.175491 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:50.877344 kubelet[2564]: E0428 01:21:50.873595 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:51.047923 kubelet[2564]: E0428 01:21:51.046522 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:21:53.474192 kubelet[2564]: E0428 01:21:53.473305 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:21:56.462518 kubelet[2564]: E0428 01:21:56.462268 2564 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.653s" Apr 28 01:21:56.491886 kubelet[2564]: E0428 01:21:56.488847 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:22:01.668609 kubelet[2564]: E0428 01:22:01.662937 2564 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:22:04.061601 kubelet[2564]: E0428 01:22:04.055989 2564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:22:05.193577 systemd[1]: Reloading requested from client PID 2996 ('systemctl') (unit session-7.scope)... Apr 28 01:22:05.193680 systemd[1]: Reloading... Apr 28 01:22:06.810558 zram_generator::config[3033]: No configuration found. Apr 28 01:22:06.862201 kubelet[2564]: E0428 01:22:06.861688 2564 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:22:09.047140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 01:22:10.025284 systemd[1]: Reloading finished in 4826 ms. Apr 28 01:22:10.397329 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:22:10.489546 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 01:22:10.492965 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:22:10.578271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:22:12.604614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:22:12.688103 (kubelet)[3091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 01:22:15.368372 kubelet[3091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:22:15.368372 kubelet[3091]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 01:22:15.368372 kubelet[3091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:22:15.375556 kubelet[3091]: I0428 01:22:15.366058 3091 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 01:22:15.892684 kubelet[3091]: I0428 01:22:15.881902 3091 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 01:22:15.892684 kubelet[3091]: I0428 01:22:15.888106 3091 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 01:22:15.940855 kubelet[3091]: I0428 01:22:15.937479 3091 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 01:22:16.049880 kubelet[3091]: I0428 01:22:16.049719 3091 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 01:22:16.452016 kubelet[3091]: I0428 01:22:16.451498 3091 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 01:22:16.970937 kubelet[3091]: E0428 01:22:16.969360 3091 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 01:22:16.975808 kubelet[3091]: I0428 01:22:16.972287 3091 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 28 01:22:17.153510 kubelet[3091]: I0428 01:22:17.153309 3091 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 28 01:22:17.191057 kubelet[3091]: I0428 01:22:17.187982 3091 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 01:22:17.199843 kubelet[3091]: I0428 01:22:17.191840 3091 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMan
agerPolicyOptions":null,"CgroupVersion":1} Apr 28 01:22:17.200598 kubelet[3091]: I0428 01:22:17.200357 3091 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 01:22:17.200668 kubelet[3091]: I0428 01:22:17.200652 3091 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 01:22:17.202800 kubelet[3091]: I0428 01:22:17.200997 3091 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:22:17.225013 kubelet[3091]: I0428 01:22:17.220976 3091 kubelet.go:480] "Attempting to sync node with API server" Apr 28 01:22:17.259244 kubelet[3091]: I0428 01:22:17.229387 3091 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 01:22:17.264977 kubelet[3091]: I0428 01:22:17.264666 3091 kubelet.go:386] "Adding apiserver pod source" Apr 28 01:22:17.267987 kubelet[3091]: I0428 01:22:17.267582 3091 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 01:22:17.475824 kubelet[3091]: I0428 01:22:17.475555 3091 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 01:22:17.499883 kubelet[3091]: I0428 01:22:17.499490 3091 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 01:22:17.992638 kubelet[3091]: I0428 01:22:17.989822 3091 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 01:22:18.014507 kubelet[3091]: I0428 01:22:18.012341 3091 server.go:1289] "Started kubelet" Apr 28 01:22:18.025774 kubelet[3091]: I0428 01:22:18.023862 3091 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 01:22:18.081670 kubelet[3091]: I0428 01:22:18.079324 3091 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 01:22:18.271303 kubelet[3091]: I0428 01:22:18.260376 3091 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 01:22:18.463134 kubelet[3091]: I0428 01:22:18.461937 3091 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 01:22:18.566903 kubelet[3091]: E0428 01:22:18.561044 3091 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 01:22:18.593548 kubelet[3091]: I0428 01:22:18.591868 3091 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 01:22:18.603694 kubelet[3091]: I0428 01:22:18.600380 3091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 01:22:18.630563 kubelet[3091]: I0428 01:22:18.627782 3091 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 28 01:22:18.666118 kubelet[3091]: I0428 01:22:18.660371 3091 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 28 01:22:18.720622 kubelet[3091]: I0428 01:22:18.719014 3091 reconciler.go:26] "Reconciler: start to sync state"
Apr 28 01:22:18.823211 kubelet[3091]: I0428 01:22:18.821658 3091 factory.go:223] Registration of the systemd container factory successfully
Apr 28 01:22:18.893976 kubelet[3091]: I0428 01:22:18.887980 3091 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 01:22:19.164030 kubelet[3091]: I0428 01:22:19.160833 3091 factory.go:223] Registration of the containerd container factory successfully
Apr 28 01:22:19.395523 kubelet[3091]: I0428 01:22:19.379065 3091 apiserver.go:52] "Watching apiserver"
Apr 28 01:22:19.610679 kubelet[3091]: I0428 01:22:19.576882 3091 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 28 01:22:19.709609 kubelet[3091]: I0428 01:22:19.707131 3091 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 28 01:22:19.709609 kubelet[3091]: I0428 01:22:19.707521 3091 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 28 01:22:19.721924 kubelet[3091]: I0428 01:22:19.718905 3091 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 01:22:19.721924 kubelet[3091]: I0428 01:22:19.719039 3091 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 28 01:22:19.726156 kubelet[3091]: E0428 01:22:19.723339 3091 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 01:22:19.835174 kubelet[3091]: E0428 01:22:19.834969 3091 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 01:22:20.080832 kubelet[3091]: E0428 01:22:20.073027 3091 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 01:22:20.492678 kubelet[3091]: E0428 01:22:20.488356 3091 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 01:22:21.320947 kubelet[3091]: E0428 01:22:21.309212 3091 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 01:22:23.014768 kubelet[3091]: E0428 01:22:23.010903 3091 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 28 01:22:24.925967 kubelet[3091]: I0428 01:22:24.922125 3091 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 28 01:22:24.925967 kubelet[3091]: I0428 01:22:24.922255 3091 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 28 01:22:24.925967 kubelet[3091]: I0428 01:22:24.922370 3091 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 01:22:24.977928 kubelet[3091]: I0428 01:22:24.975084 3091 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 28 01:22:24.984543 kubelet[3091]: I0428 01:22:24.975758 3091 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 28 01:22:24.984913 kubelet[3091]: I0428 01:22:24.984771 3091 policy_none.go:49] "None policy: Start"
Apr 28 01:22:24.990203 kubelet[3091]: I0428 01:22:24.989724 3091 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 28 01:22:24.994552 kubelet[3091]: I0428 01:22:24.993877 3091 state_mem.go:35] "Initializing new in-memory state store"
Apr 28 01:22:25.029704 kubelet[3091]: I0428 01:22:25.027026 3091 state_mem.go:75] "Updated machine memory state"
Apr 28 01:22:25.165164 kubelet[3091]: E0428 01:22:25.164033 3091 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 28 01:22:25.181947 kubelet[3091]: I0428 01:22:25.180333 3091 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 28 01:22:25.187976 kubelet[3091]: I0428 01:22:25.185649 3091 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 28 01:22:25.205192 kubelet[3091]: I0428 01:22:25.205037 3091 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 28 01:22:25.380834 kubelet[3091]: E0428 01:22:25.379174 3091 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 28 01:22:25.987759 kubelet[3091]: I0428 01:22:25.987036 3091 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 01:22:26.373244 kubelet[3091]: I0428 01:22:26.352081 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 01:22:26.393145 kubelet[3091]: I0428 01:22:26.379302 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/280ac78f0c06d5a5825d6eaa7709189b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"280ac78f0c06d5a5825d6eaa7709189b\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 01:22:26.628208 kubelet[3091]: I0428 01:22:26.624908 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/280ac78f0c06d5a5825d6eaa7709189b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"280ac78f0c06d5a5825d6eaa7709189b\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 01:22:26.650518 kubelet[3091]: I0428 01:22:26.648164 3091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:26.650518 kubelet[3091]: I0428 01:22:26.649624 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/280ac78f0c06d5a5825d6eaa7709189b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"280ac78f0c06d5a5825d6eaa7709189b\") " pod="kube-system/kube-apiserver-localhost"
Apr 28 01:22:26.651329 kubelet[3091]: I0428 01:22:26.650762 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:26.660749 kubelet[3091]: I0428 01:22:26.655835 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:26.676776 kubelet[3091]: I0428 01:22:26.669213 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:26.676776 kubelet[3091]: I0428 01:22:26.672929 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 28 01:22:26.676776 kubelet[3091]: I0428 01:22:26.673154 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:26.676776 kubelet[3091]: I0428 01:22:26.673252 3091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:26.777541 kubelet[3091]: I0428 01:22:26.772118 3091 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 28 01:22:27.088700 kubelet[3091]: E0428 01:22:27.024395 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:27.775596 kubelet[3091]: I0428 01:22:27.709097 3091 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 28 01:22:27.784613 kubelet[3091]: I0428 01:22:27.784118 3091 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 01:22:28.589603 kubelet[3091]: E0428 01:22:28.477148 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 28 01:22:28.783629 kubelet[3091]: E0428 01:22:28.573183 3091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 28 01:22:28.883310 kubelet[3091]: E0428 01:22:28.882833 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s"
Apr 28 01:22:28.922584 kubelet[3091]: E0428 01:22:28.916199 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:29.097767 kubelet[3091]: E0428 01:22:29.029749 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:29.123950 kubelet[3091]: E0428 01:22:29.121298 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:30.521728 kubelet[3091]: E0428 01:22:30.510218 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:30.547077 kubelet[3091]: E0428 01:22:30.546163 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:30.553207 kubelet[3091]: E0428 01:22:30.552801 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:31.899474 kubelet[3091]: E0428 01:22:31.899364 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:31.997537 kubelet[3091]: E0428 01:22:31.997156 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:36.916695 kubelet[3091]: E0428 01:22:36.907183 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.048s"
Apr 28 01:22:42.765263 kubelet[3091]: E0428 01:22:42.764561 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.943s"
Apr 28 01:22:43.462454 kubelet[3091]: E0428 01:22:43.461827 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:43.484927 kubelet[3091]: E0428 01:22:43.484819 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:43.880984 kubelet[3091]: E0428 01:22:43.879819 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:45.873095 kubelet[3091]: E0428 01:22:45.819663 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:22:47.079076 kubelet[3091]: E0428 01:22:47.065185 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.312s"
Apr 28 01:22:49.153791 kubelet[3091]: E0428 01:22:49.153587 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.377s"
Apr 28 01:22:57.974035 kubelet[3091]: E0428 01:22:57.973747 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.135s"
Apr 28 01:23:00.910376 kubelet[3091]: E0428 01:23:00.910098 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.154s"
Apr 28 01:23:29.167466 kubelet[3091]: E0428 01:23:29.162814 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.389s"
Apr 28 01:23:30.867149 kubelet[3091]: E0428 01:23:30.866681 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.122s"
Apr 28 01:23:36.862082 kubelet[3091]: I0428 01:23:36.861863 3091 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 28 01:23:37.029277 containerd[1599]: time="2026-04-28T01:23:37.029174853Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 28 01:23:37.469668 kubelet[3091]: I0428 01:23:37.389333 3091 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 28 01:23:40.113344 kubelet[3091]: E0428 01:23:40.076812 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.209s"
Apr 28 01:23:42.648531 kubelet[3091]: E0428 01:23:42.588366 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.314s"
Apr 28 01:23:44.527164 kubelet[3091]: E0428 01:23:44.491724 3091 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.836s"
Apr 28 01:23:45.623461 containerd[1599]: time="2026-04-28T01:23:45.621056759Z" level=info msg="shim disconnected" id=8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a namespace=k8s.io
Apr 28 01:23:45.623461 containerd[1599]: time="2026-04-28T01:23:45.622042502Z" level=warning msg="cleaning up after shim disconnected" id=8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a namespace=k8s.io
Apr 28 01:23:45.623461 containerd[1599]: time="2026-04-28T01:23:45.622136533Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:23:45.683358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a-rootfs.mount: Deactivated successfully.
Apr 28 01:23:47.412397 kubelet[3091]: I0428 01:23:47.406466 3091 scope.go:117] "RemoveContainer" containerID="b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec"
Apr 28 01:23:47.419011 kubelet[3091]: I0428 01:23:47.418827 3091 scope.go:117] "RemoveContainer" containerID="8b39cdb2721a8a356057f3337af11da49226e96dd4ec0122b0287c12c3d8169a"
Apr 28 01:23:47.468322 kubelet[3091]: E0428 01:23:47.422400 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:23:47.493100 containerd[1599]: time="2026-04-28T01:23:47.491351627Z" level=info msg="RemoveContainer for \"b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec\""
Apr 28 01:23:47.659736 containerd[1599]: time="2026-04-28T01:23:47.656059410Z" level=info msg="RemoveContainer for \"b8549d41269170e838fe6fdfa4e4167a908dc3aae2648b9e319370a7d7a543ec\" returns successfully"
Apr 28 01:23:47.912475 containerd[1599]: time="2026-04-28T01:23:47.910018951Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Apr 28 01:23:48.387653 containerd[1599]: time="2026-04-28T01:23:48.376393075Z" level=info msg="CreateContainer within sandbox \"fea967e5f4866d5680756f4a5088cf3ab4cb9b669bd6e397b78a0ade1f0c3b01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"6bd20b71b21ffee767b1716fe73866019576e75f1119a00f19e47041ea0b893b\""
Apr 28 01:23:48.641796 containerd[1599]: time="2026-04-28T01:23:48.640196229Z" level=info msg="StartContainer for \"6bd20b71b21ffee767b1716fe73866019576e75f1119a00f19e47041ea0b893b\""
Apr 28 01:23:48.931540 kubelet[3091]: E0428 01:23:48.930119 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:23:50.527249 containerd[1599]: time="2026-04-28T01:23:50.526295310Z" level=info msg="StartContainer for \"6bd20b71b21ffee767b1716fe73866019576e75f1119a00f19e47041ea0b893b\" returns successfully"
Apr 28 01:23:51.601542 kubelet[3091]: E0428 01:23:51.600260 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:23:52.909251 kubelet[3091]: E0428 01:23:52.902038 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:23:58.895651 kubelet[3091]: E0428 01:23:58.895470 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:24:00.825412 kubelet[3091]: E0428 01:24:00.823295 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:24:09.407231 kubelet[3091]: E0428 01:24:09.406972 3091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:24:18.846078 kubelet[3091]: E0428 01:24:18.845568 3091 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 28 01:24:20.844381 kubelet[3091]: E0428 01:24:20.842593 3091 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:24:21.305482 sudo[1792]: pam_unix(sudo:session): session closed for user root
Apr 28 01:24:21.341817 sshd[1785]: pam_unix(sshd:session): session closed for user core
Apr 28 01:24:21.358539 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:46072.service: Deactivated successfully.
Apr 28 01:24:21.371527 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit.
Apr 28 01:24:21.378165 systemd[1]: session-7.scope: Deactivated successfully.
Apr 28 01:24:21.384550 systemd-logind[1574]: Removed session 7.