Apr 28 00:18:49.970947 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:13:07 -00 2026
Apr 28 00:18:49.972329 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f23531cb6330205ea1df0485b9a03deeb8b8f7eb9c40767cd8b5a2bc5be69458
Apr 28 00:18:49.972448 kernel: BIOS-provided physical RAM map:
Apr 28 00:18:49.972456 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 28 00:18:49.972557 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 28 00:18:49.972566 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 28 00:18:49.974631 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 28 00:18:49.975005 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 28 00:18:49.975106 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 28 00:18:49.975291 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 28 00:18:49.975401 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 28 00:18:49.975598 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 28 00:18:49.975606 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 28 00:18:49.975711 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 28 00:18:49.976021 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 28 00:18:49.976129 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 28 00:18:49.976137 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 28 00:18:49.976144 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 28 00:18:49.976151 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 28 00:18:49.976431 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 28 00:18:49.976439 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 28 00:18:49.976548 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 28 00:18:49.976655 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 28 00:18:49.976665 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 00:18:49.976673 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 28 00:18:49.976943 kernel: NX (Execute Disable) protection: active
Apr 28 00:18:49.977050 kernel: APIC: Static calls initialized
Apr 28 00:18:49.977058 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 28 00:18:49.977067 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 28 00:18:49.977171 kernel: extended physical RAM map:
Apr 28 00:18:49.977279 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 28 00:18:49.977288 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 28 00:18:49.977302 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 28 00:18:49.977310 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 28 00:18:49.977318 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 28 00:18:49.977325 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 28 00:18:49.977427 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 28 00:18:49.977435 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 28 00:18:49.977442 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 28 00:18:49.977449 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 28 00:18:49.977660 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 28 00:18:49.977932 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 28 00:18:49.977944 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 28 00:18:49.978060 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 28 00:18:49.978288 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 28 00:18:49.978298 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 28 00:18:49.978307 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 28 00:18:49.978316 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 28 00:18:49.978420 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 28 00:18:49.978429 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 28 00:18:49.978437 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 28 00:18:49.978445 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 28 00:18:49.978551 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 28 00:18:49.978559 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 28 00:18:49.978676 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 00:18:50.015156 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 28 00:18:50.015281 kernel: efi: EFI v2.7 by EDK II
Apr 28 00:18:50.015402 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 28 00:18:50.015409 kernel: random: crng init done
Apr 28 00:18:50.015506 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 28 00:18:50.015512 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 28 00:18:50.015518 kernel: secureboot: Secure boot disabled
Apr 28 00:18:50.015524 kernel: SMBIOS 2.8 present.
Apr 28 00:18:50.015530 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 28 00:18:50.015535 kernel: DMI: Memory slots populated: 1/1
Apr 28 00:18:50.016546 kernel: Hypervisor detected: KVM
Apr 28 00:18:50.016634 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 28 00:18:50.016641 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 00:18:50.016648 kernel: kvm-clock: using sched offset of 16470444356 cycles
Apr 28 00:18:50.016719 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 00:18:50.016727 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 00:18:50.016735 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 00:18:50.016742 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 00:18:50.016957 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 28 00:18:50.016964 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 28 00:18:50.016970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 00:18:50.016976 kernel: Using GB pages for direct mapping
Apr 28 00:18:50.017045 kernel: ACPI: Early table checksum verification disabled
Apr 28 00:18:50.017051 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 28 00:18:50.017057 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 28 00:18:50.017063 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:18:50.017261 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:18:50.017270 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 28 00:18:50.017278 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:18:50.017288 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:18:50.017297 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:18:50.017305 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:18:50.017313 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 28 00:18:50.017394 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 28 00:18:50.017403 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 28 00:18:50.017411 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 28 00:18:50.017419 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 28 00:18:50.017428 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 28 00:18:50.017436 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 28 00:18:50.017445 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 28 00:18:50.017521 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 28 00:18:50.017530 kernel: No NUMA configuration found
Apr 28 00:18:50.017538 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 28 00:18:50.017547 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 28 00:18:50.017555 kernel: Zone ranges:
Apr 28 00:18:50.017626 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 00:18:50.017635 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 28 00:18:50.018455 kernel: Normal empty
Apr 28 00:18:50.018466 kernel: Device empty
Apr 28 00:18:50.018544 kernel: Movable zone start for each node
Apr 28 00:18:50.018555 kernel: Early memory node ranges
Apr 28 00:18:50.018565 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 28 00:18:50.018574 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 28 00:18:50.018582 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 28 00:18:50.019570 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 28 00:18:50.019580 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 28 00:18:50.019587 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 28 00:18:50.019595 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 28 00:18:50.019602 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 28 00:18:50.019610 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 28 00:18:50.019617 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 00:18:50.019703 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 28 00:18:50.021439 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 28 00:18:50.021725 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 00:18:50.021922 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 28 00:18:50.021932 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 28 00:18:50.021942 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 28 00:18:50.021951 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 28 00:18:50.021962 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 28 00:18:50.021971 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 00:18:50.022044 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 00:18:50.022111 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 00:18:50.022118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 00:18:50.022124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 00:18:50.022131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 00:18:50.022258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 00:18:50.022268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 00:18:50.022277 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 00:18:50.022286 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 00:18:50.022295 kernel: TSC deadline timer available
Apr 28 00:18:50.022303 kernel: CPU topo: Max. logical packages: 1
Apr 28 00:18:50.022312 kernel: CPU topo: Max. logical dies: 1
Apr 28 00:18:50.025469 kernel: CPU topo: Max. dies per package: 1
Apr 28 00:18:50.025485 kernel: CPU topo: Max. threads per core: 1
Apr 28 00:18:50.025496 kernel: CPU topo: Num. cores per package: 4
Apr 28 00:18:50.025506 kernel: CPU topo: Num. threads per package: 4
Apr 28 00:18:50.025517 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 28 00:18:50.025527 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 00:18:50.025538 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 00:18:50.025671 kernel: kvm-guest: setup PV sched yield
Apr 28 00:18:50.025680 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 28 00:18:50.025691 kernel: Booting paravirtualized kernel on KVM
Apr 28 00:18:50.025703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 00:18:50.025712 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 00:18:50.025721 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 28 00:18:50.025944 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 28 00:18:50.026029 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 00:18:50.026039 kernel: kvm-guest: PV spinlocks enabled
Apr 28 00:18:50.026048 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 00:18:50.026061 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f23531cb6330205ea1df0485b9a03deeb8b8f7eb9c40767cd8b5a2bc5be69458
Apr 28 00:18:50.026073 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 00:18:50.026082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 00:18:50.026091 kernel: Fallback order for Node 0: 0
Apr 28 00:18:50.026173 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 28 00:18:50.026255 kernel: Policy zone: DMA32
Apr 28 00:18:50.026266 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 00:18:50.026277 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 00:18:50.026286 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 28 00:18:50.026295 kernel: ftrace: allocated 158 pages with 5 groups
Apr 28 00:18:50.026304 kernel: Dynamic Preempt: voluntary
Apr 28 00:18:50.026386 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 00:18:50.026397 kernel: rcu: RCU event tracing is enabled.
Apr 28 00:18:50.026407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 00:18:50.026418 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 00:18:50.026427 kernel: Rude variant of Tasks RCU enabled.
Apr 28 00:18:50.026435 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 00:18:50.026444 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 00:18:50.026525 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 00:18:50.026535 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:18:50.026612 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:18:50.026624 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:18:50.026633 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 00:18:50.026642 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 00:18:50.026650 kernel: Console: colour dummy device 80x25
Apr 28 00:18:50.026730 kernel: printk: legacy console [ttyS0] enabled
Apr 28 00:18:50.026740 kernel: ACPI: Core revision 20240827
Apr 28 00:18:50.026884 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 00:18:50.026895 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 00:18:50.026904 kernel: x2apic enabled
Apr 28 00:18:50.026912 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 00:18:50.026921 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 00:18:50.027003 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 00:18:50.027014 kernel: kvm-guest: setup PV IPIs
Apr 28 00:18:50.027023 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 00:18:50.027033 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 00:18:50.027044 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 00:18:50.027054 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 00:18:50.027063 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 00:18:50.027143 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 00:18:50.027153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 00:18:50.027162 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 00:18:50.027172 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 00:18:50.028080 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 00:18:50.028095 kernel: RETBleed: Vulnerable
Apr 28 00:18:50.028105 kernel: Speculative Store Bypass: Vulnerable
Apr 28 00:18:50.028294 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 00:18:50.028303 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 00:18:50.028313 kernel: active return thunk: its_return_thunk
Apr 28 00:18:50.028323 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 00:18:50.028333 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 00:18:50.028344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 00:18:50.028353 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 00:18:50.028435 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 00:18:50.028445 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 00:18:50.028454 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 00:18:50.028464 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 00:18:50.028473 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 00:18:50.028482 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 00:18:50.028491 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 00:18:50.028573 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 00:18:50.028585 kernel: Freeing SMP alternatives memory: 32K
Apr 28 00:18:50.028595 kernel: pid_max: default: 32768 minimum: 301
Apr 28 00:18:50.028605 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 28 00:18:50.028614 kernel: landlock: Up and running.
Apr 28 00:18:50.028624 kernel: SELinux: Initializing.
Apr 28 00:18:50.028634 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:18:50.028643 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:18:50.029593 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 00:18:50.029604 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 00:18:50.029613 kernel: signal: max sigframe size: 3632
Apr 28 00:18:50.029623 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 00:18:50.029632 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 00:18:50.029641 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 28 00:18:50.029650 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 00:18:50.029737 kernel: smp: Bringing up secondary CPUs ...
Apr 28 00:18:50.029900 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 00:18:50.029911 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 00:18:50.029922 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 00:18:50.029997 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 00:18:50.030010 kernel: Memory: 2399268K/2565800K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 160636K reserved, 0K cma-reserved)
Apr 28 00:18:50.030020 kernel: devtmpfs: initialized
Apr 28 00:18:50.030107 kernel: x86/mm: Memory block size: 128MB
Apr 28 00:18:50.030119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 28 00:18:50.030129 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 28 00:18:50.030138 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 28 00:18:50.030147 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 28 00:18:50.030156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 28 00:18:50.030165 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 28 00:18:50.030258 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 00:18:50.030270 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 00:18:50.030282 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 00:18:50.030294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 00:18:50.030305 kernel: audit: initializing netlink subsys (disabled)
Apr 28 00:18:50.030314 kernel: audit: type=2000 audit(1777335507.928:1): state=initialized audit_enabled=0 res=1
Apr 28 00:18:50.030324 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 00:18:50.030408 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 00:18:50.030417 kernel: cpuidle: using governor menu
Apr 28 00:18:50.030495 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 28 00:18:50.030505 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 00:18:50.030514 kernel: dca service started, version 1.12.1
Apr 28 00:18:50.030585 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 28 00:18:50.030597 kernel: PCI: Using configuration type 1 for base access
Apr 28 00:18:50.030739 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 00:18:50.031172 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 00:18:50.031254 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 00:18:50.031263 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 00:18:50.031272 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 00:18:50.031281 kernel: ACPI: Added _OSI(Module Device)
Apr 28 00:18:50.031291 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 00:18:50.031458 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 00:18:50.031534 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 00:18:50.031610 kernel: ACPI: Interpreter enabled
Apr 28 00:18:50.031619 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 00:18:50.031628 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 00:18:50.031637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 00:18:50.031646 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 00:18:50.032041 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 00:18:50.032051 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 00:18:50.032500 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 00:18:50.032746 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 00:18:50.033046 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 00:18:50.033061 kernel: PCI host bridge to bus 0000:00
Apr 28 00:18:50.050708 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 00:18:50.054061 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 00:18:50.054275 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 00:18:50.054492 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 28 00:18:50.054991 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 28 00:18:50.055284 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 28 00:18:50.058554 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 00:18:50.059669 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 28 00:18:50.064363 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 28 00:18:50.065005 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 28 00:18:50.065369 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 28 00:18:50.065698 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 28 00:18:50.065919 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 00:18:50.066022 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0xe0 took 10742 usecs
Apr 28 00:18:50.066130 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 28 00:18:50.066301 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 28 00:18:50.066474 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 28 00:18:50.066572 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 28 00:18:50.066679 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 28 00:18:50.067169 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 28 00:18:50.067939 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 28 00:18:50.068047 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 28 00:18:50.068299 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 28 00:18:50.068400 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 28 00:18:50.068495 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 28 00:18:50.068591 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 28 00:18:50.068687 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 28 00:18:50.068911 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 28 00:18:50.069172 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 00:18:50.069339 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Apr 28 00:18:50.069444 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 28 00:18:50.069540 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 28 00:18:50.069636 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 28 00:18:50.070559 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 28 00:18:50.070660 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 28 00:18:50.070669 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 00:18:50.070676 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 00:18:50.070683 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 00:18:50.070689 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 00:18:50.070695 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 00:18:50.070894 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 00:18:50.070901 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 00:18:50.070908 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 00:18:50.070914 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 00:18:50.070920 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 00:18:50.070927 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 00:18:50.070933 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 00:18:50.070998 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 00:18:50.071005 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 00:18:50.071011 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 00:18:50.071018 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 00:18:50.071024 kernel: iommu: Default domain type: Translated
Apr 28 00:18:50.071031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 00:18:50.071037 kernel: efivars: Registered efivars operations
Apr 28 00:18:50.071100 kernel: PCI: Using ACPI for IRQ routing
Apr 28 00:18:50.071107 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 00:18:50.071114 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 28 00:18:50.071121 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 28 00:18:50.071127 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 28 00:18:50.071133 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 28 00:18:50.071140 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 28 00:18:50.071256 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 28 00:18:50.071264 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 28 00:18:50.071270 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 28 00:18:50.071380 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 00:18:50.071475 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 00:18:50.071571 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 00:18:50.071580 kernel: vgaarb: loaded
Apr 28 00:18:50.071651 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 00:18:50.071659 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 00:18:50.071665 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 00:18:50.071672 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 00:18:50.071678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 00:18:50.071685 kernel: pnp: PnP ACPI init
Apr 28 00:18:50.071924 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 28 00:18:50.073055 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 00:18:50.073125 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 00:18:50.073132 kernel: NET: Registered PF_INET protocol family
Apr 28 00:18:50.073138 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 00:18:50.073145 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 00:18:50.073151 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 00:18:50.073158 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 00:18:50.073274 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 00:18:50.073282 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 00:18:50.073289 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 00:18:50.073295 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 00:18:50.073301 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 00:18:50.073309 kernel: NET: Registered PF_XDP protocol family
Apr 28 00:18:50.073474 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 28 00:18:50.073702 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 28 00:18:50.073950 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 00:18:50.074045 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 00:18:50.074133 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 00:18:50.074292 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 28 00:18:50.074383 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 28 00:18:50.074543 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 28 00:18:50.074551 kernel: PCI: CLS 0 bytes, default 64
Apr 28 00:18:50.074558 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 00:18:50.074628 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 00:18:50.074690 kernel: Initialise system trusted keyrings
Apr 28 00:18:50.074698 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 00:18:50.074704 kernel: Key type asymmetric registered
Apr 28 00:18:50.074711 kernel: Asymmetric key parser 'x509' registered
Apr 28 00:18:50.074717 kernel: hrtimer: interrupt took 6780627 ns
Apr 28 00:18:50.074725 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 28 00:18:50.074731 kernel: io scheduler mq-deadline registered
Apr 28 00:18:50.074897 kernel: io scheduler kyber registered
Apr 28 00:18:50.074905 kernel: io scheduler bfq registered
Apr 28 00:18:50.074911 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 00:18:50.074919 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 00:18:50.074925 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 00:18:50.074932 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 00:18:50.074939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 00:18:50.075002 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 00:18:50.075009 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 00:18:50.075016 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 00:18:50.075022 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 00:18:50.075135 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 00:18:50.075144 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 28 00:18:50.075304 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 00:18:50.075464 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T00:18:39 UTC (1777335519)
Apr 28 00:18:50.075554 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 28 00:18:50.075562 kernel: intel_pstate: CPU model not supported
Apr 28 00:18:50.075568 kernel: efifb: probing for efifb
Apr 28 00:18:50.075575 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 28 00:18:50.075582 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 28 00:18:50.075588 kernel: efifb: scrolling: redraw
Apr 28 00:18:50.075658 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 28 00:18:50.075665 kernel: Console: switching to colour frame buffer device 160x50
Apr 28 00:18:50.075671 kernel: fb0: EFI VGA frame buffer device
Apr 28 00:18:50.075678 kernel: pstore: Using crash dump compression: deflate
Apr 28 00:18:50.075685 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 28 00:18:50.075692 kernel: NET: Registered PF_INET6 protocol family
Apr 28 00:18:50.075698 kernel: Segment Routing with IPv6
Apr 28 00:18:50.075865 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 00:18:50.075872 kernel: NET: Registered PF_PACKET protocol family
Apr 28 00:18:50.075879 kernel: Key type dns_resolver registered
Apr 28 00:18:50.075885 kernel: IPI shorthand broadcast: enabled
Apr 28 00:18:50.075892 kernel: sched_clock: Marking stable (12399125600, 1940119605)->(15671284017, -1332038812)
Apr 28
00:18:50.075898 kernel: registered taskstats version 1 Apr 28 00:18:50.075905 kernel: Loading compiled-in X.509 certificates Apr 28 00:18:50.075968 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: d347ed0a99522a2efcf66a259b61bb14bbbefd0c' Apr 28 00:18:50.075975 kernel: Demotion targets for Node 0: null Apr 28 00:18:50.076037 kernel: Key type .fscrypt registered Apr 28 00:18:50.076043 kernel: Key type fscrypt-provisioning registered Apr 28 00:18:50.076050 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 00:18:50.076057 kernel: ima: Allocated hash algorithm: sha1 Apr 28 00:18:50.076064 kernel: ima: No architecture policies found Apr 28 00:18:50.076125 kernel: clk: Disabling unused clocks Apr 28 00:18:50.076132 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 28 00:18:50.076139 kernel: Write protecting the kernel read-only data: 47104k Apr 28 00:18:50.076146 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 28 00:18:50.076153 kernel: Run /init as init process Apr 28 00:18:50.076160 kernel: with arguments: Apr 28 00:18:50.076167 kernel: /init Apr 28 00:18:50.076173 kernel: with environment: Apr 28 00:18:50.076296 kernel: HOME=/ Apr 28 00:18:50.076303 kernel: TERM=linux Apr 28 00:18:50.076309 kernel: SCSI subsystem initialized Apr 28 00:18:50.076316 kernel: libata version 3.00 loaded. 
Apr 28 00:18:50.076437 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 00:18:50.076447 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 00:18:50.076541 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 28 00:18:50.076710 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 28 00:18:50.076927 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 00:18:50.077271 kernel: scsi host0: ahci Apr 28 00:18:50.077388 kernel: scsi host1: ahci Apr 28 00:18:50.077493 kernel: scsi host2: ahci Apr 28 00:18:50.078733 kernel: scsi host3: ahci Apr 28 00:18:50.079344 kernel: scsi host4: ahci Apr 28 00:18:50.079523 kernel: scsi host5: ahci Apr 28 00:18:50.079534 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 28 00:18:50.079542 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 28 00:18:50.079548 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 28 00:18:50.079622 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 28 00:18:50.079629 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 28 00:18:50.079636 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 28 00:18:50.079642 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 28 00:18:50.079649 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:50.079656 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:50.079663 kernel: ata3.00: LPM support broken, forcing max_power Apr 28 00:18:50.079726 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 28 00:18:50.079732 kernel: ata3.00: applying bridge limits Apr 28 00:18:50.079739 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:50.079746 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 00:18:50.079862 kernel: ata6: 
SATA link down (SStatus 0 SControl 300) Apr 28 00:18:50.079869 kernel: ata3.00: LPM support broken, forcing max_power Apr 28 00:18:50.079875 kernel: ata3.00: configured for UDMA/100 Apr 28 00:18:50.080457 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 00:18:50.080576 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 28 00:18:50.080673 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 28 00:18:50.080912 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 28 00:18:50.080923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 00:18:50.080930 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 00:18:50.081004 kernel: GPT:16515071 != 27000831 Apr 28 00:18:50.081011 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 00:18:50.081017 kernel: GPT:16515071 != 27000831 Apr 28 00:18:50.081024 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 00:18:50.081030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:18:50.081147 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 28 00:18:50.081156 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 28 00:18:50.081282 kernel: device-mapper: uevent: version 1.0.3 Apr 28 00:18:50.081289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 28 00:18:50.081296 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 28 00:18:50.081303 kernel: raid6: avx512x4 gen() 12480 MB/s Apr 28 00:18:50.081309 kernel: raid6: avx512x2 gen() 22504 MB/s Apr 28 00:18:50.081316 kernel: raid6: avx512x1 gen() 26494 MB/s Apr 28 00:18:50.081322 kernel: raid6: avx2x4 gen() 13258 MB/s Apr 28 00:18:50.081387 kernel: raid6: avx2x2 gen() 26668 MB/s Apr 28 00:18:50.081393 kernel: raid6: avx2x1 gen() 25075 MB/s Apr 28 00:18:50.081400 kernel: raid6: using algorithm avx2x2 gen() 26668 MB/s Apr 28 00:18:50.081407 kernel: raid6: .... xor() 14143 MB/s, rmw enabled Apr 28 00:18:50.081413 kernel: raid6: using avx512x2 recovery algorithm Apr 28 00:18:50.081420 kernel: xor: automatically using best checksumming function avx Apr 28 00:18:50.081427 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 00:18:50.081541 kernel: BTRFS: device fsid ceb5d4c4-0ad9-4dbe-97f4-74392863c761 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (182) Apr 28 00:18:50.081549 kernel: BTRFS info (device dm-0): first mount of filesystem ceb5d4c4-0ad9-4dbe-97f4-74392863c761 Apr 28 00:18:50.081555 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:50.081562 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 28 00:18:50.081569 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 28 00:18:50.081576 kernel: loop: module loaded Apr 28 00:18:50.081582 kernel: loop0: detected capacity change from 0 to 106960 Apr 28 00:18:50.081646 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 28 00:18:50.081654 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Apr 28 00:18:50.081664 systemd[1]: 
/etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 28 00:18:50.081671 systemd[1]: Successfully made /usr/ read-only. Apr 28 00:18:50.081680 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 28 00:18:50.081746 systemd[1]: Detected virtualization kvm. Apr 28 00:18:50.081861 systemd[1]: Detected architecture x86-64. Apr 28 00:18:50.081868 systemd[1]: Running in initrd. Apr 28 00:18:50.081875 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 28 00:18:50.081882 systemd[1]: No hostname configured, using default hostname. Apr 28 00:18:50.081889 systemd[1]: Hostname set to . Apr 28 00:18:50.081896 systemd[1]: Queued start job for default target initrd.target. Apr 28 00:18:50.081960 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 28 00:18:50.081968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:18:50.081975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:18:50.081983 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 28 00:18:50.081991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:18:50.081998 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 00:18:50.082061 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Apr 28 00:18:50.082068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:18:50.082076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:18:50.082083 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 28 00:18:50.082090 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:18:50.082097 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:18:50.082159 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:18:50.082167 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:18:50.082174 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:18:50.082238 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:18:50.082246 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 28 00:18:50.082253 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 00:18:50.082260 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 28 00:18:50.082323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:18:50.082330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:18:50.082337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:18:50.082344 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:18:50.082352 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 00:18:50.082359 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 00:18:50.082366 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:18:50.082427 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 28 00:18:50.082435 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 28 00:18:50.082442 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 00:18:50.082450 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:18:50.082512 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:18:50.082520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:50.082527 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 00:18:50.082534 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:18:50.082541 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 00:18:50.082635 systemd-journald[318]: Collecting audit messages is enabled. Apr 28 00:18:50.082717 kernel: audit: type=1130 audit(1777335529.932:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.082729 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:18:50.082739 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 28 00:18:50.083690 systemd-journald[318]: Journal started Apr 28 00:18:50.083711 systemd-journald[318]: Runtime Journal (/run/log/journal/9afbd9333af146a5b0b285af80f77485) is 6M, max 48M, 42M free. Apr 28 00:18:49.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.092295 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 28 00:18:50.092460 kernel: Bridge firewalling registered Apr 28 00:18:50.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.110159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:18:50.123371 kernel: audit: type=1130 audit(1777335530.099:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.143098 systemd-modules-load[322]: Inserted module 'br_netfilter' Apr 28 00:18:50.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.186046 kernel: audit: type=1130 audit(1777335530.160:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.161014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:18:50.193022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:18:50.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.229245 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:18:50.263516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 28 00:18:50.290107 kernel: audit: type=1130 audit(1777335530.228:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.280332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:50.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.335019 kernel: audit: type=1130 audit(1777335530.301:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.345079 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:18:50.369490 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 28 00:18:50.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.381941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:18:50.414715 kernel: audit: type=1130 audit(1777335530.381:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.433589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:18:50.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:18:50.492114 kernel: audit: type=1130 audit(1777335530.470:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.492309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:18:50.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.511538 kernel: audit: type=1130 audit(1777335530.492:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.518000 audit: BPF prog-id=5 op=LOAD Apr 28 00:18:50.525981 kernel: audit: type=1334 audit(1777335530.518:10): prog-id=5 op=LOAD Apr 28 00:18:50.526722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:18:50.552293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:18:50.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.592357 kernel: audit: type=1130 audit(1777335530.562:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.603137 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 28 00:18:50.665270 dracut-cmdline[361]: dracut-109 Apr 28 00:18:50.674712 dracut-cmdline[361]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f23531cb6330205ea1df0485b9a03deeb8b8f7eb9c40767cd8b5a2bc5be69458 Apr 28 00:18:50.695967 systemd-resolved[358]: Positive Trust Anchors: Apr 28 00:18:50.695976 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:18:50.695980 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 28 00:18:50.696010 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:18:50.844065 systemd-resolved[358]: Defaulting to hostname 'linux'. Apr 28 00:18:50.872382 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:18:50.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:50.888064 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:18:51.625877 kernel: Loading iSCSI transport class v2.0-870. 
Apr 28 00:18:51.664456 kernel: iscsi: registered transport (tcp) Apr 28 00:18:51.754043 kernel: iscsi: registered transport (qla4xxx) Apr 28 00:18:51.754600 kernel: QLogic iSCSI HBA Driver Apr 28 00:18:51.875179 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 28 00:18:51.993149 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 28 00:18:52.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.008921 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 00:18:52.185338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 00:18:52.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.203317 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 00:18:52.215961 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 00:18:52.349617 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:18:52.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.363000 audit: BPF prog-id=6 op=LOAD Apr 28 00:18:52.363000 audit: BPF prog-id=7 op=LOAD Apr 28 00:18:52.368125 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:18:52.456401 systemd-udevd[584]: Using default interface naming scheme 'v258'. 
Apr 28 00:18:52.542051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:18:52.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.566543 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 00:18:52.653636 dracut-pre-trigger[638]: rd.md=0: removing MD RAID activation Apr 28 00:18:52.778641 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:18:52.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.805036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:18:52.840183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:18:52.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.861000 audit: BPF prog-id=8 op=LOAD Apr 28 00:18:52.863995 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:18:52.973293 systemd-networkd[739]: lo: Link UP Apr 28 00:18:52.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:52.973372 systemd-networkd[739]: lo: Gained carrier Apr 28 00:18:53.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:18:52.974720 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:18:52.990099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:18:53.005132 systemd[1]: Reached target network.target - Network. Apr 28 00:18:53.016045 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 00:18:53.182299 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 00:18:53.215986 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 28 00:18:53.245725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 00:18:53.283653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:18:53.304125 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 00:18:53.410990 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 28 00:18:53.423879 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 00:18:53.428413 disk-uuid[777]: Primary Header is updated. Apr 28 00:18:53.428413 disk-uuid[777]: Secondary Entries is updated. Apr 28 00:18:53.428413 disk-uuid[777]: Secondary Header is updated. Apr 28 00:18:53.474993 kernel: AES CTR mode by8 optimization enabled Apr 28 00:18:53.478544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:18:53.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:53.479130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:53.498307 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 28 00:18:53.510665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:53.566118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:18:53.566417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:53.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:53.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:53.609149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:18:53.628606 systemd-networkd[739]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 28 00:18:53.628616 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:18:53.636907 systemd-networkd[739]: eth0: Link UP Apr 28 00:18:53.638304 systemd-networkd[739]: eth0: Gained carrier Apr 28 00:18:53.638321 systemd-networkd[739]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 28 00:18:53.717889 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:18:53.742670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:18:53.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:53.820719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Apr 28 00:18:53.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:53.829176 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:18:53.838024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:18:53.838276 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:18:53.848171 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:18:53.904309 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:18:53.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:54.525958 disk-uuid[780]: Warning: The kernel is still using the old partition table. Apr 28 00:18:54.525958 disk-uuid[780]: The new table will be used at the next reboot or after you Apr 28 00:18:54.525958 disk-uuid[780]: run partprobe(8) or kpartx(8) Apr 28 00:18:54.525958 disk-uuid[780]: The operation has completed successfully. Apr 28 00:18:54.553694 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 00:18:54.554076 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 00:18:54.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:54.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:54.575413 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 28 00:18:54.667180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (899) Apr 28 00:18:54.677339 kernel: BTRFS info (device vda6): first mount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:18:54.677880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:54.701944 kernel: BTRFS info (device vda6): turning on async discard Apr 28 00:18:54.702401 kernel: BTRFS info (device vda6): enabling free space tree Apr 28 00:18:54.725418 kernel: BTRFS info (device vda6): last unmount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:18:54.731043 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:18:54.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:54.742173 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 28 00:18:54.988476 ignition[918]: Ignition 2.24.0 Apr 28 00:18:54.988530 ignition[918]: Stage: fetch-offline Apr 28 00:18:54.988576 ignition[918]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:54.988608 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:54.988702 ignition[918]: parsed url from cmdline: "" Apr 28 00:18:54.988705 ignition[918]: no config URL provided Apr 28 00:18:54.988709 ignition[918]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:18:54.988716 ignition[918]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:18:54.988837 ignition[918]: op(1): [started] loading QEMU firmware config module Apr 28 00:18:54.988840 ignition[918]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 00:18:55.024454 ignition[918]: op(1): [finished] loading QEMU firmware config module Apr 28 00:18:55.357425 systemd-networkd[739]: eth0: Gained IPv6LL Apr 28 00:18:55.684691 ignition[918]: parsing config with SHA512: 603ad571ca1254f1e947f3592ca7eff47d2a93420a3ee4cbec0ebde9133347908fe190c1478e8dd164475c915a8aba92d192111cb189db0203f67608ca466308 Apr 28 00:18:55.710266 unknown[918]: fetched base config from "system" Apr 28 00:18:55.710315 unknown[918]: fetched user config from "qemu" Apr 28 00:18:55.710968 ignition[918]: fetch-offline: fetch-offline passed Apr 28 00:18:55.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:55.719511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:18:55.811328 kernel: kauditd_printk_skb: 21 callbacks suppressed Apr 28 00:18:55.711042 ignition[918]: Ignition finished successfully Apr 28 00:18:55.729005 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 28 00:18:55.828027 kernel: audit: type=1130 audit(1777335535.727:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:55.740536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 00:18:55.894550 ignition[927]: Ignition 2.24.0 Apr 28 00:18:55.902405 ignition[927]: Stage: kargs Apr 28 00:18:55.907443 ignition[927]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:55.907500 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:55.912190 ignition[927]: kargs: kargs passed Apr 28 00:18:55.912447 ignition[927]: Ignition finished successfully Apr 28 00:18:55.930186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 00:18:55.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:55.947525 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 28 00:18:55.956187 kernel: audit: type=1130 audit(1777335535.940:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:56.015994 ignition[935]: Ignition 2.24.0 Apr 28 00:18:56.016047 ignition[935]: Stage: disks Apr 28 00:18:56.017882 ignition[935]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:56.017894 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:56.020058 ignition[935]: disks: disks passed Apr 28 00:18:56.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:18:56.032523 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 00:18:56.068520 kernel: audit: type=1130 audit(1777335536.040:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:56.020118 ignition[935]: Ignition finished successfully Apr 28 00:18:56.043371 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 00:18:56.062102 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:18:56.072689 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:18:56.083108 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:18:56.097329 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:18:56.112060 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 00:18:56.202873 systemd-fsck[945]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 28 00:18:56.214982 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 00:18:56.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:56.243661 kernel: audit: type=1130 audit(1777335536.218:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:56.225976 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 00:18:56.564400 kernel: EXT4-fs (vda9): mounted filesystem f2ab3bab-5f4f-4f13-9e1d-ae27d704ff83 r/w with ordered data mode. Quota mode: none. Apr 28 00:18:56.568692 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 28 00:18:56.574107 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 00:18:56.587353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:18:56.606920 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 00:18:56.611942 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 28 00:18:56.642426 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (953) Apr 28 00:18:56.611985 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 00:18:56.612017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:18:56.622087 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 00:18:56.639855 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 00:18:56.693851 kernel: BTRFS info (device vda6): first mount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:18:56.694095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:56.709263 kernel: BTRFS info (device vda6): turning on async discard Apr 28 00:18:56.709539 kernel: BTRFS info (device vda6): enabling free space tree Apr 28 00:18:56.711938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 00:18:57.367908 kernel: loop1: detected capacity change from 0 to 43472 Apr 28 00:18:57.372974 kernel: loop1: p1 p2 p3 Apr 28 00:18:57.419503 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:18:57.420119 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:18:57.428103 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:18:57.428174 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:18:57.434479 systemd-confext[1043]: device-mapper: reload ioctl on bd01924efa64fd6fbc49c41573ab9db4b6e97144b422d98aceb773101478822c-verity (253:1) failed: Invalid argument Apr 28 00:18:57.498341 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:18:57.759397 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 28 00:18:57.825940 kernel: loop2: detected capacity change from 0 to 43472 Apr 28 00:18:57.831923 kernel: loop2: p1 p2 p3 Apr 28 00:18:57.900130 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:18:57.900522 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:18:57.900536 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:18:57.912160 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:18:57.912534 (sd-merge)[1053]: device-mapper: reload ioctl on bd01924efa64fd6fbc49c41573ab9db4b6e97144b422d98aceb773101478822c-verity (253:1) failed: Invalid argument Apr 28 00:18:57.936867 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:18:58.329958 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 28 00:18:58.331339 (sd-merge)[1053]: Using extensions '00-flatcar-default.raw'. Apr 28 00:18:58.396296 (sd-merge)[1053]: Merged extensions into '/sysroot/etc'. 
Apr 28 00:18:58.419071 initrd-setup-root[1060]: /etc 00-flatcar-default Tue 2026-04-28 00:18:50 UTC Apr 28 00:18:58.422024 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 00:18:58.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:58.454913 kernel: audit: type=1130 audit(1777335538.427:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:58.434561 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 00:18:58.482696 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 28 00:18:58.502549 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 28 00:18:58.516131 kernel: BTRFS info (device vda6): last unmount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:18:58.584135 ignition[1070]: INFO : Ignition 2.24.0 Apr 28 00:18:58.584135 ignition[1070]: INFO : Stage: mount Apr 28 00:18:58.592090 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 00:18:58.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:58.618685 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:58.618685 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:58.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:18:58.656020 kernel: audit: type=1130 audit(1777335538.596:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:58.627628 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 00:18:58.666208 ignition[1070]: INFO : mount: mount passed Apr 28 00:18:58.666208 ignition[1070]: INFO : Ignition finished successfully Apr 28 00:18:58.678295 kernel: audit: type=1130 audit(1777335538.641:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:18:58.646678 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 00:18:58.705340 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:18:58.769171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1082) Apr 28 00:18:58.780381 kernel: BTRFS info (device vda6): first mount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:18:58.781168 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:18:58.795314 kernel: BTRFS info (device vda6): turning on async discard Apr 28 00:18:58.797422 kernel: BTRFS info (device vda6): enabling free space tree Apr 28 00:18:58.802512 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 00:18:58.935601 ignition[1099]: INFO : Ignition 2.24.0 Apr 28 00:18:58.935601 ignition[1099]: INFO : Stage: files Apr 28 00:18:58.943515 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:18:58.943515 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:18:58.943515 ignition[1099]: DEBUG : files: compiled without relabeling support, skipping Apr 28 00:18:58.972167 ignition[1099]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 00:18:58.972167 ignition[1099]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 00:18:58.989064 ignition[1099]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 00:18:58.989064 ignition[1099]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 00:18:58.989064 ignition[1099]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 00:18:58.989026 unknown[1099]: wrote ssh authorized keys file for user: core Apr 28 00:18:59.053493 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:18:59.053493 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 00:18:59.422491 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 28 00:18:59.851134 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:18:59.851134 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 28 00:18:59.880984 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 28 00:18:59.880984 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:18:59.903360 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:18:59.912716 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:18:59.923528 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:18:59.923528 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:18:59.946280 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 28 00:19:00.132067 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 28 00:19:01.491431 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:19:01.491431 ignition[1099]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 28 00:19:01.514009 ignition[1099]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:19:01.514009 ignition[1099]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:19:01.514009 ignition[1099]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 28 00:19:01.514009 ignition[1099]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 28 00:19:01.547425 ignition[1099]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:19:01.560720 ignition[1099]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:19:01.560720 ignition[1099]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 28 00:19:01.579433 ignition[1099]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 00:19:01.778095 ignition[1099]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:19:01.788720 ignition[1099]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:19:01.788720 ignition[1099]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 28 00:19:01.788720 ignition[1099]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 28 00:19:01.788720 ignition[1099]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 00:19:01.788720 ignition[1099]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:19:01.834428 ignition[1099]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:19:01.834428 ignition[1099]: INFO : files: files passed Apr 28 00:19:01.834428 ignition[1099]: INFO : Ignition finished successfully Apr 28 00:19:01.857927 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 28 00:19:01.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:01.882128 kernel: audit: type=1130 audit(1777335541.860:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:01.872920 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 28 00:19:01.898457 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 28 00:19:01.937132 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 28 00:19:01.948706 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 28 00:19:01.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:19:01.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:01.985188 kernel: audit: type=1130 audit(1777335541.955:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:01.985311 initrd-setup-root-after-ignition[1131]: grep: /sysroot/oem/oem-release: No such file or directory Apr 28 00:19:01.992926 kernel: audit: type=1131 audit(1777335541.955:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:01.993000 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:19:01.993000 initrd-setup-root-after-ignition[1133]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:19:02.014108 initrd-setup-root-after-ignition[1137]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:19:02.068054 kernel: loop3: detected capacity change from 0 to 43472 Apr 28 00:19:02.082455 kernel: loop3: p1 p2 p3 Apr 28 00:19:02.234960 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:02.235410 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:19:02.235544 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:19:02.245300 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:19:02.246460 systemd-confext[1139]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument Apr 28 00:19:02.260026 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 
00:19:02.358340 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 28 00:19:02.384943 kernel: loop4: detected capacity change from 0 to 43472 Apr 28 00:19:02.391138 kernel: loop4: p1 p2 p3 Apr 28 00:19:02.437343 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:02.437871 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:19:02.437944 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:19:02.444363 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:19:02.450553 (sd-merge)[1150]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Apr 28 00:19:02.468145 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:02.583992 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 28 00:19:02.584604 (sd-merge)[1150]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 28 00:19:02.603908 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Apr 28 00:19:02.617035 kernel: loop4: detected capacity change from 0 to 378016 Apr 28 00:19:02.624906 kernel: loop4: p1 p2 p3 Apr 28 00:19:02.683480 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:02.683596 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:19:02.683608 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:19:02.692138 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:19:02.692548 systemd-sysext[1158]: device-mapper: reload ioctl on 7872a58ca41eede16f5f9c4d58208200d7d53a6d6326a9fbd8291496d1250167-verity (253:2) failed: Invalid argument Apr 28 00:19:02.708960 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:02.806942 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 28 00:19:02.849884 kernel: loop5: detected capacity change from 0 to 219192 Apr 28 00:19:02.968214 kernel: loop6: detected capacity change from 0 to 178200 Apr 28 00:19:02.974985 kernel: loop6: p1 p2 p3 Apr 28 00:19:03.032541 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:03.034959 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:19:03.035128 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:19:03.037930 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:19:03.041583 systemd-sysext[1158]: device-mapper: reload ioctl on b14ca717c93af6dcf45970900eba2c84b1df1635b4cfb0353a4efa1194de37b1-verity (253:2) failed: Invalid argument Apr 28 00:19:03.061154 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:03.231543 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 28 00:19:03.283969 kernel: loop7: detected capacity change from 0 to 378016 Apr 28 00:19:03.288991 kernel: loop7: p1 p2 p3 Apr 28 00:19:03.322226 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:03.328326 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:19:03.328369 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:19:03.336474 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:19:03.336946 (sd-merge)[1176]: device-mapper: reload ioctl on 7872a58ca41eede16f5f9c4d58208200d7d53a6d6326a9fbd8291496d1250167-verity (253:2) failed: Invalid argument Apr 28 00:19:03.356584 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:03.538958 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 28 00:19:03.546926 kernel: loop1: detected capacity change from 0 to 219192 Apr 28 00:19:03.568903 kernel: loop3: detected capacity change from 0 to 178200 Apr 28 00:19:03.572912 kernel: loop3: p1 p2 p3 Apr 28 00:19:03.579931 kernel: loop3: p1 p2 p3 Apr 28 00:19:03.613717 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:03.613939 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:19:03.613958 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:19:03.623901 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:19:03.623944 (sd-merge)[1176]: device-mapper: reload ioctl on b14ca717c93af6dcf45970900eba2c84b1df1635b4cfb0353a4efa1194de37b1-verity (253:3) failed: Invalid argument Apr 28 00:19:03.641236 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:19:03.705590 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Apr 28 00:19:03.707672 (sd-merge)[1176]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.34.4-x86-64.raw'. Apr 28 00:19:03.712133 (sd-merge)[1176]: Merged extensions into '/sysroot/usr'. Apr 28 00:19:03.724396 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:19:03.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:03.750326 kernel: audit: type=1130 audit(1777335543.727:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:03.730451 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Apr 28 00:19:03.761191 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 28 00:19:03.982896 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 28 00:19:03.983167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 28 00:19:04.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.005230 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies. Apr 28 00:19:04.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.032240 kernel: audit: type=1130 audit(1777335544.004:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.006679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 00:19:04.048560 kernel: audit: type=1131 audit(1777335544.004:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.045989 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 00:19:04.058528 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 00:19:04.066212 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 00:19:04.154021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Apr 28 00:19:04.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.173600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 00:19:04.214240 kernel: audit: type=1130 audit(1777335544.167:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.268987 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:19:04.289163 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:19:04.299844 systemd[1]: Stopped target timers.target - Timer Units. Apr 28 00:19:04.305884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 28 00:19:04.306128 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:19:04.320686 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 00:19:04.331881 systemd[1]: Stopped target basic.target - Basic System. Apr 28 00:19:04.344572 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 00:19:04.352719 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:19:04.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:19:04.371690 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 28 00:19:04.411335 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Apr 28 00:19:04.431975 kernel: audit: type=1131 audit(1777335544.320:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.423656 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 00:19:04.444510 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 00:19:04.475738 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 00:19:04.489690 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 00:19:04.504374 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 00:19:04.513965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 00:19:04.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.542223 kernel: audit: type=1131 audit(1777335544.520:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.514363 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 00:19:04.521055 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:19:04.555160 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:19:04.560314 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 00:19:04.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.603452 kernel: audit: type=1131 audit(1777335544.580:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.568591 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:19:04.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.579577 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 00:19:04.580367 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 00:19:04.581369 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 00:19:04.581513 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 00:19:04.609882 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 00:19:04.616023 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 00:19:04.618568 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:19:04.637704 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 00:19:04.656555 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 00:19:04.668322 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 00:19:04.668729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:19:04.677019 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 00:19:04.677142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:19:04.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.693076 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Apr 28 00:19:04.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.693449 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 28 00:19:04.718399 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 00:19:04.718717 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:19:04.741625 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 00:19:04.741984 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 00:19:04.749126 systemd[1]: ignition-files.service: Consumed 2.381s CPU time.
Apr 28 00:19:04.756536 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 00:19:04.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.787225 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 00:19:04.791863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 00:19:04.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.792051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:19:04.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.807667 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 00:19:04.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.813991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:19:04.825396 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 00:19:04.825665 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 00:19:04.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:04.849722 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 00:19:04.850031 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 00:19:04.903149 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 00:19:04.908556 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 00:19:04.908714 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 00:19:04.971364 ignition[1206]: INFO : Ignition 2.24.0
Apr 28 00:19:04.971364 ignition[1206]: INFO : Stage: umount
Apr 28 00:19:04.979045 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:19:04.979045 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:19:04.979045 ignition[1206]: INFO : umount: umount passed
Apr 28 00:19:04.979045 ignition[1206]: INFO : Ignition finished successfully
Apr 28 00:19:05.006114 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 00:19:05.008017 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 00:19:05.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.023992 systemd[1]: Stopped target network.target - Network.
Apr 28 00:19:05.036545 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 00:19:05.037160 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 00:19:05.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.047490 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 00:19:05.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.049729 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 00:19:05.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.060987 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 00:19:05.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.061165 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 00:19:05.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.070095 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 00:19:05.070192 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 00:19:05.076929 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 00:19:05.076967 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 00:19:05.088868 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 00:19:05.094979 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 00:19:05.123372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 00:19:05.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.123587 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 00:19:05.157017 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 00:19:05.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.157471 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 00:19:05.170000 audit: BPF prog-id=5 op=UNLOAD
Apr 28 00:19:05.171733 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 28 00:19:05.187160 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 00:19:05.187431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:19:05.199429 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 00:19:05.209911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 00:19:05.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.211103 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 00:19:05.232000 audit: BPF prog-id=8 op=UNLOAD
Apr 28 00:19:05.219397 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 00:19:05.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.219470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:19:05.233724 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 00:19:05.233862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:19:05.248163 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:19:05.288397 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 00:19:05.305335 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:19:05.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.320871 systemd[1]: systemd-udevd.service: Consumed 2.508s CPU time.
Apr 28 00:19:05.321624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 00:19:05.321662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:19:05.329371 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 00:19:05.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.329409 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 00:19:05.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.347574 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 00:19:05.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.347664 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 00:19:05.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.356923 systemd[1]: dracut-cmdline.service: Consumed 1.084s CPU time.
Apr 28 00:19:05.357163 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 00:19:05.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.357362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:19:05.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.369557 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 00:19:05.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.369621 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 28 00:19:05.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.369651 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 28 00:19:05.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.379189 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 00:19:05.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:05.379222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:19:05.401993 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 28 00:19:05.402187 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:19:05.412594 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 00:19:05.413564 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:19:05.419423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:19:05.419530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:19:05.430858 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 00:19:05.431073 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 00:19:05.452506 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 00:19:05.452872 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 00:19:05.467507 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 00:19:05.498510 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 00:19:05.546979 systemd[1]: Switching root.
Apr 28 00:19:05.616456 systemd-journald[318]: Received SIGTERM from PID 1 (systemd).
Apr 28 00:19:05.620563 systemd-journald[318]: Journal stopped
Apr 28 00:19:10.900091 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 00:19:10.901491 kernel: SELinux: policy capability open_perms=1
Apr 28 00:19:10.901532 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 00:19:10.901632 kernel: SELinux: policy capability always_check_network=0
Apr 28 00:19:10.901683 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 00:19:10.901693 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 00:19:10.901824 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 00:19:10.901836 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 00:19:10.901891 kernel: SELinux: policy capability userspace_initial_context=0
Apr 28 00:19:10.901952 systemd[1]: Successfully loaded SELinux policy in 141.384ms.
Apr 28 00:19:10.901975 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.346ms.
Apr 28 00:19:10.901991 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 28 00:19:10.902005 systemd[1]: Detected virtualization kvm.
Apr 28 00:19:10.902014 systemd[1]: Detected architecture x86-64.
Apr 28 00:19:10.902023 systemd[1]: Detected first boot.
Apr 28 00:19:10.902033 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 28 00:19:10.902042 kernel: kauditd_printk_skb: 36 callbacks suppressed
Apr 28 00:19:10.902100 kernel: audit: type=1334 audit(1777335546.966:86): prog-id=9 op=LOAD
Apr 28 00:19:10.902114 kernel: audit: type=1334 audit(1777335546.967:87): prog-id=9 op=UNLOAD
Apr 28 00:19:10.902122 zram_generator::config[1253]: No configuration found.
Apr 28 00:19:10.902133 kernel: Guest personality initialized and is inactive
Apr 28 00:19:10.902182 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 28 00:19:10.902228 kernel: Initialized host personality
Apr 28 00:19:10.902278 kernel: NET: Registered PF_VSOCK protocol family
Apr 28 00:19:10.902288 systemd-ssh-generator[1249]: Failed to query local AF_VSOCK CID: Cannot assign requested address
Apr 28 00:19:10.902349 (sd-exec-strv)[1234]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1.
Apr 28 00:19:10.902361 systemd[1]: Applying preset policy.
Apr 28 00:19:10.902415 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'.
Apr 28 00:19:10.902426 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'.
Apr 28 00:19:10.902477 systemd[1]: Populated /etc with preset unit settings.
Apr 28 00:19:10.902487 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 28 00:19:10.902497 kernel: audit: type=1334 audit(1777335548.782:88): prog-id=10 op=LOAD
Apr 28 00:19:10.902505 kernel: audit: type=1334 audit(1777335548.782:89): prog-id=2 op=UNLOAD
Apr 28 00:19:10.902515 kernel: audit: type=1334 audit(1777335548.782:90): prog-id=11 op=LOAD
Apr 28 00:19:10.902523 kernel: audit: type=1334 audit(1777335548.782:91): prog-id=12 op=LOAD
Apr 28 00:19:10.902531 kernel: audit: type=1334 audit(1777335548.782:92): prog-id=3 op=UNLOAD
Apr 28 00:19:10.902580 kernel: audit: type=1334 audit(1777335548.782:93): prog-id=4 op=UNLOAD
Apr 28 00:19:10.902627 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 00:19:10.902673 kernel: audit: type=1334 audit(1777335548.783:94): prog-id=13 op=LOAD
Apr 28 00:19:10.902683 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 00:19:10.902692 kernel: audit: type=1334 audit(1777335548.783:95): prog-id=10 op=UNLOAD
Apr 28 00:19:10.902701 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:19:10.902830 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 00:19:10.902841 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 00:19:10.902849 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 00:19:10.902858 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 00:19:10.902868 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 00:19:10.902877 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 00:19:10.902966 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 00:19:10.902976 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 00:19:10.902985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:19:10.902994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:19:10.903003 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 00:19:10.903012 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 00:19:10.903022 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 00:19:10.903072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:19:10.903081 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 00:19:10.903128 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:19:10.903138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:19:10.903147 systemd[1]: Reached target imports.target - Image Downloads.
Apr 28 00:19:10.903156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 00:19:10.903165 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 28 00:19:10.903174 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 28 00:19:10.903223 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 00:19:10.903233 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:19:10.903242 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 00:19:10.903251 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes.
Apr 28 00:19:10.903260 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 28 00:19:10.903344 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 00:19:10.903393 systemd[1]: Reached target swap.target - Swaps.
Apr 28 00:19:10.905732 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 00:19:10.906124 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 28 00:19:10.906134 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 00:19:10.906144 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 28 00:19:10.906153 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management.
Apr 28 00:19:10.906162 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 28 00:19:10.906171 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 28 00:19:10.906262 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket.
Apr 28 00:19:10.906272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:19:10.906281 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 28 00:19:10.906290 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 28 00:19:10.906343 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket.
Apr 28 00:19:10.906353 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket.
Apr 28 00:19:10.906363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:19:10.906414 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket.
Apr 28 00:19:10.906424 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 00:19:10.906434 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 00:19:10.906442 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 00:19:10.906489 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 00:19:10.906500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:10.906593 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 00:19:10.906603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 00:19:10.906612 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing.
Apr 28 00:19:10.906621 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 00:19:10.906630 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 00:19:10.906639 systemd[1]: Reached target machines.target - Virtual Machines and Containers.
Apr 28 00:19:10.906648 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 00:19:10.907652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 00:19:10.908539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 00:19:10.908710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 00:19:10.908721 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod).
Apr 28 00:19:10.908887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 00:19:10.908897 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore).
Apr 28 00:19:10.908907 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 00:19:10.908916 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop).
Apr 28 00:19:10.908963 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 00:19:10.908974 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 28 00:19:10.909025 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 28 00:19:10.909591 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 28 00:19:10.909679 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 28 00:19:10.909891 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 28 00:19:10.909988 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 00:19:10.909998 kernel: ACPI: bus type drm_connector registered
Apr 28 00:19:10.910008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 00:19:10.910017 kernel: fuse: init (API version 7.41)
Apr 28 00:19:10.910025 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 28 00:19:10.910035 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 00:19:10.910044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 00:19:10.910095 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 28 00:19:10.910105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 00:19:10.910154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:19:10.910164 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 00:19:10.910250 systemd-journald[1329]: Collecting audit messages is enabled.
Apr 28 00:19:10.910389 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 00:19:10.910437 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 00:19:10.910449 systemd-journald[1329]: Journal started
Apr 28 00:19:10.910467 systemd-journald[1329]: Runtime Journal (/run/log/journal/9afbd9333af146a5b0b285af80f77485) is 6M, max 48M, 42M free.
Apr 28 00:19:09.587000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 28 00:19:10.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:10.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:10.560000 audit: BPF prog-id=18 op=UNLOAD
Apr 28 00:19:10.560000 audit: BPF prog-id=17 op=UNLOAD
Apr 28 00:19:10.561000 audit: BPF prog-id=19 op=LOAD
Apr 28 00:19:10.561000 audit: BPF prog-id=20 op=LOAD
Apr 28 00:19:10.561000 audit: BPF prog-id=21 op=LOAD
Apr 28 00:19:10.833000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 28 00:19:10.833000 audit[1329]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd61848020 a2=4000 a3=0 items=0 ppid=1 pid=1329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:19:10.833000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 28 00:19:08.762180 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 00:19:08.784891 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 00:19:08.788018 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 28 00:19:08.790084 systemd[1]: systemd-journald.service: Consumed 3.223s CPU time.
Apr 28 00:19:10.939394 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 00:19:10.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:10.954187 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 00:19:10.968118 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 00:19:10.981258 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 00:19:10.989843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 00:19:10.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.003429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:19:11.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.011463 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 00:19:11.011741 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 00:19:11.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.018734 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 00:19:11.019115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 00:19:11.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.025684 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 00:19:11.026618 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 00:19:11.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.040195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:19:11.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.052638 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 28 00:19:11.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.089205 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 00:19:11.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.114995 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 28 00:19:11.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.243527 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 00:19:11.257216 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 28 00:19:11.291663 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 00:19:11.306513 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 00:19:11.316260 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 00:19:11.317641 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 00:19:11.328418 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 28 00:19:11.337912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 00:19:11.343565 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/...
Apr 28 00:19:11.359956 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 00:19:11.383946 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 00:19:11.393874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 00:19:11.402177 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 00:19:11.432277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 00:19:11.537102 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 00:19:11.560118 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials...
Apr 28 00:19:11.575737 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 00:19:11.593203 systemd-journald[1329]: Time spent on flushing to /var/log/journal/9afbd9333af146a5b0b285af80f77485 is 163.619ms for 1303 entries.
Apr 28 00:19:11.593203 systemd-journald[1329]: System Journal (/var/log/journal/9afbd9333af146a5b0b285af80f77485) is 8M, max 163.5M, 155.5M free.
Apr 28 00:19:11.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.586002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 00:19:11.792249 systemd-journald[1329]: Received client request to flush runtime journal.
Apr 28 00:19:11.605684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:19:11.631714 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 00:19:11.702129 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 00:19:11.723703 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 28 00:19:11.805463 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 00:19:11.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.855682 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials.
Apr 28 00:19:11.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.931589 kernel: loop4: detected capacity change from 0 to 43472
Apr 28 00:19:11.944649 kernel: loop4: p1 p2 p3
Apr 28 00:19:11.949963 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Apr 28 00:19:11.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.950027 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Apr 28 00:19:11.950198 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:19:11.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:11.973998 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:19:12.006737 kernel: kauditd_printk_skb: 43 callbacks suppressed
Apr 28 00:19:12.016570 kernel: audit: type=1130 audit(1777335551.983:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.097196 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 00:19:12.114206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 00:19:12.115276 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 28 00:19:12.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.166897 kernel: audit: type=1130 audit(1777335552.124:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.235290 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:12.242627 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 28 00:19:12.242690 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 28 00:19:12.248029 kernel: device-mapper: ioctl: error adding target to table
Apr 28 00:19:12.248386 systemd-confext[1371]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 28 00:19:12.257298 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:12.399928 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 00:19:12.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.413412 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Apr 28 00:19:12.424908 kernel: audit: type=1130 audit(1777335552.408:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.410000 audit: BPF prog-id=22 op=LOAD
Apr 28 00:19:12.437951 kernel: audit: type=1334 audit(1777335552.410:140): prog-id=22 op=LOAD
Apr 28 00:19:12.410000 audit: BPF prog-id=23 op=LOAD
Apr 28 00:19:12.440273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 00:19:12.410000 audit: BPF prog-id=24 op=LOAD
Apr 28 00:19:12.438000 audit: BPF prog-id=25 op=LOAD
Apr 28 00:19:12.449874 kernel: audit: type=1334 audit(1777335552.410:141): prog-id=23 op=LOAD
Apr 28 00:19:12.449906 kernel: audit: type=1334 audit(1777335552.410:142): prog-id=24 op=LOAD
Apr 28 00:19:12.449924 kernel: audit: type=1334 audit(1777335552.438:143): prog-id=25 op=LOAD
Apr 28 00:19:12.456000 audit: BPF prog-id=26 op=LOAD
Apr 28 00:19:12.464109 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 00:19:12.469217 kernel: audit: type=1334 audit(1777335552.456:144): prog-id=26 op=LOAD
Apr 28 00:19:12.480301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 00:19:12.491287 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun...
Apr 28 00:19:12.507000 audit: BPF prog-id=27 op=LOAD
Apr 28 00:19:12.511089 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 00:19:12.507000 audit: BPF prog-id=28 op=LOAD
Apr 28 00:19:12.522709 kernel: audit: type=1334 audit(1777335552.507:145): prog-id=27 op=LOAD
Apr 28 00:19:12.507000 audit: BPF prog-id=29 op=LOAD
Apr 28 00:19:12.522972 kernel: audit: type=1334 audit(1777335552.507:146): prog-id=28 op=LOAD
Apr 28 00:19:12.535233 kernel: tun: Universal TUN/TAP device driver, 1.6
Apr 28 00:19:12.536450 systemd[1]: modprobe@tun.service: Deactivated successfully.
Apr 28 00:19:12.540433 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun.
Apr 28 00:19:12.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.555000 audit: BPF prog-id=30 op=LOAD
Apr 28 00:19:12.555000 audit: BPF prog-id=31 op=LOAD
Apr 28 00:19:12.555000 audit: BPF prog-id=32 op=LOAD
Apr 28 00:19:12.565542 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Apr 28 00:19:12.578001 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Apr 28 00:19:12.578021 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Apr 28 00:19:12.589884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:19:12.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:12.886995 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 00:19:12.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:13.131204 systemd-nsresourced[1402]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Apr 28 00:19:13.189987 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Apr 28 00:19:13.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:13.739662 systemd-oomd[1394]: No swap; memory pressure usage will be degraded
Apr 28 00:19:13.750139 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Apr 28 00:19:13.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:13.767609 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 00:19:13.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:13.775526 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 00:19:13.784635 systemd-resolved[1395]: Positive Trust Anchors:
Apr 28 00:19:13.784689 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 00:19:13.784692 systemd-resolved[1395]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 28 00:19:13.784721 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 00:19:13.890428 systemd-resolved[1395]: Defaulting to hostname 'linux'.
Apr 28 00:19:14.181552 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 00:19:14.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:14.193194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:19:18.771726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 00:19:18.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:18.790524 kernel: kauditd_printk_skb: 12 callbacks suppressed
Apr 28 00:19:18.792540 kernel: audit: type=1130 audit(1777335558.784:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:18.793077 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:19:18.787000 audit: BPF prog-id=7 op=UNLOAD
Apr 28 00:19:18.787000 audit: BPF prog-id=6 op=UNLOAD
Apr 28 00:19:18.816106 kernel: audit: type=1334 audit(1777335558.787:160): prog-id=7 op=UNLOAD
Apr 28 00:19:18.816275 kernel: audit: type=1334 audit(1777335558.787:161): prog-id=6 op=UNLOAD
Apr 28 00:19:18.788000 audit: BPF prog-id=33 op=LOAD
Apr 28 00:19:18.788000 audit: BPF prog-id=34 op=LOAD
Apr 28 00:19:18.845978 kernel: audit: type=1334 audit(1777335558.788:162): prog-id=33 op=LOAD
Apr 28 00:19:18.846155 kernel: audit: type=1334 audit(1777335558.788:163): prog-id=34 op=LOAD
Apr 28 00:19:19.257922 systemd-udevd[1424]: Using default interface naming scheme 'v258'.
Apr 28 00:19:20.815005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:19:20.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:20.882514 kernel: audit: type=1130 audit(1777335560.828:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:20.884000 audit: BPF prog-id=35 op=LOAD
Apr 28 00:19:20.888613 kernel: audit: type=1334 audit(1777335560.884:165): prog-id=35 op=LOAD
Apr 28 00:19:20.898686 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 00:19:21.186931 systemd-networkd[1426]: lo: Link UP
Apr 28 00:19:21.187503 systemd-networkd[1426]: lo: Gained carrier
Apr 28 00:19:21.189460 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 00:19:21.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:21.214000 kernel: audit: type=1130 audit(1777335561.198:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:21.238520 systemd[1]: Reached target network.target - Network.
Apr 28 00:19:21.248229 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 28 00:19:21.263731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 00:19:21.277182 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 28 00:19:21.354004 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 28 00:19:21.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:21.376975 kernel: audit: type=1130 audit(1777335561.360:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:21.392619 systemd-networkd[1426]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 28 00:19:21.392629 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 00:19:21.395540 systemd-networkd[1426]: eth0: Link UP
Apr 28 00:19:21.396965 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 00:19:21.397589 systemd-networkd[1426]: eth0: Gained carrier
Apr 28 00:19:21.397956 systemd-networkd[1426]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 28 00:19:21.415248 systemd-networkd[1426]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 00:19:21.420654 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection.
Apr 28 00:19:22.093977 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 00:19:22.094077 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2026-04-28 00:19:22.093620 UTC.
Apr 28 00:19:22.095173 systemd-resolved[1395]: Clock change detected. Flushing caches.
Apr 28 00:19:22.116104 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 28 00:19:22.132629 kernel: ACPI: button: Power Button [PWRF]
Apr 28 00:19:22.301213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 00:19:22.323366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 00:19:22.377244 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 28 00:19:22.380101 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 00:19:22.388304 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 00:19:22.595000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 00:19:22.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:22.622006 kernel: audit: type=1130 audit(1777335562.604:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:22.699586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:19:22.820943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:19:22.822697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:19:22.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:22.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:23.023191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:19:23.364200 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 28 00:19:23.480175 kernel: loop4: detected capacity change from 0 to 43472
Apr 28 00:19:23.485927 kernel: loop4: p1 p2 p3
Apr 28 00:19:23.707647 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:23.709640 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 28 00:19:23.722468 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 28 00:19:23.723763 kernel: device-mapper: ioctl: error adding target to table
Apr 28 00:19:23.724791 (sd-merge)[1490]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 28 00:19:23.746768 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:23.780522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:19:23.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:23.803822 systemd-networkd[1426]: eth0: Gained IPv6LL
Apr 28 00:19:23.827344 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 28 00:19:23.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:23.837135 systemd[1]: Reached target network-online.target - Network is Online.
Apr 28 00:19:24.036455 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 28 00:19:24.044238 (sd-merge)[1490]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 28 00:19:24.080832 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 28 00:19:24.111674 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/.
Apr 28 00:19:24.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:24.130374 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 00:19:24.331225 kernel: loop4: detected capacity change from 0 to 219192
Apr 28 00:19:24.574800 kernel: loop4: detected capacity change from 0 to 178200
Apr 28 00:19:24.597805 kernel: loop4: p1 p2 p3
Apr 28 00:19:24.671352 kernel: loop4: p1 p2 p3
Apr 28 00:19:24.814043 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:24.839610 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 28 00:19:24.842791 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 28 00:19:24.842821 kernel: device-mapper: ioctl: error adding target to table
Apr 28 00:19:24.843672 systemd-sysext[1501]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 28 00:19:24.874307 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:25.380001 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 28 00:19:25.458169 kernel: loop4: detected capacity change from 0 to 378016
Apr 28 00:19:25.469450 kernel: loop4: p1 p2 p3
Apr 28 00:19:25.487372 kernel: loop4: p1 p2 p3
Apr 28 00:19:25.569180 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:25.591009 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 28 00:19:25.593162 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 28 00:19:25.593189 kernel: device-mapper: ioctl: error adding target to table
Apr 28 00:19:25.596778 systemd-sysext[1501]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 28 00:19:25.648430 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:25.878096 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 28 00:19:26.061174 kernel: loop4: detected capacity change from 0 to 219192
Apr 28 00:19:26.127932 kernel: loop5: detected capacity change from 0 to 178200
Apr 28 00:19:26.134133 kernel: loop5: p1 p2 p3
Apr 28 00:19:26.141126 kernel: loop5: p1 p2 p3
Apr 28 00:19:26.227415 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:26.277565 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 28 00:19:26.279629 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 28 00:19:26.279774 kernel: device-mapper: ioctl: error adding target to table
Apr 28 00:19:26.279676 (sd-merge)[1521]: device-mapper: reload ioctl on loop5p1-verity (253:4) failed: Invalid argument
Apr 28 00:19:26.287064 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:26.440693 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 28 00:19:26.458996 kernel: loop6: detected capacity change from 0 to 378016
Apr 28 00:19:26.466045 kernel: loop6: p1 p2 p3
Apr 28 00:19:26.480478 kernel: loop6: p1 p2 p3
Apr 28 00:19:26.626708 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:26.627684 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 28 00:19:26.627743 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL)
Apr 28 00:19:26.633024 kernel: device-mapper: ioctl: error adding target to table
Apr 28 00:19:26.637293 (sd-merge)[1521]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument
Apr 28 00:19:26.645156 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 28 00:19:26.793992 kernel: erofs: (device dm-5): mounted with root inode @ nid 39.
Apr 28 00:19:26.821446 (sd-merge)[1521]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 28 00:19:26.831151 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 00:19:26.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:26.843288 kernel: kauditd_printk_skb: 5 callbacks suppressed
Apr 28 00:19:26.843320 kernel: audit: type=1130 audit(1777335566.838:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:26.851289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 00:19:26.871241 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 28 00:19:27.068313 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 28 00:19:27.070794 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 28 00:19:27.071360 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 00:19:27.097164 systemd-tmpfiles[1538]: ACLs are not supported, ignoring.
Apr 28 00:19:27.097406 systemd-tmpfiles[1538]: ACLs are not supported, ignoring.
Apr 28 00:19:27.186551 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 00:19:27.186608 systemd-tmpfiles[1538]: Skipping /boot
Apr 28 00:19:27.249040 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 00:19:27.250818 systemd-tmpfiles[1538]: Skipping /boot
Apr 28 00:19:27.527404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:19:27.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:27.565319 kernel: audit: type=1130 audit(1777335567.546:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:27.671312 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 28 00:19:27.697278 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 00:19:27.782987 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 00:19:27.830592 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 00:19:27.855184 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 00:19:28.033000 audit[1554]: AUDIT1127 pid=1554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:28.055497 kernel: audit: type=1127 audit(1777335568.033:176): pid=1554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:28.096326 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 00:19:28.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:28.202120 kernel: audit: type=1130 audit(1777335568.172:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:28.244651 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 00:19:28.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:28.375633 kernel: audit: type=1130 audit(1777335568.272:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:19:28.465000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 28 00:19:28.471368 augenrules[1570]: No rules
Apr 28 00:19:28.471811 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 00:19:28.480988 kernel: audit: type=1305 audit(1777335568.465:179): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 28 00:19:28.465000 audit[1570]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd9914170 a2=420 a3=0 items=0 ppid=1544 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:19:28.483257 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 28 00:19:28.484208 kernel: audit: type=1300 audit(1777335568.465:179): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd9914170 a2=420 a3=0 items=0 ppid=1544 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:19:28.465000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 28 00:19:28.598103 kernel: audit: type=1327 audit(1777335568.465:179): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 28 00:19:28.623452 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 00:19:28.661654 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 00:19:33.785707 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 00:19:33.809688 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 00:19:33.827966 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 00:19:33.932325 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 00:19:33.947809 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 00:19:33.954668 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 00:19:33.993078 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 00:19:34.004738 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 28 00:19:34.011273 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 00:19:34.016395 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 00:19:34.022117 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 28 00:19:34.028036 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 28 00:19:34.034605 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 00:19:34.040468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 00:19:34.040715 systemd[1]: Reached target paths.target - Path Units.
Apr 28 00:19:34.048487 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 00:19:34.056784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 00:19:34.066206 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 00:19:34.086280 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 28 00:19:34.116134 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 00:19:34.122326 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 28 00:19:34.132466 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket.
Apr 28 00:19:34.150348 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket.
Apr 28 00:19:34.188252 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 00:19:34.210117 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 00:19:34.214797 systemd[1]: Reached target basic.target - Basic System.
Apr 28 00:19:34.222263 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 00:19:34.222458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 00:19:34.227758 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 00:19:34.248405 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 28 00:19:34.257173 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 00:19:34.266174 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 00:19:34.275285 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 00:19:34.291684 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 00:19:34.297677 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 00:19:34.299985 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 28 00:19:34.307329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:19:34.323092 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 00:19:34.330456 extend-filesystems[1587]: Found /dev/vda6
Apr 28 00:19:34.338990 jq[1586]: false
Apr 28 00:19:34.334151 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 28 00:19:34.343829 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing passwd entry cache
Apr 28 00:19:34.332465 oslogin_cache_refresh[1588]: Refreshing passwd entry cache
Apr 28 00:19:34.352463 extend-filesystems[1587]: Found /dev/vda9
Apr 28 00:19:34.361017 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting users, quitting
Apr 28 00:19:34.361017 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 28 00:19:34.361017 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing group entry cache
Apr 28 00:19:34.352405 oslogin_cache_refresh[1588]: Failure getting users, quitting
Apr 28 00:19:34.353111 oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 28 00:19:34.353225 oslogin_cache_refresh[1588]: Refreshing group entry cache
Apr 28 00:19:34.366730 extend-filesystems[1587]: Checking size of /dev/vda9
Apr 28 00:19:34.372790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 00:19:34.379982 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting groups, quitting
Apr 28 00:19:34.379982 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 28 00:19:34.379808 oslogin_cache_refresh[1588]: Failure getting groups, quitting
Apr 28 00:19:34.379823 oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 28 00:19:34.387986 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 00:19:34.401965 extend-filesystems[1587]: Resized partition /dev/vda9
Apr 28 00:19:34.408392 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 00:19:34.411760 extend-filesystems[1614]: resize2fs 1.47.3 (8-Jul-2025)
Apr 28 00:19:34.425003 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 28 00:19:34.425142 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 00:19:34.430498 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 00:19:34.448243 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 00:19:34.463058 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 00:19:34.478449 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 28 00:19:34.576022 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 28 00:19:34.576022 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 28 00:19:34.576022 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 28 00:19:34.613646 extend-filesystems[1587]: Resized filesystem in /dev/vda9
Apr 28 00:19:34.585340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 00:19:34.596311 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 00:19:34.596780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 00:19:34.597398 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 00:19:34.600229 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 00:19:34.615715 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 28 00:19:34.616241 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 28 00:19:34.620483 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 00:19:34.622987 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 00:19:34.644199 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 28 00:19:34.667747 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 00:19:34.668127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 00:19:34.675279 jq[1621]: true
Apr 28 00:19:34.725772 update_engine[1618]: I20260428 00:19:34.725579 1618 main.cc:92] Flatcar Update Engine starting
Apr 28 00:19:34.735035 systemd-logind[1616]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 28 00:19:34.739648 systemd-logind[1616]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 00:19:34.743356 systemd-logind[1616]: New seat seat0.
Apr 28 00:19:34.752436 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 00:19:34.801119 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 28 00:19:34.801493 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 28 00:19:34.807439 jq[1642]: true
Apr 28 00:19:34.893456 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 28 00:19:35.155664 tar[1641]: linux-amd64/LICENSE
Apr 28 00:19:35.156990 tar[1641]: linux-amd64/helm
Apr 28 00:19:35.210620 dbus-daemon[1584]: [system] SELinux support is enabled
Apr 28 00:19:35.217079 bash[1689]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 00:19:35.217528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 00:19:35.227281 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 00:19:35.259446 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 28 00:19:35.276019 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 28 00:19:35.281647 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 00:19:35.282816 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 00:19:35.297967 update_engine[1618]: I20260428 00:19:35.297163 1618 update_check_scheduler.cc:74] Next update check in 9m57s
Apr 28 00:19:35.298244 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 00:19:35.298379 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 00:19:35.306195 sshd_keygen[1635]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 00:19:35.310206 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 00:19:35.483242 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 00:19:35.591653 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 00:19:35.778927 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 28 00:19:35.974108 systemd[1]: issuegen.service: Deactivated successfully.
Apr 28 00:19:35.975611 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 28 00:19:35.989163 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 28 00:19:36.290985 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 28 00:19:36.373736 locksmithd[1695]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 00:19:36.428789 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 28 00:19:36.585241 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 28 00:19:36.605666 systemd[1]: Reached target getty.target - Login Prompts.
Apr 28 00:19:38.322619 containerd[1643]: time="2026-04-28T00:19:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 28 00:19:38.339103 containerd[1643]: time="2026-04-28T00:19:38.338426292Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1
Apr 28 00:19:38.617770 containerd[1643]: time="2026-04-28T00:19:38.616107075Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="561.039µs"
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.618061982Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.618428185Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.618446563Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.618983249Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.619002291Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.619012126Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 28 00:19:38.619247 containerd[1643]: time="2026-04-28T00:19:38.619245535Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 28 00:19:38.620497 containerd[1643]: time="2026-04-28T00:19:38.619262994Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 28 00:19:38.620497 containerd[1643]: time="2026-04-28T00:19:38.619804144Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 28 00:19:38.620497 containerd[1643]: time="2026-04-28T00:19:38.619818723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 28 00:19:38.620497 containerd[1643]: time="2026-04-28T00:19:38.619966631Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 28 00:19:38.620497 containerd[1643]: time="2026-04-28T00:19:38.619975363Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 28 00:19:38.620789 containerd[1643]: time="2026-04-28T00:19:38.620717830Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 28 00:19:38.621624 containerd[1643]: time="2026-04-28T00:19:38.621449510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 28 00:19:38.622103 containerd[1643]: time="2026-04-28T00:19:38.621826388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 28 00:19:38.622103 containerd[1643]: time="2026-04-28T00:19:38.621977415Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 28 00:19:38.622103 containerd[1643]: time="2026-04-28T00:19:38.621990991Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 28 00:19:38.625654 containerd[1643]: time="2026-04-28T00:19:38.625626630Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 28 00:19:38.635069 containerd[1643]: time="2026-04-28T00:19:38.634261138Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 28 00:19:38.653432 containerd[1643]: time="2026-04-28T00:19:38.652498885Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 00:19:38.681256 containerd[1643]: time="2026-04-28T00:19:38.679832940Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 28 00:19:38.681256 containerd[1643]: time="2026-04-28T00:19:38.680761903Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 28 00:19:38.683043 containerd[1643]: time="2026-04-28T00:19:38.683015366Z" level=info msg="built-in NRI default validator is disabled"
Apr 28 00:19:38.683138 containerd[1643]: time="2026-04-28T00:19:38.683123217Z" level=info msg="runtime interface created"
Apr 28 00:19:38.683182 containerd[1643]: time="2026-04-28T00:19:38.683172743Z" level=info msg="created NRI interface"
Apr 28 00:19:38.683238 containerd[1643]: time="2026-04-28T00:19:38.683224199Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 28 00:19:38.683515 containerd[1643]: time="2026-04-28T00:19:38.683489656Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 28 00:19:38.683646 containerd[1643]: time="2026-04-28T00:19:38.683555942Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 28 00:19:38.683699 containerd[1643]: time="2026-04-28T00:19:38.683687724Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 28 00:19:38.684027 containerd[1643]: time="2026-04-28T00:19:38.684006159Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1
Apr 28 00:19:38.684719 containerd[1643]: time="2026-04-28T00:19:38.684690667Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 28 00:19:38.685024 containerd[1643]: time="2026-04-28T00:19:38.685000627Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 28 00:19:38.685120 containerd[1643]: time="2026-04-28T00:19:38.685103614Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 28 00:19:38.685193 containerd[1643]: time="2026-04-28T00:19:38.685179232Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 28 00:19:38.685249 containerd[1643]: time="2026-04-28T00:19:38.685237388Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 28 00:19:38.685301 containerd[1643]: time="2026-04-28T00:19:38.685289554Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 28 00:19:38.685435 containerd[1643]: time="2026-04-28T00:19:38.685422590Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 28 00:19:38.692770 containerd[1643]: time="2026-04-28T00:19:38.692417160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 28 00:19:38.693937 containerd[1643]: time="2026-04-28T00:19:38.693298259Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 28 00:19:38.700165 containerd[1643]: time="2026-04-28T00:19:38.698798165Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 28 00:19:38.700165 containerd[1643]: time="2026-04-28T00:19:38.699500564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 28 00:19:38.700165 containerd[1643]: time="2026-04-28T00:19:38.699539779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 28 00:19:38.700165 containerd[1643]: time="2026-04-28T00:19:38.699642563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 28 00:19:38.700165 containerd[1643]: time="2026-04-28T00:19:38.699654995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 28 00:19:38.700165 containerd[1643]: time="2026-04-28T00:19:38.699712629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 28 00:19:38.710795 containerd[1643]: time="2026-04-28T00:19:38.699829915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 28 00:19:38.710795 containerd[1643]: time="2026-04-28T00:19:38.707733031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 28 00:19:38.710795 containerd[1643]: time="2026-04-28T00:19:38.710304766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1
Apr 28 00:19:38.710795 containerd[1643]: time="2026-04-28T00:19:38.710461341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 28 00:19:38.710795 containerd[1643]: time="2026-04-28T00:19:38.710626067Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 28 00:19:38.710795 containerd[1643]: time="2026-04-28T00:19:38.710638280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 28 00:19:38.714306 containerd[1643]: time="2026-04-28T00:19:38.711614261Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 28 00:19:38.717370 containerd[1643]: time="2026-04-28T00:19:38.716987998Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 28 00:19:38.717748 containerd[1643]: time="2026-04-28T00:19:38.717625863Z" level=info msg="Start snapshots syncer"
Apr 28 00:19:38.722650 containerd[1643]: time="2026-04-28T00:19:38.722063429Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 28 00:19:38.724434 containerd[1643]: time="2026-04-28T00:19:38.724198145Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 28 00:19:38.732255 containerd[1643]: time="2026-04-28T00:19:38.729349303Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 28 00:19:38.739105 containerd[1643]: time="2026-04-28T00:19:38.738179329Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 28 00:19:38.742818 tar[1641]: linux-amd64/README.md
Apr 28 00:19:38.743413 containerd[1643]: time="2026-04-28T00:19:38.742998677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 28 00:19:38.743549 containerd[1643]: time="2026-04-28T00:19:38.743449241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 28 00:19:38.743675 containerd[1643]: time="2026-04-28T00:19:38.743659467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 28 00:19:38.743709 containerd[1643]: time="2026-04-28T00:19:38.743679867Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 28 00:19:38.744069 containerd[1643]: time="2026-04-28T00:19:38.743997666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 28 00:19:38.744092 containerd[1643]: time="2026-04-28T00:19:38.744068116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 28 00:19:38.744366 containerd[1643]: time="2026-04-28T00:19:38.744288055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 28 00:19:38.745039 containerd[1643]: time="2026-04-28T00:19:38.744765720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 28 00:19:38.745039 containerd[1643]: time="2026-04-28T00:19:38.744938983Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 28 00:19:38.745418 containerd[1643]: time="2026-04-28T00:19:38.745332294Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 28 00:19:38.745616 containerd[1643]: time="2026-04-28T00:19:38.745495099Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 28 00:19:38.745616 containerd[1643]: time="2026-04-28T00:19:38.745611596Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 28 00:19:38.745674 containerd[1643]: time="2026-04-28T00:19:38.745628150Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 28 00:19:38.745674 containerd[1643]: time="2026-04-28T00:19:38.745638513Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 28 00:19:38.745674 containerd[1643]: time="2026-04-28T00:19:38.745651231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 28 00:19:38.745674 containerd[1643]: time="2026-04-28T00:19:38.745669957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 28 00:19:38.745748 containerd[1643]: time="2026-04-28T00:19:38.745730964Z" level=info msg="Connect containerd service"
Apr 28 00:19:38.745828 containerd[1643]: time="2026-04-28T00:19:38.745765348Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 28 00:19:38.796942 containerd[1643]: time="2026-04-28T00:19:38.796412231Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 00:19:38.883443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 28 00:19:41.257567 containerd[1643]: time="2026-04-28T00:19:41.256323247Z" level=info msg="Start subscribing containerd event"
Apr 28 00:19:41.257567 containerd[1643]: time="2026-04-28T00:19:41.257771555Z" level=info msg="Start recovering state"
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.262307159Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.262332122Z" level=info msg="Start event monitor"
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.262365593Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.262442411Z" level=info msg="Start cni network conf syncer for default"
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.262503170Z" level=info msg="Start streaming server"
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.262775308Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.266158090Z" level=info msg="runtime interface starting up..."
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.267824483Z" level=info msg="starting plugins..."
Apr 28 00:19:41.271969 containerd[1643]: time="2026-04-28T00:19:41.268939826Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 28 00:19:41.273193 systemd[1]: Started containerd.service - containerd container runtime.
Apr 28 00:19:41.274013 containerd[1643]: time="2026-04-28T00:19:41.273359445Z" level=info msg="containerd successfully booted in 2.958734s"
Apr 28 00:19:44.425535 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 28 00:19:44.449208 systemd[1]: Started sshd@0-1-10.0.0.20:22-10.0.0.1:60272.service - OpenSSH per-connection server daemon (10.0.0.1:60272).
Apr 28 00:19:44.744381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:19:44.751354 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 28 00:19:44.751806 systemd[1]: Startup finished in 18.646s (kernel) + 19.544s (initrd) + 38.376s (userspace) = 1min 16.567s.
Apr 28 00:19:44.814302 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:19:45.056263 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 60272 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:19:45.083258 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:45.302774 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 28 00:19:45.306389 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 28 00:19:45.701674 systemd-logind[1616]: New session '1' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:19:46.148106 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 28 00:19:46.365509 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 28 00:19:46.651126 (systemd)[1762]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:46.815206 systemd-logind[1616]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'.
Apr 28 00:19:52.362571 systemd[1762]: Queued start job for default target default.target.
Apr 28 00:19:52.411599 systemd[1762]: Created slice app.slice - User Application Slice.
Apr 28 00:19:52.411817 systemd[1762]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Apr 28 00:19:52.411833 systemd[1762]: Reached target machines.target - Virtual Machines and Containers.
Apr 28 00:19:52.449242 systemd[1762]: Reached target paths.target - Paths.
Apr 28 00:19:52.449575 systemd[1762]: Reached target timers.target - Timers.
Apr 28 00:19:52.464753 systemd[1762]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 28 00:19:52.502093 systemd[1762]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 28 00:19:52.565328 systemd[1762]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Apr 28 00:19:52.736809 systemd[1762]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 28 00:19:52.741539 systemd[1762]: Reached target sockets.target - Sockets.
Apr 28 00:19:52.938486 systemd[1762]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Apr 28 00:19:52.938719 systemd[1762]: Reached target basic.target - Basic System.
Apr 28 00:19:52.938776 systemd[1762]: Reached target default.target - Main User Target.
Apr 28 00:19:52.938799 systemd[1762]: Startup finished in 5.942s.
Apr 28 00:19:52.940019 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 28 00:19:52.967622 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 28 00:19:53.365398 kubelet[1753]: E0428 00:19:53.363390 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:19:53.599126 systemd[1]: Started sshd@1-4097-10.0.0.20:22-10.0.0.1:35086.service - OpenSSH per-connection server daemon (10.0.0.1:35086).
Apr 28 00:19:53.702401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:19:53.702629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:19:53.704025 systemd[1]: kubelet.service: Consumed 10.938s CPU time, 259M memory peak.
Apr 28 00:19:54.201582 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 35086 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:19:54.211531 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:54.432978 systemd-logind[1616]: New session '3' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:19:54.455221 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 28 00:19:54.656727 sshd[1783]: Connection closed by 10.0.0.1 port 35086
Apr 28 00:19:54.661140 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:54.836516 systemd[1]: sshd@1-4097-10.0.0.20:22-10.0.0.1:35086.service: Deactivated successfully.
Apr 28 00:19:54.992815 systemd[1]: session-3.scope: Deactivated successfully.
Apr 28 00:19:55.216186 systemd-logind[1616]: Session 3 logged out. Waiting for processes to exit.
Apr 28 00:19:55.288067 systemd[1]: Started sshd@2-8193-10.0.0.20:22-10.0.0.1:35096.service - OpenSSH per-connection server daemon (10.0.0.1:35096).
Apr 28 00:19:55.317583 systemd-logind[1616]: Removed session 3.
Apr 28 00:19:56.030952 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 35096 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:19:56.034368 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:56.184563 systemd-logind[1616]: New session '4' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:19:56.294125 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 28 00:19:56.518512 sshd[1793]: Connection closed by 10.0.0.1 port 35096
Apr 28 00:19:56.519418 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:56.695830 systemd[1]: sshd@2-8193-10.0.0.20:22-10.0.0.1:35096.service: Deactivated successfully.
Apr 28 00:19:56.874290 systemd[1]: session-4.scope: Deactivated successfully.
Apr 28 00:19:56.888197 systemd-logind[1616]: Session 4 logged out. Waiting for processes to exit.
Apr 28 00:19:56.945116 systemd[1]: Started sshd@3-8194-10.0.0.20:22-10.0.0.1:35112.service - OpenSSH per-connection server daemon (10.0.0.1:35112).
Apr 28 00:19:56.963797 systemd-logind[1616]: Removed session 4.
Apr 28 00:19:58.052121 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 35112 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:19:58.063636 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:19:58.303204 systemd-logind[1616]: New session '5' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:19:58.426265 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 28 00:19:59.441407 sshd[1803]: Connection closed by 10.0.0.1 port 35112
Apr 28 00:19:59.447580 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Apr 28 00:19:59.701613 systemd[1]: Started sshd@4-8195-10.0.0.20:22-10.0.0.1:48790.service - OpenSSH per-connection server daemon (10.0.0.1:48790).
Apr 28 00:19:59.717001 systemd[1]: sshd@3-8194-10.0.0.20:22-10.0.0.1:35112.service: Deactivated successfully.
Apr 28 00:19:59.891560 systemd[1]: session-5.scope: Deactivated successfully.
Apr 28 00:19:59.945411 systemd-logind[1616]: Session 5 logged out. Waiting for processes to exit.
Apr 28 00:20:00.060604 systemd-logind[1616]: Removed session 5.
Apr 28 00:20:01.282124 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 48790 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:20:01.304187 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:20:01.411318 systemd-logind[1616]: New session '6' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:20:01.442257 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 28 00:20:02.111816 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 28 00:20:02.112476 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:02.153276 sudo[1814]: pam_unix(sudo:session): session closed for user root
Apr 28 00:20:02.185137 sshd[1813]: Connection closed by 10.0.0.1 port 48790
Apr 28 00:20:02.196038 sshd-session[1806]: pam_unix(sshd:session): session closed for user core
Apr 28 00:20:02.233363 systemd[1]: sshd@4-8195-10.0.0.20:22-10.0.0.1:48790.service: Deactivated successfully.
Apr 28 00:20:02.253287 systemd[1]: session-6.scope: Deactivated successfully.
Apr 28 00:20:02.258399 systemd-logind[1616]: Session 6 logged out. Waiting for processes to exit.
Apr 28 00:20:02.364586 systemd[1]: Started sshd@5-12289-10.0.0.20:22-10.0.0.1:48794.service - OpenSSH per-connection server daemon (10.0.0.1:48794).
Apr 28 00:20:02.391494 systemd-logind[1616]: Removed session 6.
Apr 28 00:20:03.471266 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 48794 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:20:03.728165 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:20:03.810338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:20:03.892242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:20:03.960738 systemd-logind[1616]: New session '7' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:20:04.100165 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 00:20:04.767451 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 28 00:20:04.803065 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:04.872809 sudo[1830]: pam_unix(sudo:session): session closed for user root
Apr 28 00:20:05.293234 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 28 00:20:05.351766 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:06.033086 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 28 00:20:06.850000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Apr 28 00:20:06.855437 augenrules[1854]: No rules
Apr 28 00:20:06.850000 audit[1854]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb9420f20 a2=420 a3=0 items=0 ppid=1835 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:06.873300 kernel: audit: type=1305 audit(1777335606.850:180): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Apr 28 00:20:06.850000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 28 00:20:06.887236 kernel: audit: type=1300 audit(1777335606.850:180): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb9420f20 a2=420 a3=0 items=0 ppid=1835 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:06.888617 kernel: audit: type=1327 audit(1777335606.850:180): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 28 00:20:06.891441 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 00:20:06.892272 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 28 00:20:06.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:06.949582 sudo[1829]: pam_unix(sudo:session): session closed for user root
Apr 28 00:20:06.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:06.979815 kernel: audit: type=1130 audit(1777335606.920:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:06.984116 kernel: audit: type=1131 audit(1777335606.933:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:06.982343 sshd-session[1821]: pam_unix(sshd:session): session closed for user core
Apr 28 00:20:06.984389 sshd[1828]: Connection closed by 10.0.0.1 port 48794
Apr 28 00:20:06.948000 audit[1829]: AUDIT1106 pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:06.949000 audit[1829]: AUDIT1104 pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:07.026058 kernel: audit: type=1106 audit(1777335606.948:183): pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:07.030615 kernel: audit: type=1104 audit(1777335606.949:184): pid=1829 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:06.983000 audit[1821]: AUDIT1106 pid=1821 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:07.035314 kernel: audit: type=1106 audit(1777335606.983:185): pid=1821 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:06.983000 audit[1821]: AUDIT1104 pid=1821 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:07.092354 kernel: audit: type=1104 audit(1777335606.983:186): pid=1821 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:07.156282 systemd[1]: sshd@5-12289-10.0.0.20:22-10.0.0.1:48794.service: Deactivated successfully.
Apr 28 00:20:07.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-12289-10.0.0.20:22-10.0.0.1:48794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:07.187484 kernel: audit: type=1131 audit(1777335607.155:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-12289-10.0.0.20:22-10.0.0.1:48794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:07.285469 systemd[1]: session-7.scope: Deactivated successfully.
Apr 28 00:20:07.297754 systemd-logind[1616]: Session 7 logged out. Waiting for processes to exit.
Apr 28 00:20:07.334592 systemd[1]: Started sshd@6-12290-10.0.0.20:22-10.0.0.1:48810.service - OpenSSH per-connection server daemon (10.0.0.1:48810).
Apr 28 00:20:07.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-12290-10.0.0.20:22-10.0.0.1:48810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:07.341795 systemd-logind[1616]: Removed session 7.
Apr 28 00:20:07.948000 audit[1863]: AUDIT1101 pid=1863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:07.951364 sshd[1863]: Accepted publickey for core from 10.0.0.1 port 48810 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 00:20:07.962000 audit[1863]: AUDIT1103 pid=1863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:07.989000 audit[1863]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcec182a40 a2=3 a3=0 items=0 ppid=1 pid=1863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:07.989000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 28 00:20:08.002238 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:20:08.161270 systemd-logind[1616]: New session '8' of user 'core' with class 'user' and type 'tty'.
Apr 28 00:20:08.315307 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 28 00:20:08.358001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:20:08.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:08.495000 audit[1863]: AUDIT1105 pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:08.498760 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:20:08.518000 audit[1873]: AUDIT1103 pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 00:20:09.537000 audit[1874]: AUDIT1101 pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:09.686263 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 28 00:20:09.721000 audit[1874]: AUDIT1110 pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:09.724000 audit[1874]: AUDIT1105 pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:09.725495 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:20:12.700279 kubelet[1871]: E0428 00:20:12.699230 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:20:12.802122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:20:12.802323 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:20:12.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:20:12.837104 kernel: kauditd_printk_skb: 12 callbacks suppressed
Apr 28 00:20:12.803600 systemd[1]: kubelet.service: Consumed 5.920s CPU time, 111.9M memory peak.
Apr 28 00:20:12.837449 kernel: audit: type=1131 audit(1777335612.802:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:20:20.248829 update_engine[1618]: I20260428 00:20:20.241397 1618 update_attempter.cc:509] Updating boot flags...
Apr 28 00:20:23.100607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 28 00:20:23.848664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:20:31.343262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:20:31.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:31.543341 kernel: audit: type=1130 audit(1777335631.394:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:31.543109 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:20:33.372681 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 28 00:20:33.589637 (dockerd)[1937]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 28 00:20:34.138699 kubelet[1929]: E0428 00:20:34.136243 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:20:34.248321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:20:34.257408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:20:34.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:20:34.353482 systemd[1]: kubelet.service: Consumed 4.271s CPU time, 110.4M memory peak.
Apr 28 00:20:34.366489 kernel: audit: type=1131 audit(1777335634.343:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:20:44.483326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 28 00:20:44.622727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:20:47.957174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:20:47.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:48.062250 kernel: audit: type=1130 audit(1777335647.979:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:20:48.106807 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:20:48.697682 dockerd[1937]: time="2026-04-28T00:20:48.689530997Z" level=info msg="Starting up"
Apr 28 00:20:48.777628 dockerd[1937]: time="2026-04-28T00:20:48.775412860Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 28 00:20:49.290718 kubelet[1957]: E0428 00:20:49.289261 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:20:49.345127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:20:49.345304 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:20:49.346187 systemd[1]: kubelet.service: Consumed 2.971s CPU time, 110.5M memory peak.
Apr 28 00:20:49.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:20:49.366347 kernel: audit: type=1131 audit(1777335649.345:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:20:49.937411 dockerd[1937]: time="2026-04-28T00:20:49.935591344Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 28 00:20:51.367723 dockerd[1937]: time="2026-04-28T00:20:51.365569011Z" level=info msg="Loading containers: start."
Apr 28 00:20:51.587637 kernel: Initializing XFRM netlink socket
Apr 28 00:20:55.848000 audit[2009]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 28 00:20:55.848000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffeeda9a700 a2=0 a3=0 items=0 ppid=1937 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:55.866082 kernel: audit: type=1325 audit(1777335655.848:203): table=nat:2 family=2 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 28 00:20:55.866142 kernel: audit: type=1300 audit(1777335655.848:203): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffeeda9a700 a2=0 a3=0 items=0 ppid=1937 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:55.848000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Apr 28 00:20:55.874119 kernel: audit: type=1327 audit(1777335655.848:203): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Apr 28 00:20:55.930000 audit[2011]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 28 00:20:55.930000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd3c1b6a40 a2=0 a3=0 items=0 ppid=1937 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:55.950535 kernel: audit: type=1325 audit(1777335655.930:204): table=filter:3 family=2 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 28 00:20:55.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Apr 28 00:20:55.952097 kernel: audit: type=1300 audit(1777335655.930:204): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd3c1b6a40 a2=0 a3=0 items=0 ppid=1937 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 28 00:20:55.952122 kernel: audit: type=1327 audit(1777335655.930:204): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Apr 28 00:20:55.965000 audit[2013]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 28 00:20:55.965000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9172a50 a2=0 a3=0 items=0 ppid=1937 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:55.988679 kernel: audit: type=1325 audit(1777335655.965:205): table=filter:4 family=2 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:55.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Apr 28 00:20:55.996585 kernel: audit: type=1300 audit(1777335655.965:205): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9172a50 a2=0 a3=0 items=0 ppid=1937 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:55.996713 kernel: audit: type=1327 audit(1777335655.965:205): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Apr 28 00:20:56.057000 audit[2015]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:56.057000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc209ea50 a2=0 a3=0 items=0 ppid=1937 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:56.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Apr 28 00:20:56.072509 kernel: audit: type=1325 audit(1777335656.057:206): table=filter:5 family=2 entries=1 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:56.184000 audit[2017]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:56.184000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe913e5950 a2=0 a3=0 items=0 ppid=1937 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:56.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Apr 28 00:20:56.340000 audit[2019]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:56.340000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe078cd880 a2=0 a3=0 items=0 ppid=1937 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:56.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 28 00:20:56.530000 audit[2021]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:56.530000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd883a17e0 a2=0 a3=0 items=0 ppid=1937 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:56.530000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 28 00:20:56.726000 audit[2023]: NETFILTER_CFG table=nat:9 
family=2 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:56.726000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffccfc41dc0 a2=0 a3=0 items=0 ppid=1937 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:56.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Apr 28 00:20:57.114000 audit[2029]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:57.114000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc4e769fc0 a2=0 a3=0 items=0 ppid=1937 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:57.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Apr 28 00:20:57.308000 audit[2031]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:57.308000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffebf0e2570 a2=0 a3=0 items=0 ppid=1937 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:57.308000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Apr 28 00:20:57.478000 audit[2033]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:57.478000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd09529220 a2=0 a3=0 items=0 ppid=1937 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:57.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Apr 28 00:20:57.531000 audit[2035]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:57.531000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdb6612220 a2=0 a3=0 items=0 ppid=1937 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:20:57.531000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 28 00:20:57.872000 audit[2037]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:20:57.872000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc5649d1d0 a2=0 a3=0 items=0 ppid=1937 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Apr 28 00:20:57.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Apr 28 00:20:59.781491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 28 00:21:00.023429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:21:00.647000 audit[2070]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.647000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff03791390 a2=0 a3=0 items=0 ppid=1937 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Apr 28 00:21:00.682000 audit[2072]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.682000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe50310730 a2=0 a3=0 items=0 ppid=1937 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Apr 28 00:21:00.753000 audit[2074]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.753000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2f2f4350 a2=0 a3=0 items=0 ppid=1937 pid=2074 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Apr 28 00:21:00.772000 audit[2076]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.772000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e793300 a2=0 a3=0 items=0 ppid=1937 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Apr 28 00:21:00.850000 audit[2078]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.855372 kernel: kauditd_printk_skb: 41 callbacks suppressed Apr 28 00:21:00.855439 kernel: audit: type=1325 audit(1777335660.850:220): table=filter:19 family=10 entries=1 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.850000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffcad18dc0 a2=0 a3=0 items=0 ppid=1937 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.877809 kernel: audit: type=1300 audit(1777335660.850:220): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffcad18dc0 a2=0 a3=0 items=0 ppid=1937 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Apr 28 00:21:00.883024 kernel: audit: type=1327 audit(1777335660.850:220): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Apr 28 00:21:00.948000 audit[2080]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.948000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcad9674f0 a2=0 a3=0 items=0 ppid=1937 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:00.973621 kernel: audit: type=1325 audit(1777335660.948:221): table=filter:20 family=10 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:00.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 28 00:21:01.053493 kernel: audit: type=1300 audit(1777335660.948:221): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcad9674f0 a2=0 a3=0 items=0 ppid=1937 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.072591 kernel: audit: type=1327 audit(1777335660.948:221): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 28 00:21:01.080000 audit[2082]: NETFILTER_CFG 
table=filter:21 family=10 entries=1 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.080000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd26286f70 a2=0 a3=0 items=0 ppid=1937 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 28 00:21:01.106674 kernel: audit: type=1325 audit(1777335661.080:222): table=filter:21 family=10 entries=1 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.106971 kernel: audit: type=1300 audit(1777335661.080:222): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd26286f70 a2=0 a3=0 items=0 ppid=1937 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.107051 kernel: audit: type=1327 audit(1777335661.080:222): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 28 00:21:01.121000 audit[2084]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.121000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd77b07960 a2=0 a3=0 items=0 ppid=1937 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.121000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Apr 28 00:21:01.130001 kernel: audit: type=1325 audit(1777335661.121:223): table=nat:22 family=10 entries=2 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.147000 audit[2086]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.147000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd915a4bc0 a2=0 a3=0 items=0 ppid=1937 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Apr 28 00:21:01.181000 audit[2088]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.181000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf10fd9e0 a2=0 a3=0 items=0 ppid=1937 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Apr 28 00:21:01.207000 audit[2090]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.207000 audit[2090]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff9c41d540 a2=0 a3=0 items=0 ppid=1937 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Apr 28 00:21:01.244000 audit[2092]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.244000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd9f0065a0 a2=0 a3=0 items=0 ppid=1937 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 28 00:21:01.289000 audit[2094]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.289000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff28681ac0 a2=0 a3=0 items=0 ppid=1937 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Apr 28 00:21:01.409000 audit[2101]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2101 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:01.409000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff76c4c030 a2=0 a3=0 items=0 ppid=1937 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Apr 28 00:21:01.440000 audit[2105]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:01.440000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc58e1eff0 a2=0 a3=0 items=0 ppid=1937 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.440000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Apr 28 00:21:01.484000 audit[2107]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.484000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6ab0d1b0 a2=0 a3=0 items=0 ppid=1937 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Apr 28 00:21:01.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:21:01.487322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:21:01.516229 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:21:01.560000 audit[2111]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:21:01.560000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffefe7dad50 a2=0 a3=0 items=0 ppid=1937 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:01.560000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Apr 28 00:21:02.044000 audit[2123]: NETFILTER_CFG table=nat:32 family=2 entries=2 op=nft_register_chain pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:02.044000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff7fd672d0 a2=0 a3=0 items=0 ppid=1937 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:02.044000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Apr 28 00:21:02.174000 audit[2126]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:02.174000 audit[2126]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=288 a0=3 a1=7ffdf127fbe0 a2=0 a3=0 items=0 ppid=1937 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:02.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Apr 28 00:21:02.696000 audit[2135]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:02.696000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff55d06410 a2=0 a3=0 items=0 ppid=1937 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:02.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Apr 28 00:21:02.873338 kubelet[2108]: E0428 00:21:02.867610 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:21:02.927313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:21:02.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:21:02.927467 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:21:02.931259 systemd[1]: kubelet.service: Consumed 1.928s CPU time, 109.9M memory peak. Apr 28 00:21:03.284000 audit[2142]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:03.284000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc79978de0 a2=0 a3=0 items=0 ppid=1937 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:03.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Apr 28 00:21:03.845000 audit[2144]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:03.845000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff891606f0 a2=0 a3=0 items=0 ppid=1937 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:03.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Apr 28 00:21:04.099000 audit[2146]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:04.099000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe25ef6710 a2=0 a3=0 items=0 ppid=1937 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:04.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Apr 28 00:21:04.285000 audit[2148]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:04.285000 audit[2148]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeb7014e00 a2=0 a3=0 items=0 ppid=1937 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:04.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 28 00:21:04.405000 audit[2150]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:21:04.405000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc2b0956b0 a2=0 a3=0 items=0 ppid=1937 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:21:04.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Apr 28 00:21:04.475565 systemd-networkd[1426]: docker0: Link UP Apr 28 00:21:04.760649 dockerd[1937]: time="2026-04-28T00:21:04.757240692Z" level=info msg="Loading containers: done." 
Apr 28 00:21:05.146454 dockerd[1937]: time="2026-04-28T00:21:05.143188223Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 00:21:05.160217 dockerd[1937]: time="2026-04-28T00:21:05.149713835Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2
Apr 28 00:21:05.160217 dockerd[1937]: time="2026-04-28T00:21:05.155212155Z" level=info msg="Initializing buildkit"
Apr 28 00:21:05.161793 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3568670135-merged.mount: Deactivated successfully.
Apr 28 00:21:05.206101 dockerd[1937]: time="2026-04-28T00:21:05.202089168Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory"
Apr 28 00:21:05.210401 dockerd[1937]: time="2026-04-28T00:21:05.207303399Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory"
Apr 28 00:21:05.906415 dockerd[1937]: time="2026-04-28T00:21:05.902112871Z" level=info msg="Completed buildkit initialization"
Apr 28 00:21:06.557605 dockerd[1937]: time="2026-04-28T00:21:06.555624003Z" level=info msg="Daemon has completed initialization"
Apr 28 00:21:06.561096 dockerd[1937]: time="2026-04-28T00:21:06.558271906Z" level=info msg="API listen on /run/docker.sock"
Apr 28 00:21:06.561188 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 28 00:21:06.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:06.589283 kernel: kauditd_printk_skb: 55 callbacks suppressed
Apr 28 00:21:06.590208 kernel: audit: type=1130 audit(1777335666.561:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:13.068369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 28 00:21:13.244584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:18.705916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:18.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:18.777517 kernel: audit: type=1130 audit(1777335678.725:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:18.783310 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:20.263336 kubelet[2202]: E0428 00:21:20.262750 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:21:20.282194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:21:20.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:21:20.282361 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:21:20.288711 systemd[1]: kubelet.service: Consumed 4.038s CPU time, 111.5M memory peak.
Apr 28 00:21:20.291566 kernel: audit: type=1131 audit(1777335680.282:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:21:30.630670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 28 00:21:30.795050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:31.737647 containerd[1643]: time="2026-04-28T00:21:31.736613576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\""
Apr 28 00:21:33.234809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:33.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:33.265239 kernel: audit: type=1130 audit(1777335693.236:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:33.299504 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:34.770748 kubelet[2222]: E0428 00:21:34.767647 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:21:34.794048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:21:34.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:21:34.794408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:21:34.814297 systemd[1]: kubelet.service: Consumed 2.624s CPU time, 110.1M memory peak.
Apr 28 00:21:34.817319 kernel: audit: type=1131 audit(1777335694.802:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:21:40.075644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547556428.mount: Deactivated successfully.
Apr 28 00:21:44.967516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 28 00:21:44.989401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:21:47.865197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:21:47.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:47.982802 kernel: audit: type=1130 audit(1777335707.870:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:21:48.075116 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:21:49.346955 kubelet[2253]: E0428 00:21:49.306310 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:21:49.363968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:21:49.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:21:49.364083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:21:49.364978 systemd[1]: kubelet.service: Consumed 2.861s CPU time, 110.4M memory peak.
Apr 28 00:21:49.374786 kernel: audit: type=1131 audit(1777335709.363:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:21:59.854970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 28 00:22:00.243809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:22:04.948396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:22:04.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:04.966461 kernel: audit: type=1130 audit(1777335724.947:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:05.181329 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:22:09.264707 kubelet[2273]: E0428 00:22:09.263789 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:22:09.304680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:22:09.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:09.385192 kernel: audit: type=1131 audit(1777335729.366:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:09.305046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:22:09.385024 systemd[1]: kubelet.service: Consumed 5.973s CPU time, 109.3M memory peak.
Apr 28 00:22:19.888462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 28 00:22:20.147662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:22:27.233777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:22:27.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:27.264180 kernel: audit: type=1130 audit(1777335747.241:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:27.261815 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:22:29.045611 kubelet[2331]: E0428 00:22:29.044126 2331 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:22:29.072831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:22:29.167092 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:22:29.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:29.196967 systemd[1]: kubelet.service: Consumed 5.056s CPU time, 110.8M memory peak.
Apr 28 00:22:29.232313 kernel: audit: type=1131 audit(1777335749.195:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:33.392960 containerd[1643]: time="2026-04-28T00:22:33.378365559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:33.700585 containerd[1643]: time="2026-04-28T00:22:33.410580011Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.4: active requests=0, bytes read=27063040"
Apr 28 00:22:34.095484 containerd[1643]: time="2026-04-28T00:22:34.081792829Z" level=info msg="ImageCreate event name:\"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:35.094234 containerd[1643]: time="2026-04-28T00:22:35.093292356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:35.263138 containerd[1643]: time="2026-04-28T00:22:35.261257043Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.4\" with image id \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\", size \"27069180\" in 1m3.501755368s"
Apr 28 00:22:35.291639 containerd[1643]: time="2026-04-28T00:22:35.274481195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\" returns image reference \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\""
Apr 28 00:22:35.449825 containerd[1643]: time="2026-04-28T00:22:35.448940868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\""
Apr 28 00:22:39.355685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 28 00:22:39.496594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:22:41.820078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:22:41.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:41.858118 kernel: audit: type=1130 audit(1777335761.827:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:41.858561 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:22:43.492983 kubelet[2347]: E0428 00:22:43.488393 2347 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:22:43.637789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:22:43.686713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:22:43.698027 systemd[1]: kubelet.service: Consumed 2.581s CPU time, 110.3M memory peak.
Apr 28 00:22:43.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:43.837765 kernel: audit: type=1131 audit(1777335763.696:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:53.813491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 28 00:22:53.832905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:22:57.166665 containerd[1643]: time="2026-04-28T00:22:57.163922793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:57.187223 containerd[1643]: time="2026-04-28T00:22:57.186977664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.4: active requests=0, bytes read=21158554"
Apr 28 00:22:57.207807 containerd[1643]: time="2026-04-28T00:22:57.207239627Z" level=info msg="ImageCreate event name:\"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:57.909596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:22:57.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:57.940467 kernel: audit: type=1130 audit(1777335777.911:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:22:57.947772 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:22:57.984139 containerd[1643]: time="2026-04-28T00:22:57.983667363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:22:58.086828 containerd[1643]: time="2026-04-28T00:22:58.085669179Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.4\" with image id \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\", size \"22820907\" in 22.636013442s"
Apr 28 00:22:58.086828 containerd[1643]: time="2026-04-28T00:22:58.086247167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\" returns image reference \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\""
Apr 28 00:22:58.202042 containerd[1643]: time="2026-04-28T00:22:58.200397986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\""
Apr 28 00:22:59.004961 kubelet[2367]: E0428 00:22:59.004401 2367 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:22:59.012087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:22:59.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:22:59.012280 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:22:59.019149 systemd[1]: kubelet.service: Consumed 2.847s CPU time, 111M memory peak.
Apr 28 00:22:59.022219 kernel: audit: type=1131 audit(1777335779.011:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:09.267525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 28 00:23:09.336946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:23:11.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:23:11.776908 kernel: audit: type=1130 audit(1777335791.761:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:23:11.759480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:23:11.801921 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:23:12.595481 kubelet[2388]: E0428 00:23:12.595015 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:23:12.784126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:23:12.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:12.834894 kernel: audit: type=1131 audit(1777335792.826:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:12.784420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:23:12.831803 systemd[1]: kubelet.service: Consumed 2.225s CPU time, 110.3M memory peak.
Apr 28 00:23:13.875652 containerd[1643]: time="2026-04-28T00:23:13.874735128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:23:13.988672 containerd[1643]: time="2026-04-28T00:23:13.905771513Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.4: active requests=0, bytes read=15722505"
Apr 28 00:23:14.122605 containerd[1643]: time="2026-04-28T00:23:14.114627866Z" level=info msg="ImageCreate event name:\"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:23:16.075655 containerd[1643]: time="2026-04-28T00:23:16.072947970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:23:16.256638 containerd[1643]: time="2026-04-28T00:23:16.253014671Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.4\" with image id \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\", size \"17384858\" in 18.05209002s"
Apr 28 00:23:16.263155 containerd[1643]: time="2026-04-28T00:23:16.258513348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\" returns image reference \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\""
Apr 28 00:23:16.274965 containerd[1643]: time="2026-04-28T00:23:16.274709011Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\""
Apr 28 00:23:22.952130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 28 00:23:23.084164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:23:24.723906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:23:24.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:23:24.733052 kernel: audit: type=1130 audit(1777335804.724:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:23:24.758176 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:23:25.403067 kubelet[2409]: E0428 00:23:25.400724 2409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:23:25.411140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:23:25.411344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:23:25.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:25.418269 systemd[1]: kubelet.service: Consumed 1.607s CPU time, 110.7M memory peak.
Apr 28 00:23:25.425246 kernel: audit: type=1131 audit(1777335805.417:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:35.573881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 28 00:23:35.941209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:23:39.591599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:23:39.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:23:39.737570 kernel: audit: type=1130 audit(1777335819.632:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:23:39.738218 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:23:45.074007 kubelet[2425]: E0428 00:23:45.065770 2425 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:23:45.147528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:23:45.242050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:23:45.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:45.505597 kernel: audit: type=1131 audit(1777335825.469:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:23:45.478539 systemd[1]: kubelet.service: Consumed 5.754s CPU time, 112.5M memory peak.
Apr 28 00:23:55.732005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Apr 28 00:23:55.841030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:24:03.503584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:24:03.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:24:03.598953 kernel: audit: type=1130 audit(1777335843.539:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:24:03.683297 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:24:09.259687 kubelet[2441]: E0428 00:24:09.259072 2441 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:24:09.575691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:24:09.653411 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:24:10.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:24:10.096662 kernel: audit: type=1131 audit(1777335850.012:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:24:10.074402 systemd[1]: kubelet.service: Consumed 7.720s CPU time, 110.7M memory peak.
Apr 28 00:24:19.797697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
Apr 28 00:24:20.251396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:24:26.921209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524561648.mount: Deactivated successfully.
Apr 28 00:24:28.700517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:24:28.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:24:28.803689 kernel: audit: type=1130 audit(1777335868.793:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:24:28.833642 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:24:34.700826 kubelet[2467]: E0428 00:24:34.688502 2467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:24:34.824347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:24:34.826766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:24:34.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:24:34.873590 systemd[1]: kubelet.service: Consumed 8.199s CPU time, 112.8M memory peak.
Apr 28 00:24:34.900534 kernel: audit: type=1131 audit(1777335874.863:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 28 00:24:45.178489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
Apr 28 00:24:45.241125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:24:45.835605 containerd[1643]: time="2026-04-28T00:24:45.835348953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:24:45.865791 containerd[1643]: time="2026-04-28T00:24:45.863621900Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.4: active requests=1, bytes read=23097540"
Apr 28 00:24:46.341951 containerd[1643]: time="2026-04-28T00:24:46.333479705Z" level=info msg="ImageCreate event name:\"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:24:49.356109 containerd[1643]: time="2026-04-28T00:24:49.345611816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:24:50.945665 containerd[1643]: time="2026-04-28T00:24:50.905554259Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.4\" with image id \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\", size \"25858928\" in 1m34.611645102s"
Apr 28 00:24:50.947678 containerd[1643]: time="2026-04-28T00:24:50.947376615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\" returns image reference \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\""
Apr 28 00:24:51.050182 containerd[1643]: time="2026-04-28T00:24:51.046054738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 28 00:24:54.900637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:24:54.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:24:54.999828 kernel: audit: type=1130 audit(1777335894.963:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:24:55.000088 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:25:02.330535 kubelet[2488]: E0428 00:25:02.287379 2488 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:25:02.572424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:25:02.645259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:25:02.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:25:02.773455 kernel: audit: type=1131 audit(1777335902.673:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:25:02.674784 systemd[1]: kubelet.service: Consumed 10.215s CPU time, 110M memory peak. Apr 28 00:25:12.850747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. 
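[Annotation] The restart loop above repeats because kubelet exits with status 1 the moment it fails to read /var/lib/kubelet/config.yaml; that file is normally written by `kubeadm init` or `kubeadm join`, so the loop is expected on a node that has not yet joined a cluster. A minimal sketch of the check kubelet is effectively failing (stand-in path used so the sketch is self-contained; on a real node the path is /var/lib/kubelet/config.yaml):

```shell
# Stand-in for the kubelet config path from the log; a fresh temp dir
# guarantees the file does not exist, mirroring the pre-join state.
cfg="$(mktemp -d)/config.yaml"
if [ -f "$cfg" ]; then status=present; else status=missing; fi
# "missing" is exactly the condition that makes kubelet exit 1 and
# systemd schedule the next restart, as seen throughout this log.
echo "kubelet config: $status"
```

Running `kubeadm init` (control plane) or `kubeadm join` (worker) generates the file and breaks the loop.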
Apr 28 00:25:12.944511 systemd[1762]: Created slice background.slice - User Background Tasks Slice. Apr 28 00:25:12.959358 systemd[1762]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 28 00:25:13.537354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:25:13.539069 systemd[1762]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Apr 28 00:25:18.786207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580351557.mount: Deactivated successfully. Apr 28 00:25:21.573559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:25:21.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:25:21.699355 kernel: audit: type=1130 audit(1777335921.645:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:25:21.763690 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:25:24.454282 kubelet[2515]: E0428 00:25:24.453548 2515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:25:24.633896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:25:24.634555 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
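[Annotation] The unit name `var-lib-containerd-tmpmounts-containerd\x2dmount2580351557.mount` above shows systemd's path escaping: `/` separators become `-`, and a literal `-` inside a path component becomes `\x2d` (the `systemd-escape` tool performs the full transformation). The component escaping alone can be reproduced with sed:

```shell
# Stand-in component name containing a dash, like containerd's tmp mount dirs.
name="containerd-mount"
# Replace each "-" with the literal four characters backslash-x-2-d,
# matching how systemd encodes dashes inside mount unit names.
escaped=$(printf '%s' "$name" | sed 's/-/\\x2d/g')
printf '%s\n' "$escaped"
```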
Apr 28 00:25:24.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:25:24.698278 kernel: audit: type=1131 audit(1777335924.681:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:25:24.687206 systemd[1]: kubelet.service: Consumed 6.542s CPU time, 110.6M memory peak. Apr 28 00:25:34.850594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Apr 28 00:25:35.015268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:25:46.388121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:25:46.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:25:46.518612 kernel: audit: type=1130 audit(1777335946.508:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:25:46.603620 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:25:55.228720 kubelet[2537]: E0428 00:25:55.227591 2537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:25:55.258669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:25:55.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:25:55.369462 kernel: audit: type=1131 audit(1777335955.263:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:25:55.261486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:25:55.264602 systemd[1]: kubelet.service: Consumed 11.726s CPU time, 112.3M memory peak. Apr 28 00:26:05.552872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Apr 28 00:26:05.688551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:26:10.342004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:26:10.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:26:10.388340 kernel: audit: type=1130 audit(1777335970.363:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:26:10.403626 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:26:15.203509 kubelet[2556]: E0428 00:26:15.193718 2556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:26:15.338526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:26:15.349898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:26:15.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:26:15.453546 systemd[1]: kubelet.service: Consumed 6.298s CPU time, 110.7M memory peak. Apr 28 00:26:15.472352 kernel: audit: type=1131 audit(1777335975.446:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:26:25.520039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Apr 28 00:26:25.563119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:26:28.271652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:26:28.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:26:28.288159 kernel: audit: type=1130 audit(1777335988.271:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:26:28.437002 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:26:29.551147 kubelet[2612]: E0428 00:26:29.549718 2612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:26:29.675165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:26:29.688919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:26:29.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:26:29.733143 kernel: audit: type=1131 audit(1777335989.699:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:26:29.707425 systemd[1]: kubelet.service: Consumed 2.634s CPU time, 109.3M memory peak. Apr 28 00:26:40.291221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. 
Apr 28 00:26:40.597323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:26:42.486333 containerd[1643]: time="2026-04-28T00:26:42.472899868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:42.486333 containerd[1643]: time="2026-04-28T00:26:42.476660844Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22377210" Apr 28 00:26:42.516241 containerd[1643]: time="2026-04-28T00:26:42.507170334Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:43.375633 containerd[1643]: time="2026-04-28T00:26:43.373927061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:26:43.508499 containerd[1643]: time="2026-04-28T00:26:43.507444386Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1m52.458373364s" Apr 28 00:26:43.552695 containerd[1643]: time="2026-04-28T00:26:43.508577412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 28 00:26:43.757680 containerd[1643]: time="2026-04-28T00:26:43.757088088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 28 00:26:44.386713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
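[Annotation] The containerd "Pulled image" line above reports both the image size in bytes and the wall-clock pull duration, so a rough pull throughput can be derived from it. A sketch, with the size and duration copied from the coredns line and parsing that assumes the `XmY.Zs` duration format containerd emits:

```shell
# Fragment copied from the "Pulled image registry.k8s.io/coredns..." log line.
line='size "22384805" in 1m52.458373364s'
# Extract the byte count between the quotes after `size`.
size=$(printf '%s' "$line" | sed -n 's/.*size "\([0-9]*\)".*/\1/p')
# Split the duration into minutes and seconds, then convert to seconds.
secs=$(printf '%s' "$line" | sed -n 's/.* in \([0-9]*\)m\([0-9.]*\)s.*/\1 \2/p' \
       | awk '{print $1*60 + $2}')
# Bytes / 1024 / seconds, rounded to whole KiB/s.
kibps=$(awk -v s="$size" -v t="$secs" 'BEGIN{printf "%.0f", s/1024/t}')
echo "${kibps} KiB/s"
```

At ~194 KiB/s, the minute-plus pull times for small images in this log point at a very slow or throttled registry path rather than a containerd fault.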
Apr 28 00:26:44.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:26:44.408355 kernel: audit: type=1130 audit(1777336004.395:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:26:44.420786 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:26:46.095780 kubelet[2629]: E0428 00:26:46.076792 2629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:26:46.237637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:26:46.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:26:46.256905 kernel: audit: type=1131 audit(1777336006.246:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:26:46.243546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:26:46.250614 systemd[1]: kubelet.service: Consumed 3.580s CPU time, 110.8M memory peak. 
Apr 28 00:26:51.474696 containerd[1643]: time="2026-04-28T00:26:51.466757621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:26:51.475334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788019521.mount: Deactivated successfully. Apr 28 00:26:51.616383 containerd[1643]: time="2026-04-28T00:26:51.600700889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 28 00:26:52.161074 containerd[1643]: time="2026-04-28T00:26:52.156236332Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:26:52.745069 containerd[1643]: time="2026-04-28T00:26:52.744559464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:26:52.747133 containerd[1643]: time="2026-04-28T00:26:52.745584003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 8.987642574s" Apr 28 00:26:52.747133 containerd[1643]: time="2026-04-28T00:26:52.745736473Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 28 00:26:52.750207 containerd[1643]: time="2026-04-28T00:26:52.750155349Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.6.5-0\"" Apr 28 00:26:56.502793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. Apr 28 00:26:56.512007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:00.444328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:27:00.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:27:00.463592 kernel: audit: type=1130 audit(1777336020.444:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:27:00.567766 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:27:02.809209 kubelet[2650]: E0428 00:27:02.808512 2650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:27:02.849267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:27:02.988825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:27:03.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:27:03.043532 systemd[1]: kubelet.service: Consumed 3.901s CPU time, 112.3M memory peak. 
Apr 28 00:27:03.065669 kernel: audit: type=1131 audit(1777336023.042:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:27:04.066260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585790015.mount: Deactivated successfully. Apr 28 00:27:13.195086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24. Apr 28 00:27:13.360867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:18.303160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:27:18.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:27:18.383993 kernel: audit: type=1130 audit(1777336038.368:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:27:18.395322 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:27:19.116639 kubelet[2678]: E0428 00:27:19.115746 2678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:27:19.140219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:27:19.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:27:19.150747 kernel: audit: type=1131 audit(1777336039.141:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:27:19.140596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:27:19.142087 systemd[1]: kubelet.service: Consumed 3.437s CPU time, 108.9M memory peak. Apr 28 00:27:29.766546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25. Apr 28 00:27:30.139072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:34.649538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:27:34.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:27:34.792793 kernel: audit: type=1130 audit(1777336054.782:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:27:35.033898 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:27:39.787297 kubelet[2703]: E0428 00:27:39.772826 2703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:27:39.876217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:27:39.881360 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:27:39.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:27:39.894817 systemd[1]: kubelet.service: Consumed 6.629s CPU time, 114.6M memory peak. Apr 28 00:27:39.908236 kernel: audit: type=1131 audit(1777336059.893:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Apr 28 00:27:48.828695 containerd[1643]: time="2026-04-28T00:27:48.827891202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:27:48.846540 containerd[1643]: time="2026-04-28T00:27:48.828692270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22851106" Apr 28 00:27:49.388425 containerd[1643]: time="2026-04-28T00:27:49.384307131Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:27:50.156482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26. Apr 28 00:27:50.288150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:27:53.176672 containerd[1643]: time="2026-04-28T00:27:53.174823511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:27:54.214647 containerd[1643]: time="2026-04-28T00:27:54.211957479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1m1.46139875s" Apr 28 00:27:54.249658 containerd[1643]: time="2026-04-28T00:27:54.222008132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 28 00:28:02.146760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:28:02.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:28:02.436526 kernel: audit: type=1130 audit(1777336082.162:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:28:02.562799 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:28:09.084801 kubelet[2769]: E0428 00:28:09.084082 2769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:28:09.182782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:28:09.196752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:28:09.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:28:09.281563 kernel: audit: type=1131 audit(1777336089.254:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:28:09.263973 systemd[1]: kubelet.service: Consumed 10.419s CPU time, 110M memory peak. Apr 28 00:28:19.321814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27. 
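[Annotation] The steady ~10 second gap between each "Failed with result 'exit-code'" and the next "Scheduled restart job" matches the restart policy kubeadm's drop-in conventionally sets for kubelet.service. A sketch of the relevant unit settings (values assumed from that convention, not read from this node):

```ini
# Typical kubelet.service restart settings under kubeadm (assumed, not
# captured in this log): restart unconditionally, 10 s between attempts.
[Service]
Restart=always
RestartSec=10
```

This is why the restart counter climbs indefinitely (17, 18, 19, ... in this log) instead of the unit entering a failed state.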
Apr 28 00:28:19.645233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:28:31.342421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:28:31.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:28:31.440542 kernel: audit: type=1130 audit(1777336111.355:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:28:31.686365 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:28:35.304790 kubelet[2789]: E0428 00:28:35.287111 2789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:28:35.465301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:28:35.526120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:28:35.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:28:35.527804 systemd[1]: kubelet.service: Consumed 8.299s CPU time, 110.3M memory peak. Apr 28 00:28:35.574451 kernel: audit: type=1131 audit(1777336115.525:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Apr 28 00:28:45.774130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 28. Apr 28 00:28:46.100437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:28:53.165723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:28:53.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:28:53.282352 kernel: audit: type=1130 audit(1777336133.165:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:28:53.322727 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:28:55.277008 kubelet[2817]: E0428 00:28:55.273755 2817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:28:55.342629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:28:55.345272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:28:55.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Apr 28 00:28:55.450065 kernel: audit: type=1131 audit(1777336135.400:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:28:55.446503 systemd[1]: kubelet.service: Consumed 4.860s CPU time, 110.7M memory peak. Apr 28 00:29:06.074409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 29. Apr 28 00:29:06.331441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:29:17.133393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:29:17.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:29:17.154923 kernel: audit: type=1130 audit(1777336157.145:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:29:17.163362 (kubelet)[2833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:29:18.801590 kubelet[2833]: E0428 00:29:18.798689 2833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:29:18.909877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:29:18.920358 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:29:18.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:29:18.934084 systemd[1]: kubelet.service: Consumed 6.670s CPU time, 109.2M memory peak. Apr 28 00:29:18.950649 kernel: audit: type=1131 audit(1777336158.932:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:29:29.002878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 30. Apr 28 00:29:29.167246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:29:32.293778 update_engine[1618]: I20260428 00:29:32.287319 1618 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 00:29:32.305780 update_engine[1618]: I20260428 00:29:32.294942 1618 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 00:29:32.310528 update_engine[1618]: I20260428 00:29:32.307261 1618 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 00:29:32.333927 update_engine[1618]: I20260428 00:29:32.332273 1618 omaha_request_params.cc:62] Current group set to alpha Apr 28 00:29:32.336499 update_engine[1618]: I20260428 00:29:32.335160 1618 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 00:29:32.336499 update_engine[1618]: I20260428 00:29:32.335186 1618 update_attempter.cc:643] Scheduling an action processor start. 
Apr 28 00:29:32.336499 update_engine[1618]: I20260428 00:29:32.335244 1618 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:29:32.358729 update_engine[1618]: I20260428 00:29:32.338146 1618 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 00:29:32.358729 update_engine[1618]: I20260428 00:29:32.338630 1618 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:29:32.358729 update_engine[1618]: I20260428 00:29:32.338642 1618 omaha_request_action.cc:272] Request: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: Apr 28 00:29:32.358729 update_engine[1618]: I20260428 00:29:32.338656 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:29:32.360198 locksmithd[1695]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 00:29:32.482467 update_engine[1618]: I20260428 00:29:32.371482 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:29:32.558190 update_engine[1618]: I20260428 00:29:32.541997 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:29:32.563554 update_engine[1618]: E20260428 00:29:32.559547 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:29:32.594136 update_engine[1618]: I20260428 00:29:32.584326 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 00:29:33.387222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:29:33.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:29:33.447990 kernel: audit: type=1130 audit(1777336173.404:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:29:33.490690 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:29:36.398667 kubelet[2859]: E0428 00:29:36.398137 2859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:29:36.441783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:29:36.457298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:29:36.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:29:36.505823 systemd[1]: kubelet.service: Consumed 4.406s CPU time, 110.1M memory peak. Apr 28 00:29:36.507909 kernel: audit: type=1131 audit(1777336176.500:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Apr 28 00:29:43.221098 update_engine[1618]: I20260428 00:29:43.198697 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:29:43.250176 update_engine[1618]: I20260428 00:29:43.227711 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:29:43.252471 update_engine[1618]: I20260428 00:29:43.252059 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:29:43.266372 update_engine[1618]: E20260428 00:29:43.265239 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:29:43.272187 update_engine[1618]: I20260428 00:29:43.266548 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 00:29:46.692308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 31. Apr 28 00:29:46.786332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:29:48.248037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 28 00:29:48.248226 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 28 00:29:48.283550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:29:48.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:29:48.303591 kernel: audit: type=1130 audit(1777336188.292:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:29:49.254239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:29:51.156195 systemd[1]: Reload requested from client PID 2882 ('systemctl') (unit session-8.scope)... Apr 28 00:29:51.164288 systemd[1]: Reloading... 
Apr 28 00:29:53.199306 update_engine[1618]: I20260428 00:29:53.197046 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:29:53.248634 update_engine[1618]: I20260428 00:29:53.200817 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:29:53.260927 update_engine[1618]: I20260428 00:29:53.260470 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:29:53.291809 update_engine[1618]: E20260428 00:29:53.284663 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:29:53.296881 update_engine[1618]: I20260428 00:29:53.293715 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 00:29:59.092986 systemd-ssh-generator[2935]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 28 00:29:59.103347 zram_generator::config[2939]: No configuration found. Apr 28 00:29:59.153032 (sd-exec-strv)[2913]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 28 00:30:03.211332 update_engine[1618]: I20260428 00:30:03.195289 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:03.295939 update_engine[1618]: I20260428 00:30:03.279091 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:03.308707 update_engine[1618]: I20260428 00:30:03.305264 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:30:03.320004 update_engine[1618]: E20260428 00:30:03.319319 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:30:03.320004 update_engine[1618]: I20260428 00:30:03.319973 1618 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:30:03.320004 update_engine[1618]: I20260428 00:30:03.320086 1618 omaha_request_action.cc:617] Omaha request response: Apr 28 00:30:03.327180 update_engine[1618]: E20260428 00:30:03.322156 1618 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.325963 1618 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326167 1618 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326172 1618 update_attempter.cc:306] Processing Done. Apr 28 00:30:03.327180 update_engine[1618]: E20260428 00:30:03.326310 1618 update_attempter.cc:619] Update failed. Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326318 1618 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326323 1618 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326327 1618 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326659 1618 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326788 1618 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326794 1618 omaha_request_action.cc:272] Request: Apr 28 00:30:03.327180 update_engine[1618]: Apr 28 00:30:03.327180 update_engine[1618]: Apr 28 00:30:03.327180 update_engine[1618]: Apr 28 00:30:03.327180 update_engine[1618]: Apr 28 00:30:03.327180 update_engine[1618]: Apr 28 00:30:03.327180 update_engine[1618]: Apr 28 00:30:03.327180 update_engine[1618]: I20260428 00:30:03.326799 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:30:03.337239 update_engine[1618]: I20260428 00:30:03.327040 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:30:03.337239 update_engine[1618]: I20260428 00:30:03.333595 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:30:03.337302 locksmithd[1695]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 00:30:03.351953 update_engine[1618]: E20260428 00:30:03.350894 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352370 1618 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352427 1618 omaha_request_action.cc:617] Omaha request response: Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352437 1618 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352440 1618 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352443 1618 update_attempter.cc:306] Processing Done. Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352450 1618 update_attempter.cc:310] Error event sent. Apr 28 00:30:03.352815 update_engine[1618]: I20260428 00:30:03.352459 1618 update_check_scheduler.cc:74] Next update check in 44m58s Apr 28 00:30:03.383661 locksmithd[1695]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 00:30:08.822885 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 28 00:30:15.096951 systemd[1]: Reloading finished in 23931 ms. 
Apr 28 00:30:15.109000 audit: BPF prog-id=39 op=LOAD Apr 28 00:30:15.109000 audit: BPF prog-id=26 op=UNLOAD Apr 28 00:30:15.177094 kernel: audit: type=1334 audit(1777336215.109:297): prog-id=39 op=LOAD Apr 28 00:30:15.177324 kernel: audit: type=1334 audit(1777336215.109:298): prog-id=26 op=UNLOAD Apr 28 00:30:15.180000 audit: BPF prog-id=40 op=LOAD Apr 28 00:30:15.181000 audit: BPF prog-id=30 op=UNLOAD Apr 28 00:30:15.181000 audit: BPF prog-id=41 op=LOAD Apr 28 00:30:15.186172 kernel: audit: type=1334 audit(1777336215.180:299): prog-id=40 op=LOAD Apr 28 00:30:15.186233 kernel: audit: type=1334 audit(1777336215.181:300): prog-id=30 op=UNLOAD Apr 28 00:30:15.186295 kernel: audit: type=1334 audit(1777336215.181:301): prog-id=41 op=LOAD Apr 28 00:30:15.181000 audit: BPF prog-id=42 op=LOAD Apr 28 00:30:15.181000 audit: BPF prog-id=31 op=UNLOAD Apr 28 00:30:15.181000 audit: BPF prog-id=32 op=UNLOAD Apr 28 00:30:15.194066 kernel: audit: type=1334 audit(1777336215.181:302): prog-id=42 op=LOAD Apr 28 00:30:15.194379 kernel: audit: type=1334 audit(1777336215.181:303): prog-id=31 op=UNLOAD Apr 28 00:30:15.194494 kernel: audit: type=1334 audit(1777336215.181:304): prog-id=32 op=UNLOAD Apr 28 00:30:15.184000 audit: BPF prog-id=43 op=LOAD Apr 28 00:30:15.195561 kernel: audit: type=1334 audit(1777336215.184:305): prog-id=43 op=LOAD Apr 28 00:30:15.184000 audit: BPF prog-id=22 op=UNLOAD Apr 28 00:30:15.197134 kernel: audit: type=1334 audit(1777336215.184:306): prog-id=22 op=UNLOAD Apr 28 00:30:15.185000 audit: BPF prog-id=44 op=LOAD Apr 28 00:30:15.185000 audit: BPF prog-id=45 op=LOAD Apr 28 00:30:15.185000 audit: BPF prog-id=23 op=UNLOAD Apr 28 00:30:15.185000 audit: BPF prog-id=24 op=UNLOAD Apr 28 00:30:15.186000 audit: BPF prog-id=46 op=LOAD Apr 28 00:30:15.186000 audit: BPF prog-id=27 op=UNLOAD Apr 28 00:30:15.186000 audit: BPF prog-id=47 op=LOAD Apr 28 00:30:15.186000 audit: BPF prog-id=48 op=LOAD Apr 28 00:30:15.186000 audit: BPF prog-id=28 op=UNLOAD Apr 28 00:30:15.186000 
audit: BPF prog-id=29 op=UNLOAD Apr 28 00:30:15.195000 audit: BPF prog-id=49 op=LOAD Apr 28 00:30:15.195000 audit: BPF prog-id=25 op=UNLOAD Apr 28 00:30:15.196000 audit: BPF prog-id=50 op=LOAD Apr 28 00:30:15.196000 audit: BPF prog-id=19 op=UNLOAD Apr 28 00:30:15.196000 audit: BPF prog-id=51 op=LOAD Apr 28 00:30:15.196000 audit: BPF prog-id=52 op=LOAD Apr 28 00:30:15.196000 audit: BPF prog-id=20 op=UNLOAD Apr 28 00:30:15.196000 audit: BPF prog-id=21 op=UNLOAD Apr 28 00:30:15.205000 audit: BPF prog-id=53 op=LOAD Apr 28 00:30:15.205000 audit: BPF prog-id=35 op=UNLOAD Apr 28 00:30:15.205000 audit: BPF prog-id=54 op=LOAD Apr 28 00:30:15.208000 audit: BPF prog-id=55 op=LOAD Apr 28 00:30:15.209000 audit: BPF prog-id=33 op=UNLOAD Apr 28 00:30:15.209000 audit: BPF prog-id=34 op=UNLOAD Apr 28 00:30:15.219000 audit: BPF prog-id=56 op=LOAD Apr 28 00:30:15.219000 audit: BPF prog-id=36 op=UNLOAD Apr 28 00:30:15.219000 audit: BPF prog-id=57 op=LOAD Apr 28 00:30:15.219000 audit: BPF prog-id=58 op=LOAD Apr 28 00:30:15.219000 audit: BPF prog-id=37 op=UNLOAD Apr 28 00:30:15.219000 audit: BPF prog-id=38 op=UNLOAD Apr 28 00:30:15.860073 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 28 00:30:15.861930 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 28 00:30:15.863532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:30:15.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 28 00:30:15.864070 systemd[1]: kubelet.service: Consumed 3.136s CPU time, 98.5M memory peak. Apr 28 00:30:16.073564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:30:18.198578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:30:18.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:30:18.226000 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:30:19.650667 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:30:19.650667 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 00:30:19.691736 kubelet[2985]: I0428 00:30:19.651379 2985 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:30:22.454550 kubelet[2985]: I0428 00:30:22.438138 2985 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 00:30:22.565768 kubelet[2985]: I0428 00:30:22.464017 2985 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:30:22.565768 kubelet[2985]: I0428 00:30:22.505733 2985 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:30:22.565768 kubelet[2985]: I0428 00:30:22.558267 2985 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 00:30:22.581816 kubelet[2985]: I0428 00:30:22.581516 2985 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:30:22.948788 kubelet[2985]: E0428 00:30:22.948374 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:22.964652 kubelet[2985]: I0428 00:30:22.964113 2985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:30:23.976282 kubelet[2985]: I0428 00:30:23.975363 2985 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 28 00:30:25.365204 kubelet[2985]: E0428 00:30:25.358201 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:26.248105 kubelet[2985]: I0428 00:30:26.247298 2985 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 00:30:26.253084 kubelet[2985]: I0428 00:30:26.252741 2985 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:30:26.253666 kubelet[2985]: I0428 00:30:26.253125 2985 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:30:26.254127 kubelet[2985]: I0428 00:30:26.253775 2985 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 00:30:26.254127 
kubelet[2985]: I0428 00:30:26.253828 2985 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 00:30:26.254419 kubelet[2985]: I0428 00:30:26.254368 2985 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:30:26.294672 kubelet[2985]: I0428 00:30:26.293670 2985 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:30:26.328593 kubelet[2985]: I0428 00:30:26.319835 2985 kubelet.go:475] "Attempting to sync node with API server" Apr 28 00:30:26.375454 kubelet[2985]: I0428 00:30:26.336388 2985 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:30:26.449287 kubelet[2985]: I0428 00:30:26.375666 2985 kubelet.go:387] "Adding apiserver pod source" Apr 28 00:30:26.449287 kubelet[2985]: I0428 00:30:26.376160 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:30:26.449287 kubelet[2985]: E0428 00:30:26.402449 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:30:26.583970 kubelet[2985]: E0428 00:30:26.483425 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:30:26.632302 kubelet[2985]: I0428 00:30:26.631751 2985 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 28 00:30:26.661733 kubelet[2985]: I0428 00:30:26.661627 2985 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:30:26.661733 kubelet[2985]: I0428 00:30:26.661715 2985 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:30:26.662489 kubelet[2985]: W0428 00:30:26.662093 2985 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 00:30:27.485170 kubelet[2985]: E0428 00:30:27.484772 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:30:27.485170 kubelet[2985]: I0428 00:30:27.485115 2985 server.go:1262] "Started kubelet" Apr 28 00:30:27.755963 kubelet[2985]: I0428 00:30:27.495882 2985 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:30:27.755963 kubelet[2985]: I0428 00:30:27.498118 2985 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:30:27.755963 kubelet[2985]: I0428 00:30:27.499070 2985 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:30:27.755963 kubelet[2985]: I0428 00:30:27.743408 2985 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:30:28.088466 kubelet[2985]: E0428 00:30:27.873731 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:30:28.088466 kubelet[2985]: E0428 00:30:28.056716 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:30:28.207369 kubelet[2985]: I0428 00:30:28.206987 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:30:28.264486 kubelet[2985]: I0428 00:30:28.249810 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:30:28.291173 kubelet[2985]: I0428 00:30:28.266535 2985 server.go:310] "Adding debug handlers to kubelet server" Apr 28 00:30:28.337750 kubelet[2985]: I0428 00:30:28.337163 2985 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 00:30:28.461223 kubelet[2985]: E0428 00:30:28.370236 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:28.543282 kubelet[2985]: I0428 00:30:28.540066 2985 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:30:28.944937 kubelet[2985]: E0428 00:30:28.937682 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 
00:30:28.995635 kubelet[2985]: I0428 00:30:28.942459 2985 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:30:29.337603 kubelet[2985]: E0428 00:30:28.986374 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Apr 28 00:30:29.341540 kubelet[2985]: E0428 00:30:29.338688 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:29.341540 kubelet[2985]: E0428 00:30:29.338002 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:30:29.521098 kubelet[2985]: I0428 00:30:29.339924 2985 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:30:29.522931 kubelet[2985]: E0428 00:30:29.522821 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:30:29.528041 kubelet[2985]: E0428 00:30:29.521732 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:29.591625 kubelet[2985]: I0428 00:30:29.582438 2985 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:30:29.907812 kubelet[2985]: E0428 
00:30:29.888371 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:30.174302 kubelet[2985]: E0428 00:30:30.166699 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Apr 28 00:30:30.202481 kubelet[2985]: E0428 00:30:30.200277 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:30.365730 kubelet[2985]: E0428 00:30:30.363303 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:30.750524 kubelet[2985]: E0428 00:30:30.747459 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:30.870526 kubelet[2985]: E0428 00:30:30.747068 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:31.328375 kubelet[2985]: E0428 00:30:31.071817 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:31.563743 kubelet[2985]: E0428 00:30:31.550779 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:32.274911 kubelet[2985]: E0428 00:30:32.205139 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:30:32.365467 kubelet[2985]: E0428 00:30:32.192135 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:32.365467 kubelet[2985]: E0428 00:30:32.357004 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:30:32.569452 kubelet[2985]: E0428 00:30:32.524472 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:32.892774 kubelet[2985]: I0428 00:30:32.756507 2985 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:30:32.962368 kubelet[2985]: E0428 00:30:32.824773 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:33.071258 kubelet[2985]: E0428 00:30:32.962814 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Apr 28 00:30:33.071258 kubelet[2985]: E0428 00:30:32.973533 2985 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:30:33.155000 audit[3005]: NETFILTER_CFG table=mangle:40 family=10 entries=2 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:33.155000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff760e2da0 a2=0 a3=0 items=0 ppid=2985 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:33.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 28 00:30:34.004805 kernel: kauditd_printk_skb: 32 callbacks suppressed Apr 28 00:30:34.100597 kubelet[2985]: E0428 00:30:33.299133 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:34.100597 kubelet[2985]: I0428 00:30:33.365436 2985 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 28 00:30:34.100597 kubelet[2985]: E0428 00:30:33.557379 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:34.100597 kubelet[2985]: E0428 00:30:33.953494 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:34.562131 kernel: audit: type=1325 audit(1777336233.155:339): table=mangle:40 family=10 entries=2 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:34.576417 kubelet[2985]: E0428 00:30:34.142494 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:34.576417 kubelet[2985]: E0428 00:30:34.357257 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:34.580000 audit[3006]: NETFILTER_CFG table=mangle:41 family=2 entries=2 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:34.580000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd22466e50 a2=0 a3=0 items=0 ppid=2985 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:34.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 28 00:30:34.710635 kernel: audit: type=1300 audit(1777336233.155:339): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff760e2da0 a2=0 a3=0 items=0 ppid=2985 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:34.711000 audit[3007]: NETFILTER_CFG table=mangle:42 family=10 
entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:34.711000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc87e1a4e0 a2=0 a3=0 items=0 ppid=2985 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:34.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Apr 28 00:30:34.952739 kubelet[2985]: E0428 00:30:34.695407 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:34.952739 kubelet[2985]: E0428 00:30:34.710115 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:30:34.967138 kernel: audit: type=1327 audit(1777336233.155:339): proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 28 00:30:34.973506 kubelet[2985]: E0428 00:30:34.962668 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:35.034725 kernel: audit: type=1325 audit(1777336234.580:340): table=mangle:41 family=2 entries=2 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:35.054211 kernel: audit: type=1300 audit(1777336234.580:340): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd22466e50 a2=0 a3=0 items=0 ppid=2985 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:35.060608 kernel: audit: type=1327 audit(1777336234.580:340): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 28 00:30:35.064272 kernel: audit: type=1325 audit(1777336234.711:341): table=mangle:42 family=10 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:35.064338 kernel: audit: type=1300 audit(1777336234.711:341): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc87e1a4e0 a2=0 a3=0 items=0 ppid=2985 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:35.064394 kernel: audit: type=1327 audit(1777336234.711:341): proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Apr 28 00:30:35.134000 audit[3010]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:35.134000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf8ec52a0 a2=0 a3=0 items=0 ppid=2985 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:35.138000 audit[3009]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:35.138000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe31f7f2a0 a2=0 a3=0 items=0 ppid=2985 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:35.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Apr 28 00:30:35.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Apr 28 00:30:35.518747 kernel: audit: type=1325 audit(1777336235.134:342): table=nat:43 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:35.521554 kubelet[2985]: E0428 00:30:35.346297 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:35.521554 kubelet[2985]: E0428 00:30:35.497913 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:35.611470 kubelet[2985]: E0428 00:30:35.570663 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Apr 28 00:30:35.924369 kubelet[2985]: E0428 00:30:35.804151 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:36.097744 kubelet[2985]: E0428 00:30:36.094086 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:36.286000 audit[3012]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 28 00:30:36.286000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7d5b9c80 a2=0 a3=0 items=0 ppid=2985 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:36.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Apr 28 00:30:36.485381 kubelet[2985]: E0428 00:30:36.354226 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:36.658309 kubelet[2985]: E0428 00:30:36.645271 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:37.008429 kubelet[2985]: E0428 00:30:37.007075 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:37.042115 kubelet[2985]: E0428 00:30:37.009163 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:30:37.058000 audit[3014]: NETFILTER_CFG table=filter:46 family=2 entries=2 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:37.058000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff28197eb0 a2=0 a3=0 items=0 ppid=2985 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:37.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 28 00:30:37.280297 kubelet[2985]: E0428 00:30:37.010530 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:30:37.322609 kubelet[2985]: E0428 00:30:37.321685 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:37.338451 kubelet[2985]: E0428 00:30:37.158242 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:30:37.484442 kubelet[2985]: E0428 00:30:37.482160 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:37.521742 kubelet[2985]: E0428 00:30:37.368328 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="3.2s" Apr 28 00:30:37.805912 kubelet[2985]: E0428 00:30:37.782350 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 
00:30:38.203734 kubelet[2985]: E0428 00:30:38.195307 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:38.424000 audit[3016]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:38.424000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec1e2dc30 a2=0 a3=0 items=0 ppid=2985 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:38.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 28 00:30:38.665558 kubelet[2985]: E0428 00:30:38.505122 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:38.665558 kubelet[2985]: E0428 00:30:38.664666 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:38.703980 kernel: kauditd_printk_skb: 11 callbacks suppressed Apr 28 00:30:38.711533 kernel: audit: type=1325 audit(1777336238.424:346): table=filter:47 family=2 entries=2 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:38.782739 kernel: audit: type=1300 audit(1777336238.424:346): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec1e2dc30 a2=0 a3=0 items=0 ppid=2985 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:38.797446 kernel: audit: type=1327 audit(1777336238.424:346): proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 28 
00:30:38.995284 kubelet[2985]: E0428 00:30:38.845117 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:39.162447 kubelet[2985]: E0428 00:30:39.153390 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:39.285470 kubelet[2985]: E0428 00:30:39.279567 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:39.643368 kubelet[2985]: E0428 00:30:39.641331 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:39.851466 kubelet[2985]: E0428 00:30:39.850525 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:39.853134 kubelet[2985]: E0428 00:30:39.853024 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:39.883000 audit[3019]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:39.883000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff69968e30 a2=0 a3=0 items=0 ppid=2985 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:39.883000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Apr 28 00:30:39.965828 kernel: audit: type=1325 audit(1777336239.883:347): table=filter:48 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:39.973201 kubelet[2985]: E0428 00:30:39.965107 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:40.013506 kernel: audit: type=1300 audit(1777336239.883:347): arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff69968e30 a2=0 a3=0 items=0 ppid=2985 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:40.030622 kubelet[2985]: I0428 00:30:40.008951 2985 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 28 00:30:40.030622 kubelet[2985]: I0428 00:30:40.009494 2985 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:30:40.163684 kernel: audit: type=1327 audit(1777336239.883:347): proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Apr 28 00:30:40.168024 kubelet[2985]: I0428 00:30:40.030886 2985 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:30:40.174991 kubelet[2985]: E0428 00:30:40.172064 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:40.174991 kubelet[2985]: E0428 00:30:40.168471 2985 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:30:40.208000 audit[3023]: NETFILTER_CFG table=mangle:49 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:40.208000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4932fc70 a2=0 a3=0 items=0 ppid=2985 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:40.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Apr 28 00:30:40.660379 kernel: audit: type=1325 audit(1777336240.208:348): table=mangle:49 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:40.665208 kubelet[2985]: E0428 00:30:40.340144 2985 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status 
check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:30:40.665208 kubelet[2985]: E0428 00:30:40.356662 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:40.665208 kubelet[2985]: E0428 00:30:40.490084 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:40.665208 kubelet[2985]: E0428 00:30:40.529355 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:30:40.665208 kubelet[2985]: E0428 00:30:40.603239 2985 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:30:40.665208 kubelet[2985]: E0428 00:30:40.617504 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:40.666436 kernel: audit: type=1300 audit(1777336240.208:348): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4932fc70 a2=0 a3=0 items=0 ppid=2985 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:40.666478 kernel: audit: type=1327 audit(1777336240.208:348): proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Apr 28 00:30:40.749773 kubelet[2985]: E0428 00:30:40.747260 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="6.4s" Apr 28 00:30:40.764000 audit[3024]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:40.772428 kernel: audit: type=1325 audit(1777336240.764:349): table=nat:50 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:40.764000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea8fce530 a2=0 a3=0 items=0 ppid=2985 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:40.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Apr 28 00:30:41.044778 kubelet[2985]: E0428 00:30:40.815326 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:41.148972 kubelet[2985]: E0428 00:30:41.148181 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:41.169755 kubelet[2985]: E0428 00:30:41.157095 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:30:41.276000 audit[3025]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 28 00:30:41.276000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9a9320c0 a2=0 a3=0 items=0 ppid=2985 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:30:41.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Apr 28 00:30:41.306294 kubelet[2985]: E0428 00:30:41.276813 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:41.452775 kubelet[2985]: E0428 00:30:41.450481 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:41.602431 kubelet[2985]: E0428 00:30:41.585326 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:41.767774 kubelet[2985]: E0428 00:30:41.764645 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:41.822029 kubelet[2985]: E0428 00:30:41.812498 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:30:41.966221 kubelet[2985]: E0428 00:30:41.964237 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:42.072582 kubelet[2985]: E0428 00:30:42.071962 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:30:42.072582 kubelet[2985]: E0428 00:30:42.072253 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:42.322982 kubelet[2985]: E0428 00:30:42.300935 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:42.564684 
kubelet[2985]: E0428 00:30:42.541274 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:42.706897 kubelet[2985]: E0428 00:30:42.700760 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:43.288560 kubelet[2985]: E0428 00:30:43.192561 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:43.981571 kubelet[2985]: E0428 00:30:43.549564 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:30:43.981571 kubelet[2985]: I0428 00:30:43.550130 2985 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:30:43.981571 kubelet[2985]: I0428 00:30:43.568359 2985 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:30:43.981571 kubelet[2985]: E0428 00:30:43.731405 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:45.149385 kubelet[2985]: E0428 00:30:43.752045 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:30:45.149385 kubelet[2985]: I0428 00:30:43.765563 2985 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:30:45.149385 kubelet[2985]: E0428 00:30:44.655665 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:47.275389 kubelet[2985]: E0428 00:30:47.269527 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:50.481464 kubelet[2985]: E0428 00:30:50.165936 
2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:30:52.453150 kubelet[2985]: E0428 00:30:47.983831 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:30:52.588617 kubelet[2985]: E0428 00:30:49.396834 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:52.588617 kubelet[2985]: I0428 00:30:52.583401 2985 policy_none.go:49] "None policy: Start" Apr 28 00:30:52.888200 kubelet[2985]: E0428 00:30:52.873339 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:52.964195 kubelet[2985]: I0428 00:30:52.861711 2985 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:30:52.980783 kubelet[2985]: I0428 00:30:52.964473 2985 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:30:53.183792 kubelet[2985]: E0428 00:30:53.163613 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:53.682300 kubelet[2985]: E0428 00:30:53.644292 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:54.475562 kubelet[2985]: E0428 00:30:54.461530 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:54.747621 kubelet[2985]: E0428 00:30:54.553489 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:30:55.358595 kubelet[2985]: E0428 00:30:54.701379 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:30:55.583325 kubelet[2985]: E0428 00:30:55.548413 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:30:55.689247 kubelet[2985]: I0428 00:30:55.659809 2985 policy_none.go:47] "Start" Apr 28 00:30:55.689247 kubelet[2985]: E0428 00:30:55.584311 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:55.689247 kubelet[2985]: E0428 00:30:55.472523 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:30:56.268738 kubelet[2985]: E0428 00:30:56.266377 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:56.595453 kubelet[2985]: E0428 00:30:56.589586 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:30:56.736510 kubelet[2985]: E0428 00:30:56.632630 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:30:56.799823 kubelet[2985]: E0428 00:30:56.667586 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:57.701781 kubelet[2985]: E0428 00:30:57.362382 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:57.963475 kubelet[2985]: E0428 00:30:57.951537 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:58.444297 kubelet[2985]: E0428 00:30:58.428579 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:58.559563 kubelet[2985]: E0428 00:30:58.559161 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:58.680349 kubelet[2985]: E0428 00:30:58.678622 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:58.757483 kubelet[2985]: E0428 00:30:58.743631 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:58.813293 kubelet[2985]: E0428 00:30:58.812007 2985 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:30:58.939572 kubelet[2985]: E0428 00:30:58.901589 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:59.184977 kubelet[2985]: E0428 00:30:59.161830 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:59.695294 kubelet[2985]: E0428 00:30:59.674267 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:30:59.931763 kubelet[2985]: E0428 00:30:59.862598 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:00.416587 kubelet[2985]: E0428 00:31:00.415919 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:00.575467 kubelet[2985]: E0428 00:31:00.571347 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:00.769604 kubelet[2985]: E0428 00:31:00.700886 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:00.820689 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 28 00:31:00.918343 kubelet[2985]: E0428 00:31:00.914569 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.051261 kubelet[2985]: E0428 00:31:01.043810 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.173540 kubelet[2985]: E0428 00:31:01.164550 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:31:01.242289 kubelet[2985]: E0428 00:31:01.167515 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.305501 kubelet[2985]: E0428 00:31:01.296351 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.535584 kubelet[2985]: E0428 00:31:01.510103 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.690100 kubelet[2985]: E0428 00:31:01.656774 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.777202 kubelet[2985]: E0428 00:31:01.762478 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:31:01.882815 kubelet[2985]: E0428 00:31:01.780477 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:01.967441 kubelet[2985]: E0428 00:31:01.963225 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:02.072617 kubelet[2985]: E0428 
00:31:02.066400 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:02.249241 kubelet[2985]: E0428 00:31:02.194499 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:02.454368 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 28 00:31:02.745559 kubelet[2985]: E0428 00:31:02.700631 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:02.995457 kubelet[2985]: E0428 00:31:02.995093 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.111343 kubelet[2985]: E0428 00:31:03.100685 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.229151 kubelet[2985]: E0428 00:31:03.226450 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.352152 kubelet[2985]: E0428 00:31:03.350796 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.464439 kubelet[2985]: E0428 00:31:03.463408 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.581274 kubelet[2985]: E0428 00:31:03.580541 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.692416 kubelet[2985]: E0428 00:31:03.687617 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:03.819110 kubelet[2985]: E0428 00:31:03.816874 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 
00:31:03.960436 kubelet[2985]: E0428 00:31:03.954602 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:03.986562 kubelet[2985]: E0428 00:31:03.984614 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:04.095105 kubelet[2985]: E0428 00:31:04.092908 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:04.246208 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 28 00:31:04.467787 kubelet[2985]: E0428 00:31:04.248650 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:04.744284 kubelet[2985]: E0428 00:31:04.724318 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:04.934323 kubelet[2985]: E0428 00:31:04.933418 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:05.054756 kubelet[2985]: E0428 00:31:04.934113 2985 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:31:05.101604 kubelet[2985]: E0428 00:31:05.093233 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:05.172635 kubelet[2985]: I0428 00:31:05.171711 2985 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:31:05.338474 kubelet[2985]: I0428 00:31:05.174608 2985 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:31:05.376109 
kubelet[2985]: E0428 00:31:05.305169 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:31:05.440906 kubelet[2985]: I0428 00:31:05.440737 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:31:05.672753 kubelet[2985]: E0428 00:31:05.655199 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:05.680258 kubelet[2985]: E0428 00:31:05.676289 2985 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 00:31:05.681118 kubelet[2985]: E0428 00:31:05.681006 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:05.791347 kubelet[2985]: I0428 00:31:05.789599 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:05.844001 kubelet[2985]: E0428 00:31:05.843657 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:07.177202 kubelet[2985]: I0428 00:31:07.174247 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:07.759499 kubelet[2985]: I0428 00:31:07.748581 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe37cb66764ed0c204cee10807d65f19-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe37cb66764ed0c204cee10807d65f19\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:31:07.855402 kubelet[2985]: E0428 00:31:07.777648 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:08.162314 kubelet[2985]: I0428 00:31:08.146139 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe37cb66764ed0c204cee10807d65f19-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fe37cb66764ed0c204cee10807d65f19\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:31:08.260959 kubelet[2985]: I0428 00:31:08.259605 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe37cb66764ed0c204cee10807d65f19-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe37cb66764ed0c204cee10807d65f19\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:31:08.574833 kubelet[2985]: E0428 00:31:08.574098 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:31:09.146368 kubelet[2985]: I0428 00:31:09.140216 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:31:09.159325 kubelet[2985]: I0428 00:31:09.155468 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:31:09.159325 kubelet[2985]: I0428 00:31:09.155665 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:09.159325 kubelet[2985]: I0428 00:31:09.155609 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:31:09.159325 kubelet[2985]: 
I0428 00:31:09.155866 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:31:09.159325 kubelet[2985]: I0428 00:31:09.155882 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:31:09.159325 kubelet[2985]: E0428 00:31:09.158521 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:09.507588 kubelet[2985]: I0428 00:31:09.507398 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:31:10.332021 kubelet[2985]: E0428 00:31:10.299913 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:31:10.334531 kubelet[2985]: E0428 00:31:10.299826 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:31:10.466420 kubelet[2985]: I0428 00:31:10.466200 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:10.473675 kubelet[2985]: E0428 00:31:10.467108 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:10.466917 systemd[1]: Created slice kubepods-burstable-podfe37cb66764ed0c204cee10807d65f19.slice - libcontainer container kubepods-burstable-podfe37cb66764ed0c204cee10807d65f19.slice. Apr 28 00:31:10.784810 kubelet[2985]: E0428 00:31:10.784461 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:31:10.793373 systemd[1]: Created slice kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice - libcontainer container kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice. 
Apr 28 00:31:10.862368 kubelet[2985]: E0428 00:31:10.861486 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:11.394050 kubelet[2985]: E0428 00:31:11.392646 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:31:11.465330 kubelet[2985]: E0428 00:31:11.383468 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:11.588377 containerd[1643]: time="2026-04-28T00:31:11.564366259Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"fe37cb66764ed0c204cee10807d65f19\" namespace:\"kube-system\"" Apr 28 00:31:11.688536 kubelet[2985]: E0428 00:31:11.688493 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:11.854426 systemd[1]: Created slice kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice - libcontainer container kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice. 
Apr 28 00:31:12.159946 containerd[1643]: time="2026-04-28T00:31:12.143184942Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"82faa9ca0765979bc0118d46e6420ed8\" namespace:\"kube-system\"" Apr 28 00:31:12.513488 kubelet[2985]: E0428 00:31:12.508660 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:31:12.572526 kubelet[2985]: I0428 00:31:12.566249 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:12.614383 kubelet[2985]: E0428 00:31:12.606144 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:31:12.622592 kubelet[2985]: E0428 00:31:12.622463 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:12.641917 containerd[1643]: time="2026-04-28T00:31:12.641403495Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"66a243c17a59d09458bf3b09d66260f5\" namespace:\"kube-system\"" Apr 28 00:31:16.017682 kubelet[2985]: E0428 00:31:16.016183 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:16.466409 kubelet[2985]: E0428 00:31:16.465075 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:17.511191 kubelet[2985]: I0428 00:31:17.481440 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:17.999436 kubelet[2985]: E0428 00:31:17.956639 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:31:17.999436 kubelet[2985]: E0428 00:31:17.956738 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:18.824396 kubelet[2985]: E0428 00:31:18.778396 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:25.904665 kubelet[2985]: E0428 00:31:25.904007 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:26.103249 kubelet[2985]: E0428 
00:31:26.102461 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:26.103249 kubelet[2985]: I0428 00:31:26.102441 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:26.131933 kubelet[2985]: E0428 00:31:26.104624 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:27.086472 kubelet[2985]: E0428 00:31:26.837714 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:31.563456 containerd[1643]: time="2026-04-28T00:31:31.257131946Z" level=info msg="connecting to shim b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399" address="unix:///run/containerd/s/e1a54b6da91b4e0421e766b746d4fe4b47cf810f2e905930acdc49bdb00a7da1" namespace=k8s.io protocol=ttrpc version=3 Apr 28 00:31:31.919948 containerd[1643]: time="2026-04-28T00:31:31.125721779Z" level=info msg="connecting to shim 9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3" 
address="unix:///run/containerd/s/a384ddc518c1e9621709867c463bb602f5b126eb8941810a65e8c2e174de9f6f" namespace=k8s.io protocol=ttrpc version=3 Apr 28 00:31:32.805568 kubelet[2985]: E0428 00:31:32.663730 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:31:33.196520 kubelet[2985]: E0428 00:31:33.177888 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:33.323698 kubelet[2985]: E0428 00:31:33.322907 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:31:35.188294 kubelet[2985]: I0428 00:31:35.183995 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:35.348672 kubelet[2985]: E0428 00:31:35.220691 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:36.113457 kubelet[2985]: E0428 00:31:36.109073 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:39.951718 kubelet[2985]: E0428 00:31:39.947640 2985 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:41.860340 kubelet[2985]: E0428 00:31:41.858401 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:42.198693 containerd[1643]: time="2026-04-28T00:31:42.069464389Z" level=info msg="connecting to shim 68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23" address="unix:///run/containerd/s/a8f19aff4c54a4bf8e95907f2a3356235c31cb787b6c243f084642a11761d204" namespace=k8s.io protocol=ttrpc version=3 Apr 28 00:31:43.421698 kubelet[2985]: E0428 00:31:43.420491 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:31:46.459821 kubelet[2985]: E0428 00:31:46.458092 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:46.599244 
kubelet[2985]: I0428 00:31:46.598748 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:46.738057 kubelet[2985]: E0428 00:31:46.735947 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:49.934519 systemd[1]: Started cri-containerd-9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3.scope - libcontainer container 9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3. Apr 28 00:31:50.644717 kubelet[2985]: E0428 00:31:50.639473 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:31:50.949469 kubelet[2985]: E0428 00:31:50.640220 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:31:51.608945 kubelet[2985]: E0428 00:31:51.604371 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:31:51.766635 kubelet[2985]: E0428 00:31:51.676286 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:31:53.159016 systemd[1]: Started cri-containerd-68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23.scope - libcontainer container 68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23. Apr 28 00:31:53.671000 audit: BPF prog-id=59 op=LOAD Apr 28 00:31:53.703323 kernel: kauditd_printk_skb: 5 callbacks suppressed Apr 28 00:31:53.706167 kernel: audit: type=1334 audit(1777336313.671:351): prog-id=59 op=LOAD Apr 28 00:31:54.517000 audit: BPF prog-id=60 op=LOAD Apr 28 00:31:54.517000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e240 a2=98 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.589066 kernel: audit: type=1334 audit(1777336314.517:352): prog-id=60 op=LOAD Apr 28 00:31:54.561629 systemd[1]: Started cri-containerd-b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399.scope - libcontainer container 
b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399. Apr 28 00:31:54.664038 kernel: audit: type=1300 audit(1777336314.517:352): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e240 a2=98 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.680627 kernel: audit: type=1327 audit(1777336314.517:352): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.686000 audit: BPF prog-id=60 op=UNLOAD Apr 28 00:31:54.686000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.740566 kernel: audit: type=1334 audit(1777336314.686:353): prog-id=60 op=UNLOAD Apr 28 00:31:54.740754 kernel: audit: type=1300 audit(1777336314.686:353): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.740794 kernel: audit: type=1327 audit(1777336314.686:353): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.747000 audit: BPF prog-id=61 op=LOAD Apr 28 00:31:54.747000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e490 a2=98 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.747000 audit: BPF prog-id=62 op=LOAD Apr 28 00:31:54.747000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019e220 a2=98 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.747000 audit: BPF prog-id=62 op=UNLOAD Apr 28 00:31:54.747000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Apr 28 00:31:54.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.747000 audit: BPF prog-id=61 op=UNLOAD Apr 28 00:31:54.747000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.773887 kernel: audit: type=1334 audit(1777336314.747:354): prog-id=61 op=LOAD Apr 28 00:31:54.773914 kubelet[2985]: I0428 00:31:54.773346 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:31:54.772000 audit: BPF prog-id=63 op=LOAD Apr 28 00:31:54.772000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e6f0 a2=98 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:54.812234 kernel: audit: type=1300 audit(1777336314.747:354): arch=c000003e syscall=321 
success=yes exit=20 a0=5 a1=c00019e490 a2=98 a3=0 items=0 ppid=3053 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:54.812298 kubelet[2985]: E0428 00:31:54.780701 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:31:54.812619 kernel: audit: type=1327 audit(1777336314.747:354): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961316262653861323763643962326335633564653339376437363235 Apr 28 00:31:55.109000 audit: BPF prog-id=64 op=LOAD Apr 28 00:31:55.778000 audit: BPF prog-id=65 op=LOAD Apr 28 00:31:55.778000 audit[3112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ac240 a2=98 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:55.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:55.804000 audit: BPF prog-id=65 op=UNLOAD Apr 28 00:31:55.804000 audit[3112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:55.804000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:56.294000 audit: BPF prog-id=66 op=LOAD Apr 28 00:31:56.311000 audit: BPF prog-id=67 op=LOAD Apr 28 00:31:56.294000 audit[3112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ac490 a2=98 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:56.315000 audit: BPF prog-id=68 op=LOAD Apr 28 00:31:56.315000 audit[3112]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001ac220 a2=98 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:56.315000 audit: BPF prog-id=68 op=UNLOAD Apr 28 00:31:56.315000 audit[3112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:56.316000 audit: BPF prog-id=66 op=UNLOAD Apr 28 00:31:56.316000 audit[3112]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:56.317000 audit: BPF prog-id=69 op=LOAD Apr 28 00:31:56.317000 audit[3112]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ac6f0 a2=98 a3=0 items=0 ppid=3087 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638343235646361376533323865363736616266383862346531663362 Apr 28 00:31:56.401000 audit: BPF prog-id=70 op=LOAD Apr 28 00:31:56.401000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ca240 a2=98 a3=0 items=0 ppid=3051 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.438000 audit: BPF prog-id=70 op=UNLOAD Apr 28 00:31:56.438000 audit[3075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.452000 audit: BPF prog-id=71 op=LOAD Apr 28 00:31:56.452000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ca490 a2=98 a3=0 items=0 ppid=3051 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.456000 audit: BPF prog-id=72 op=LOAD Apr 28 00:31:56.456000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001ca220 a2=98 a3=0 items=0 ppid=3051 pid=3075 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.639000 audit: BPF prog-id=72 op=UNLOAD Apr 28 00:31:56.639000 audit[3075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.640000 audit: BPF prog-id=71 op=UNLOAD Apr 28 00:31:56.640000 audit[3075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.641000 audit: BPF prog-id=73 op=LOAD Apr 28 00:31:56.641000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ca6f0 a2=98 a3=0 items=0 ppid=3051 
pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:31:56.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237656532353332353962366133623461653562366262333238376264 Apr 28 00:31:56.666081 kubelet[2985]: E0428 00:31:56.653479 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:31:58.338681 kubelet[2985]: E0428 00:31:58.285806 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:32:00.645392 containerd[1643]: time="2026-04-28T00:32:00.644672437Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"82faa9ca0765979bc0118d46e6420ed8\" namespace:\"kube-system\" returns sandbox id \"68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23\"" Apr 28 00:32:02.265967 kubelet[2985]: E0428 00:32:02.260541 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:02.507913 kubelet[2985]: E0428 00:32:02.259824 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:32:02.613371 containerd[1643]: time="2026-04-28T00:32:02.598625417Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"fe37cb66764ed0c204cee10807d65f19\" namespace:\"kube-system\" returns sandbox id \"b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399\"" Apr 28 00:32:02.638283 kubelet[2985]: I0428 00:32:02.638004 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:32:02.638713 kubelet[2985]: E0428 00:32:02.638687 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:32:02.662486 containerd[1643]: time="2026-04-28T00:32:02.662140944Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"66a243c17a59d09458bf3b09d66260f5\" namespace:\"kube-system\" returns sandbox id \"9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3\"" Apr 28 00:32:02.663892 kubelet[2985]: E0428 00:32:02.663797 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:02.847631 kubelet[2985]: E0428 00:32:02.847301 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
00:32:02.847631 kubelet[2985]: E0428 00:32:02.847279 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:32:03.098990 containerd[1643]: time="2026-04-28T00:32:03.097383390Z" level=info msg="CreateContainer within sandbox \"68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23\" for container name:\"kube-controller-manager\"" Apr 28 00:32:03.137211 containerd[1643]: time="2026-04-28T00:32:03.097650438Z" level=info msg="CreateContainer within sandbox \"b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399\" for container name:\"kube-apiserver\"" Apr 28 00:32:03.146078 containerd[1643]: time="2026-04-28T00:32:03.097433709Z" level=info msg="CreateContainer within sandbox \"9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3\" for container name:\"kube-scheduler\"" Apr 28 00:32:03.746406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896755233.mount: Deactivated successfully. Apr 28 00:32:03.925912 containerd[1643]: time="2026-04-28T00:32:03.875823020Z" level=info msg="Container caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:32:03.946784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846983546.mount: Deactivated successfully. Apr 28 00:32:03.947542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249381895.mount: Deactivated successfully. 
Apr 28 00:32:04.048089 containerd[1643]: time="2026-04-28T00:32:03.946695703Z" level=info msg="Container 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:32:04.169622 containerd[1643]: time="2026-04-28T00:32:04.164685620Z" level=info msg="Container 04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:32:05.097751 containerd[1643]: time="2026-04-28T00:32:05.094564449Z" level=info msg="CreateContainer within sandbox \"b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399\" for name:\"kube-apiserver\" returns container id \"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\"" Apr 28 00:32:05.221936 containerd[1643]: time="2026-04-28T00:32:05.218689775Z" level=info msg="StartContainer for \"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\"" Apr 28 00:32:05.245089 containerd[1643]: time="2026-04-28T00:32:05.244518976Z" level=info msg="CreateContainer within sandbox \"68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23\" for name:\"kube-controller-manager\" returns container id \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\"" Apr 28 00:32:05.259451 containerd[1643]: time="2026-04-28T00:32:05.245085354Z" level=info msg="CreateContainer within sandbox \"9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3\" for name:\"kube-scheduler\" returns container id \"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\"" Apr 28 00:32:05.281405 containerd[1643]: time="2026-04-28T00:32:05.279247870Z" level=info msg="StartContainer for \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\"" Apr 28 00:32:05.308734 containerd[1643]: time="2026-04-28T00:32:05.301771725Z" level=info msg="StartContainer for \"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\"" Apr 28 00:32:05.310615 containerd[1643]: 
time="2026-04-28T00:32:05.281723171Z" level=info msg="connecting to shim caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" address="unix:///run/containerd/s/e1a54b6da91b4e0421e766b746d4fe4b47cf810f2e905930acdc49bdb00a7da1" protocol=ttrpc version=3 Apr 28 00:32:05.312956 containerd[1643]: time="2026-04-28T00:32:05.312764458Z" level=info msg="connecting to shim 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2" address="unix:///run/containerd/s/a8f19aff4c54a4bf8e95907f2a3356235c31cb787b6c243f084642a11761d204" protocol=ttrpc version=3 Apr 28 00:32:05.412215 kubelet[2985]: E0428 00:32:05.400193 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:32:05.557766 kubelet[2985]: E0428 00:32:05.557255 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:32:05.765938 containerd[1643]: time="2026-04-28T00:32:05.764336029Z" level=info msg="connecting to shim 04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f" address="unix:///run/containerd/s/a384ddc518c1e9621709867c463bb602f5b126eb8941810a65e8c2e174de9f6f" protocol=ttrpc version=3 Apr 28 00:32:06.935515 kubelet[2985]: E0428 00:32:06.933162 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:32:07.880583 systemd[1]: Started cri-containerd-167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2.scope - libcontainer container 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2. 
Apr 28 00:32:08.063782 systemd[1]: Started cri-containerd-caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696.scope - libcontainer container caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696. Apr 28 00:32:08.511000 audit: BPF prog-id=74 op=LOAD Apr 28 00:32:08.561335 kernel: kauditd_printk_skb: 56 callbacks suppressed Apr 28 00:32:08.565105 kernel: audit: type=1334 audit(1777336328.511:375): prog-id=74 op=LOAD Apr 28 00:32:08.566000 audit: BPF prog-id=75 op=LOAD Apr 28 00:32:08.576833 kernel: audit: type=1334 audit(1777336328.566:376): prog-id=75 op=LOAD Apr 28 00:32:08.566000 audit[3177]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206240 a2=98 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.585000 audit: BPF prog-id=75 op=UNLOAD Apr 28 00:32:08.585000 audit[3177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.654000 audit: BPF prog-id=76 op=LOAD Apr 28 00:32:08.654000 audit[3177]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206490 a2=98 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.690000 audit: BPF prog-id=77 op=LOAD Apr 28 00:32:08.690000 audit[3177]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000206220 a2=98 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.696000 audit: BPF prog-id=77 op=UNLOAD Apr 28 00:32:08.696000 audit[3177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.762000 audit: BPF prog-id=78 op=LOAD Apr 
28 00:32:08.824000 audit: BPF prog-id=76 op=UNLOAD Apr 28 00:32:08.824000 audit[3177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.831000 audit: BPF prog-id=79 op=LOAD Apr 28 00:32:08.831000 audit[3177]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002066f0 a2=98 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:08.998000 audit: BPF prog-id=80 op=LOAD Apr 28 00:32:08.998000 audit[3184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186240 a2=98 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.998000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:08.998000 audit: BPF prog-id=80 op=UNLOAD Apr 28 00:32:08.998000 audit[3184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:08.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:09.009000 audit: BPF prog-id=81 op=LOAD Apr 28 00:32:09.009000 audit[3184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186490 a2=98 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:09.041000 audit: BPF prog-id=82 op=LOAD Apr 28 00:32:09.041000 audit[3184]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186220 a2=98 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Apr 28 00:32:09.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:09.041000 audit: BPF prog-id=82 op=UNLOAD Apr 28 00:32:09.041000 audit[3184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:09.042000 audit: BPF prog-id=81 op=UNLOAD Apr 28 00:32:09.042000 audit[3184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:09.042000 audit: BPF prog-id=83 op=LOAD Apr 28 00:32:09.042000 audit[3184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866f0 a2=98 a3=0 items=0 ppid=3051 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666662636561363462303134333032383164616631316637346231 Apr 28 00:32:09.225305 kernel: audit: type=1300 audit(1777336328.566:376): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206240 a2=98 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.225344 kernel: audit: type=1327 audit(1777336328.566:376): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:09.225413 kernel: audit: type=1334 audit(1777336328.585:377): prog-id=75 op=UNLOAD Apr 28 00:32:09.225487 kernel: audit: type=1300 audit(1777336328.585:377): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.225538 kernel: audit: type=1327 audit(1777336328.585:377): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:09.225553 kernel: audit: type=1334 audit(1777336328.654:378): prog-id=76 op=LOAD Apr 28 00:32:09.225569 kernel: audit: type=1300 audit(1777336328.654:378): 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206490 a2=98 a3=0 items=0 ppid=3087 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:09.225593 kernel: audit: type=1327 audit(1777336328.654:378): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373732346565336463643739646133666130336563313633633366 Apr 28 00:32:09.375676 systemd[1]: Started cri-containerd-04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f.scope - libcontainer container 04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f. Apr 28 00:32:10.449516 kubelet[2985]: I0428 00:32:10.439700 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:32:10.646720 kubelet[2985]: E0428 00:32:10.645723 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:32:10.676000 audit: BPF prog-id=84 op=LOAD Apr 28 00:32:10.819000 audit: BPF prog-id=85 op=LOAD Apr 28 00:32:10.819000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00012a240 a2=98 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:10.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:10.819000 
audit: BPF prog-id=85 op=UNLOAD Apr 28 00:32:10.819000 audit[3199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:10.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:10.819000 audit: BPF prog-id=86 op=LOAD Apr 28 00:32:10.819000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00012a490 a2=98 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:10.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:10.820000 audit: BPF prog-id=87 op=LOAD Apr 28 00:32:10.820000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00012a220 a2=98 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:10.820000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:10.820000 audit: BPF prog-id=87 op=UNLOAD Apr 28 00:32:10.820000 audit[3199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:10.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:10.822000 audit: BPF prog-id=86 op=UNLOAD Apr 28 00:32:10.822000 audit[3199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:32:10.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:10.823000 audit: BPF prog-id=88 op=LOAD Apr 28 00:32:10.823000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00012a6f0 a2=98 a3=0 items=0 ppid=3053 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 
00:32:10.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034636665663931346630633738616264303134383931373034366139 Apr 28 00:32:11.209002 containerd[1643]: time="2026-04-28T00:32:11.208640565Z" level=info msg="StartContainer for \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" returns successfully" Apr 28 00:32:11.463983 containerd[1643]: time="2026-04-28T00:32:11.462332933Z" level=info msg="StartContainer for \"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" returns successfully" Apr 28 00:32:12.475241 containerd[1643]: time="2026-04-28T00:32:12.472103468Z" level=info msg="StartContainer for \"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" returns successfully" Apr 28 00:32:12.550950 kubelet[2985]: E0428 00:32:12.497714 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:32:12.640213 kubelet[2985]: E0428 00:32:12.638949 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC 
m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:32:14.368675 kubelet[2985]: E0428 00:32:14.363553 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:14.551672 kubelet[2985]: E0428 00:32:14.550133 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:17.054345 kubelet[2985]: E0428 00:32:17.031554 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:32:17.263169 kubelet[2985]: E0428 00:32:17.262203 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:17.263169 kubelet[2985]: E0428 00:32:17.262751 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:17.752543 kubelet[2985]: I0428 00:32:17.752194 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:32:17.805482 kubelet[2985]: E0428 00:32:17.804791 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:32:17.882276 kubelet[2985]: E0428 00:32:17.881833 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:17.896655 kubelet[2985]: E0428 00:32:17.896519 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:18.663137 kubelet[2985]: E0428 00:32:18.662715 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:18.663137 kubelet[2985]: E0428 00:32:18.663003 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:18.665420 kubelet[2985]: E0428 00:32:18.663794 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:18.665420 kubelet[2985]: E0428 00:32:18.664028 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:18.665420 kubelet[2985]: E0428 00:32:18.664303 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:18.665420 kubelet[2985]: E0428 00:32:18.664426 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:20.295568 kubelet[2985]: E0428 00:32:20.208063 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:20.539249 kubelet[2985]: E0428 00:32:20.537319 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:20.539249 kubelet[2985]: E0428 00:32:20.539564 2985 
kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:20.576304 kubelet[2985]: E0428 00:32:20.539830 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:21.461579 kubelet[2985]: E0428 00:32:21.460256 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:21.606727 kubelet[2985]: E0428 00:32:21.587141 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:21.884406 kubelet[2985]: E0428 00:32:21.861600 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:21.915208 kubelet[2985]: E0428 00:32:21.912964 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:23.357594 kubelet[2985]: E0428 00:32:23.355756 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:23.548282 kubelet[2985]: E0428 00:32:23.528272 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:23.910804 kubelet[2985]: E0428 00:32:23.901692 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:32:23.962737 kubelet[2985]: 
E0428 00:32:23.962079 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:32:24.848367 kubelet[2985]: I0428 00:32:24.845261 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:32:27.170630 kubelet[2985]: E0428 00:32:27.168221 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:32:29.597347 kubelet[2985]: E0428 00:32:29.594519 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:32:32.264302 kubelet[2985]: E0428 00:32:32.262925 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:32:32.803357 kubelet[2985]: E0428 00:32:32.792286 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC 
m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:32:33.353600 kubelet[2985]: E0428 00:32:33.350053 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:32:34.867722 kubelet[2985]: E0428 00:32:34.866583 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:32:37.190645 kubelet[2985]: E0428 00:32:37.189527 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:32:42.788485 kubelet[2985]: I0428 00:32:42.706395 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:32:45.063415 kubelet[2985]: E0428 00:32:45.062653 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:32:46.748331 kubelet[2985]: E0428 00:32:46.747671 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:32:47.238898 kubelet[2985]: E0428 00:32:47.237236 2985 eviction_manager.go:292] "Eviction manager: failed to 
get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:32:50.963532 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1618712807 wd_nsec: 1618712044 Apr 28 00:32:54.554203 kubelet[2985]: E0428 00:32:53.860682 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:32:55.758668 kubelet[2985]: E0428 00:32:54.108479 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:32:57.109348 kubelet[2985]: E0428 00:32:55.983347 2985 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de1d5ad00e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,LastTimestamp:2026-04-28 00:30:27.484991715 +0000 UTC m=+9.237899629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:32:57.485392 kubelet[2985]: E0428 00:32:57.296398 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:32:57.638160 kubelet[2985]: E0428 00:32:57.637507 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:33:02.727525 kubelet[2985]: E0428 00:33:02.725973 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:33:02.766210 kubelet[2985]: E0428 00:33:02.742227 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:33:04.142663 kubelet[2985]: E0428 00:33:04.139241 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:33:04.773020 kubelet[2985]: I0428 00:33:04.772586 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:33:07.525986 kubelet[2985]: E0428 00:33:07.523464 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:33:08.095705 kubelet[2985]: E0428 00:33:08.094395 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS 
handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:33:09.350155 kubelet[2985]: E0428 00:33:09.336021 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:33:17.484716 kubelet[2985]: E0428 00:33:16.750725 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:33:18.150016 kubelet[2985]: E0428 00:33:18.144219 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:33:19.353797 kubelet[2985]: E0428 00:33:19.352523 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:33:21.558628 kubelet[2985]: E0428 00:33:21.552761 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:33:22.107486 kubelet[2985]: E0428 00:33:21.965189 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:33:27.778815 kubelet[2985]: E0428 00:33:27.704511 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:33:27.778815 kubelet[2985]: I0428 00:33:27.760787 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:33:29.938711 kubelet[2985]: E0428 00:33:29.676802 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:33:37.007692 kubelet[2985]: E0428 00:33:36.513694 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS 
handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:33:41.727813 kubelet[2985]: E0428 00:33:41.722470 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:33:42.575424 kubelet[2985]: E0428 00:33:42.569292 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:33:44.209617 kubelet[2985]: E0428 00:33:42.122601 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:33:45.728102 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 28 00:33:52.981421 systemd-tmpfiles[3288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 28 00:33:53.007113 systemd-tmpfiles[3288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Apr 28 00:33:53.883260 kubelet[2985]: E0428 00:33:52.904023 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:33:53.567682 systemd-tmpfiles[3288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 00:33:54.020690 systemd-tmpfiles[3288]: ACLs are not supported, ignoring. Apr 28 00:33:54.103660 systemd-tmpfiles[3288]: ACLs are not supported, ignoring. Apr 28 00:33:55.109135 systemd-tmpfiles[3288]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:33:55.147505 systemd-tmpfiles[3288]: Skipping /boot Apr 28 00:33:57.107566 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 28 00:33:57.281480 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 28 00:33:57.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:33:57.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:33:57.685705 kernel: kauditd_printk_skb: 56 callbacks suppressed Apr 28 00:33:57.356786 systemd[1]: systemd-tmpfiles-clean.service: Consumed 2.796s CPU time, 4.5M memory peak. Apr 28 00:33:57.779436 kernel: audit: type=1130 audit(1777336437.354:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:33:57.788817 kernel: audit: type=1131 audit(1777336437.354:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:34:07.570104 kubelet[2985]: E0428 00:34:05.636501 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:34:08.990568 kubelet[2985]: E0428 00:34:06.813346 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:34:21.402808 kubelet[2985]: E0428 00:34:21.347573 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:34:23.075972 kubelet[2985]: E0428 00:34:23.049711 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:34:28.473374 kubelet[2985]: E0428 00:34:28.472423 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:34:28.710005 kubelet[2985]: E0428 00:34:28.264733 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:34:29.345690 kubelet[2985]: I0428 00:34:29.345262 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:34:38.651708 kubelet[2985]: E0428 00:34:35.183153 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:34:41.093794 kubelet[2985]: E0428 00:34:41.084291 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:34:48.181971 kubelet[2985]: E0428 00:34:46.403795 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:34:48.954599 kubelet[2985]: E0428 00:34:40.668830 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:34:49.054655 kubelet[2985]: E0428 00:34:48.292645 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:49.302908 kubelet[2985]: E0428 00:34:48.292966 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:34:49.302908 kubelet[2985]: E0428 00:34:49.246178 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:50.316046 kubelet[2985]: E0428 00:34:50.315631 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:34:50.341380 kubelet[2985]: E0428 00:34:50.340948 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:34:54.999652 kubelet[2985]: E0428 00:34:54.964263 2985 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:34:58.853562 kubelet[2985]: E0428 00:34:58.850221 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:34:59.262720 kubelet[2985]: E0428 00:34:59.261306 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:34:59.262720 kubelet[2985]: E0428 00:34:59.261894 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:35:00.563623 kubelet[2985]: E0428 00:35:00.555805 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:35:02.419807 kubelet[2985]: E0428 00:35:02.360505 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:35:11.731166 kubelet[2985]: E0428 00:35:10.994360 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: 
node \"localhost\" not found" Apr 28 00:35:30.558697 kubelet[2985]: E0428 00:35:30.549434 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:35:32.175277 kubelet[2985]: E0428 00:35:32.162291 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:35:37.890385 kubelet[2985]: E0428 00:35:22.793360 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:35:40.599364 kubelet[2985]: E0428 00:35:38.044480 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:35:42.078409 kubelet[2985]: E0428 00:35:41.375492 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:35:42.704489 kubelet[2985]: E0428 00:35:42.702965 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:35:59.376391 kubelet[2985]: E0428 00:35:59.282798 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:36:11.341524 kubelet[2985]: E0428 00:36:08.706394 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:36:12.594962 kubelet[2985]: E0428 00:36:04.352367 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:36:12.955199 kubelet[2985]: E0428 00:36:10.674722 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:36:13.374000 audit: BPF prog-id=83 op=UNLOAD Apr 28 00:36:13.489000 audit: BPF prog-id=78 op=UNLOAD Apr 28 00:36:13.351550 systemd[1]: cri-containerd-caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696.scope: Deactivated successfully. Apr 28 00:36:13.979524 kernel: audit: type=1334 audit(1777336573.374:401): prog-id=83 op=UNLOAD Apr 28 00:36:13.505918 systemd[1]: cri-containerd-caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696.scope: Consumed 3min 31.670s CPU time, 47.5M memory peak. 
Apr 28 00:36:13.985430 kernel: audit: type=1334 audit(1777336573.489:402): prog-id=78 op=UNLOAD Apr 28 00:36:14.565323 kubelet[2985]: E0428 00:36:10.876742 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:36:16.204409 kubelet[2985]: I0428 00:36:16.194342 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:36:17.127694 kubelet[2985]: E0428 00:36:16.595702 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:36:17.374574 kubelet[2985]: E0428 00:36:17.176575 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:36:17.781051 kubelet[2985]: E0428 00:36:16.660567 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:36:18.532353 containerd[1643]: time="2026-04-28T00:36:18.529231182Z" level=info msg="received container exit event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844}" Apr 28 00:36:20.832187 kubelet[2985]: E0428 00:36:18.526269 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:36:24.261582 kubelet[2985]: E0428 00:36:24.259206 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:36:27.770951 kubelet[2985]: E0428 00:36:27.760617 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:36:28.363380 kubelet[2985]: E0428 00:36:25.376495 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:36:32.427557 containerd[1643]: time="2026-04-28T00:36:32.426129676Z" level=error msg="failed to handle container TaskExit event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844}" error="failed to stop container: context deadline exceeded" Apr 28 00:36:32.999131 kubelet[2985]: E0428 00:36:30.470655 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:36:33.697767 containerd[1643]: time="2026-04-28T00:36:31.830796144Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 28 00:36:36.083018 containerd[1643]: time="2026-04-28T00:36:36.066091658Z" level=info msg="TaskExit event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844}" Apr 28 00:36:44.028602 kubelet[2985]: E0428 00:36:44.007785 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:36:55.201402 kubelet[2985]: E0428 00:36:42.570495 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:36:57.981137 kubelet[2985]: E0428 00:36:57.972736 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:36:58.974834 
kubelet[2985]: E0428 00:36:57.988459 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:36:59.040400 containerd[1643]: time="2026-04-28T00:36:59.033549380Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 28 00:36:59.184455 containerd[1643]: time="2026-04-28T00:36:59.062315320Z" level=error msg="Failed to handle backOff event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844} for caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 00:36:59.558505 containerd[1643]: time="2026-04-28T00:36:59.115086383Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 28 00:37:01.784071 kubelet[2985]: E0428 00:37:01.776521 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:37:02.987573 containerd[1643]: time="2026-04-28T00:37:01.041760272Z" level=info msg="container event discarded" container=68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23 type=CONTAINER_CREATED_EVENT Apr 28 00:37:03.883771 containerd[1643]: time="2026-04-28T00:37:02.629761504Z" level=info msg="TaskExit event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844}" Apr 28 00:37:04.177344 kubelet[2985]: E0428 00:37:03.167800 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:37:04.570475 containerd[1643]: 
time="2026-04-28T00:37:03.608374475Z" level=info msg="container event discarded" container=68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23 type=CONTAINER_STARTED_EVENT Apr 28 00:37:06.336435 kubelet[2985]: E0428 00:37:06.335368 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:37:06.995325 containerd[1643]: time="2026-04-28T00:37:06.629875301Z" level=info msg="container event discarded" container=b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399 type=CONTAINER_CREATED_EVENT Apr 28 00:37:06.995325 containerd[1643]: time="2026-04-28T00:37:06.749903494Z" level=info msg="container event discarded" container=b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399 type=CONTAINER_STARTED_EVENT Apr 28 00:37:07.964248 containerd[1643]: time="2026-04-28T00:37:07.902439331Z" level=info msg="container event discarded" container=9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3 type=CONTAINER_CREATED_EVENT Apr 28 00:37:09.040373 kubelet[2985]: E0428 00:37:09.038331 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:37:09.896866 kubelet[2985]: E0428 00:37:08.664166 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:37:09.989865 containerd[1643]: time="2026-04-28T00:37:07.831261686Z" level=error msg="get state for caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" error="context deadline exceeded" Apr 28 00:37:10.193480 containerd[1643]: time="2026-04-28T00:37:09.892164424Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 28 00:37:10.325718 containerd[1643]: time="2026-04-28T00:37:10.111714332Z" level=warning msg="unknown status" status=0 Apr 28 00:37:10.638731 containerd[1643]: time="2026-04-28T00:37:09.892237100Z" level=info msg="container event discarded" 
container=9a1bbe8a27cd9b2c5c5de397d762547016f65005043da5e49ed4938780441df3 type=CONTAINER_STARTED_EVENT Apr 28 00:37:10.892589 containerd[1643]: time="2026-04-28T00:37:10.684332264Z" level=info msg="container event discarded" container=caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 type=CONTAINER_CREATED_EVENT Apr 28 00:37:11.255886 containerd[1643]: time="2026-04-28T00:37:10.977423931Z" level=info msg="container event discarded" container=167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 type=CONTAINER_CREATED_EVENT Apr 28 00:37:11.739798 kubelet[2985]: E0428 00:37:11.255503 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:37:11.739798 kubelet[2985]: E0428 00:37:11.255834 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:11.739798 kubelet[2985]: E0428 00:37:11.256644 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:37:12.108747 containerd[1643]: time="2026-04-28T00:37:11.435627505Z" level=info msg="container event discarded" container=04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f type=CONTAINER_CREATED_EVENT Apr 28 00:37:13.324053 containerd[1643]: time="2026-04-28T00:37:13.066487552Z" level=info msg="container event discarded" container=167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 type=CONTAINER_STARTED_EVENT Apr 28 00:37:13.549704 containerd[1643]: time="2026-04-28T00:37:13.324579986Z" level=info msg="container event discarded" 
container=caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 type=CONTAINER_STARTED_EVENT Apr 28 00:37:13.764518 containerd[1643]: time="2026-04-28T00:37:13.540707035Z" level=info msg="container event discarded" container=04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f type=CONTAINER_STARTED_EVENT Apr 28 00:37:14.235519 kubelet[2985]: E0428 00:37:14.235214 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:37:14.444674 kubelet[2985]: E0428 00:37:14.235219 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:37:14.451602 kubelet[2985]: E0428 00:37:14.052931 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:37:14.669652 kubelet[2985]: E0428 00:37:14.497244 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC 
m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:16.537679 kubelet[2985]: E0428 00:37:16.187603 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:37:17.258407 kubelet[2985]: E0428 00:37:17.253304 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:37:17.336404 kubelet[2985]: E0428 00:37:17.259430 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:37:17.469495 kubelet[2985]: E0428 00:37:17.182631 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:18.060698 kubelet[2985]: E0428 00:37:18.045316 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:37:19.401573 containerd[1643]: time="2026-04-28T00:37:19.107820310Z" level=error msg="get state for 
caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" error="context deadline exceeded" Apr 28 00:37:19.746641 containerd[1643]: time="2026-04-28T00:37:19.728473682Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 28 00:37:19.885270 kubelet[2985]: E0428 00:37:19.882523 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:37:20.358024 containerd[1643]: time="2026-04-28T00:37:19.724643862Z" level=warning msg="unknown status" status=0 Apr 28 00:37:21.745151 kubelet[2985]: I0428 00:37:21.741536 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:37:22.284135 kubelet[2985]: E0428 00:37:22.280589 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:37:23.067626 containerd[1643]: time="2026-04-28T00:37:22.948497371Z" level=error msg="failed to delete task" error="context deadline exceeded" id=caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 Apr 28 00:37:24.025718 containerd[1643]: time="2026-04-28T00:37:24.022654857Z" level=error msg="Failed to handle backOff event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844} for caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 00:37:25.010772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696-rootfs.mount: Deactivated successfully. 
Apr 28 00:37:25.182531 containerd[1643]: time="2026-04-28T00:37:25.066393468Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 28 00:37:26.066341 kubelet[2985]: E0428 00:37:26.057334 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:37:26.444971 kubelet[2985]: E0428 00:37:26.241899 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:37:26.634107 kubelet[2985]: E0428 00:37:26.410000 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:26.952459 kubelet[2985]: E0428 00:37:26.895067 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:37:28.419672 containerd[1643]: time="2026-04-28T00:37:28.396548619Z" level=info msg="TaskExit event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844}" Apr 28 00:37:30.441680 kubelet[2985]: E0428 00:37:30.436251 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:37:35.692997 kubelet[2985]: E0428 00:37:35.590171 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:37:37.156210 kubelet[2985]: E0428 00:37:37.101752 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:37.991265 kubelet[2985]: I0428 00:37:37.865637 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:37:38.172801 
kubelet[2985]: E0428 00:37:38.158540 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:37:38.741657 containerd[1643]: time="2026-04-28T00:37:38.510782409Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 28 00:37:39.099047 containerd[1643]: time="2026-04-28T00:37:39.041104397Z" level=error msg="get state for caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" error="context deadline exceeded" Apr 28 00:37:39.099047 containerd[1643]: time="2026-04-28T00:37:39.041898711Z" level=warning msg="unknown status" status=0 Apr 28 00:37:39.435354 containerd[1643]: time="2026-04-28T00:37:39.418315476Z" level=error msg="failed to delete task" error="context deadline exceeded" id=caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 Apr 28 00:37:39.435354 containerd[1643]: time="2026-04-28T00:37:39.418730287Z" level=error msg="Failed to handle backOff event container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844} for caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 00:37:39.581482 containerd[1643]: time="2026-04-28T00:37:39.432181772Z" level=error msg="failed to drain init process caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 00:37:39.728770 containerd[1643]: time="2026-04-28T00:37:39.698219376Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Apr 28 00:37:40.309616 containerd[1643]: time="2026-04-28T00:37:40.307970083Z" level=error msg="ttrpc: 
received message on inactive stream" stream=55 Apr 28 00:37:40.452632 kubelet[2985]: E0428 00:37:40.450665 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:37:44.815039 kubelet[2985]: E0428 00:37:44.810568 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:37:45.459576 kubelet[2985]: I0428 00:37:45.459103 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:37:45.535581 kubelet[2985]: E0428 00:37:45.531335 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:37:47.230035 kubelet[2985]: E0428 00:37:47.229160 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:48.266203 containerd[1643]: time="2026-04-28T00:37:48.265727075Z" level=info msg="TaskExit event 
container_id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" id:\"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" pid:3218 exit_status:1 exited_at:{seconds:1777336576 nanos:818784844}" Apr 28 00:37:50.461244 kubelet[2985]: E0428 00:37:50.458968 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:37:50.975006 kubelet[2985]: E0428 00:37:50.913733 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:37:51.990663 kubelet[2985]: E0428 00:37:51.985834 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:37:52.091690 kubelet[2985]: E0428 00:37:52.083767 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:37:54.245202 kubelet[2985]: I0428 00:37:54.241605 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:37:54.683447 kubelet[2985]: E0428 00:37:54.613693 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:37:55.861257 kubelet[2985]: E0428 00:37:55.706591 2985 
certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:37:57.353956 kubelet[2985]: E0428 00:37:57.343388 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:57.353956 kubelet[2985]: E0428 00:37:57.348981 2985 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de31c2d438c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,LastTimestamp:2026-04-28 00:30:32.962769804 +0000 UTC m=+14.715677709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:57.501481 kubelet[2985]: E0428 00:37:57.353670 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:37:59.263925 kubelet[2985]: E0428 00:37:59.263020 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:00.470321 kubelet[2985]: E0428 00:38:00.466455 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:38:02.840148 kubelet[2985]: I0428 00:38:02.839584 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:38:02.865441 kubelet[2985]: E0428 00:38:02.855440 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:38:03.146151 kubelet[2985]: E0428 00:38:03.144939 
2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:38:03.146151 kubelet[2985]: I0428 00:38:03.145313 2985 scope.go:117] "RemoveContainer" containerID="caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" Apr 28 00:38:03.146151 kubelet[2985]: E0428 00:38:03.145543 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:03.355153 containerd[1643]: time="2026-04-28T00:38:03.354784633Z" level=info msg="CreateContainer within sandbox \"b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399\" for container name:\"kube-apiserver\" attempt:1" Apr 28 00:38:06.232075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499372376.mount: Deactivated successfully. Apr 28 00:38:06.246692 containerd[1643]: time="2026-04-28T00:38:06.245959819Z" level=info msg="Container df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:38:06.248606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332816005.mount: Deactivated successfully. 
Apr 28 00:38:06.413405 kubelet[2985]: E0428 00:38:06.410942 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:06.590448 kubelet[2985]: E0428 00:38:06.469591 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:10.580483 kubelet[2985]: E0428 00:38:10.551561 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:38:11.673247 kubelet[2985]: E0428 00:38:11.669391 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:38:11.963225 containerd[1643]: time="2026-04-28T00:38:11.908611966Z" level=info msg="CreateContainer within sandbox 
\"b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399\" for name:\"kube-apiserver\" attempt:1 returns container id \"df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76\"" Apr 28 00:38:12.789370 kubelet[2985]: E0428 00:38:12.788379 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:38:12.878044 kubelet[2985]: I0428 00:38:12.868083 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:38:13.131156 containerd[1643]: time="2026-04-28T00:38:12.970898419Z" level=info msg="StartContainer for \"df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76\"" Apr 28 00:38:13.182197 kubelet[2985]: E0428 00:38:13.162475 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:38:14.133281 kubelet[2985]: E0428 00:38:14.112034 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:15.538793 containerd[1643]: time="2026-04-28T00:38:15.306615190Z" level=info msg="connecting to shim df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76" address="unix:///run/containerd/s/e1a54b6da91b4e0421e766b746d4fe4b47cf810f2e905930acdc49bdb00a7da1" protocol=ttrpc version=3 Apr 28 00:38:17.138184 kubelet[2985]: E0428 00:38:16.649421 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:20.871834 kubelet[2985]: E0428 00:38:20.858755 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:38:21.448057 kubelet[2985]: E0428 00:38:21.447507 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:22.367097 kubelet[2985]: I0428 00:38:22.366643 2985 scope.go:117] "RemoveContainer" containerID="caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696" Apr 28 00:38:22.367097 kubelet[2985]: I0428 00:38:22.366711 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:38:22.367097 kubelet[2985]: E0428 00:38:22.367081 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:38:23.658500 kubelet[2985]: E0428 00:38:23.615879 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Apr 28 00:38:23.842292 kubelet[2985]: E0428 00:38:23.839326 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:38:28.074671 kubelet[2985]: E0428 00:38:28.067835 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:28.355764 kubelet[2985]: E0428 00:38:28.350348 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:38:28.723394 kubelet[2985]: E0428 00:38:28.715505 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 
28 00:38:28.797662 kubelet[2985]: E0428 00:38:28.775740 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:31.354652 kubelet[2985]: E0428 00:38:31.336568 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:38:32.815438 containerd[1643]: time="2026-04-28T00:38:32.595296854Z" level=info msg="RemoveContainer for \"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\"" Apr 28 00:38:35.389938 kubelet[2985]: I0428 00:38:35.389269 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:38:36.140464 kubelet[2985]: E0428 00:38:36.136373 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:38:36.911789 containerd[1643]: time="2026-04-28T00:38:36.838784086Z" level=error msg="get state for b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399" error="context deadline exceeded" Apr 28 00:38:37.238005 kubelet[2985]: E0428 00:38:36.767799 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:37.327354 containerd[1643]: time="2026-04-28T00:38:37.323394323Z" level=warning msg="unknown status" status=0 Apr 28 00:38:38.536777 kubelet[2985]: E0428 00:38:38.515099 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:39.399304 systemd[1]: Started cri-containerd-df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76.scope - libcontainer container df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76. Apr 28 00:38:42.512717 kubelet[2985]: E0428 00:38:42.482328 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:38:44.399204 kubelet[2985]: E0428 00:38:44.353653 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:38:47.090656 kubelet[2985]: E0428 00:38:47.090223 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:38:51.011765 kubelet[2985]: E0428 00:38:51.009316 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:38:51.514123 containerd[1643]: time="2026-04-28T00:38:51.497573738Z" level=info msg="RemoveContainer for \"caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696\" returns successfully" Apr 28 00:38:51.779437 kubelet[2985]: E0428 00:38:51.376224 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:38:52.631956 kubelet[2985]: E0428 00:38:52.629937 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:38:55.433937 kubelet[2985]: E0428 00:38:55.433688 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:38:55.431000 
audit: BPF prog-id=89 op=LOAD Apr 28 00:38:55.447592 kernel: audit: type=1334 audit(1777336735.431:403): prog-id=89 op=LOAD Apr 28 00:38:55.454000 audit: BPF prog-id=90 op=LOAD Apr 28 00:38:55.454000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c240 a2=98 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:55.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:55.454000 audit: BPF prog-id=90 op=UNLOAD Apr 28 00:38:55.454000 audit[3334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:55.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:55.458000 audit: BPF prog-id=91 op=LOAD Apr 28 00:38:55.458000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c490 a2=98 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:55.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:55.458000 audit: BPF prog-id=92 op=LOAD Apr 28 00:38:55.458000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c220 a2=98 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:55.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:55.458000 audit: BPF prog-id=92 op=UNLOAD Apr 28 00:38:55.458000 audit[3334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:55.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:55.733000 audit: BPF prog-id=91 op=UNLOAD Apr 28 00:38:55.733000 audit[3334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 
00:38:55.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:55.892000 audit: BPF prog-id=93 op=LOAD Apr 28 00:38:55.892000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6f0 a2=98 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:55.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:56.749803 kernel: audit: type=1334 audit(1777336735.454:404): prog-id=90 op=LOAD Apr 28 00:38:56.750014 kernel: audit: type=1300 audit(1777336735.454:404): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c240 a2=98 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:56.750040 kernel: audit: type=1327 audit(1777336735.454:404): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:56.873735 kernel: audit: type=1334 audit(1777336735.454:405): prog-id=90 op=UNLOAD Apr 28 00:38:57.138201 kernel: audit: type=1300 audit(1777336735.454:405): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 
a2=0 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:57.206958 kernel: audit: type=1327 audit(1777336735.454:405): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:57.245621 kernel: audit: type=1334 audit(1777336735.458:406): prog-id=91 op=LOAD Apr 28 00:38:57.315266 kernel: audit: type=1300 audit(1777336735.458:406): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c490 a2=98 a3=0 items=0 ppid=3051 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:38:57.374939 kernel: audit: type=1327 audit(1777336735.458:406): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323533666330306432396234653130653030633335313637643431 Apr 28 00:38:58.211669 containerd[1643]: time="2026-04-28T00:38:57.968051147Z" level=error msg="get state for df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76" error="context deadline exceeded" Apr 28 00:38:58.693538 containerd[1643]: time="2026-04-28T00:38:58.373614309Z" level=warning msg="unknown status" status=0 Apr 28 00:38:59.780603 kubelet[2985]: I0428 00:38:59.271669 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:39:01.166582 kubelet[2985]: E0428 00:38:59.968865 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:39:02.466762 kubelet[2985]: E0428 00:39:02.448489 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:39:04.614232 kubelet[2985]: E0428 00:39:04.603203 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:05.878561 kubelet[2985]: E0428 00:39:05.868227 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:39:06.609674 kubelet[2985]: E0428 00:39:06.583651 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:09.186782 kubelet[2985]: E0428 00:39:08.802994 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:39:10.154642 containerd[1643]: time="2026-04-28T00:39:10.147492065Z" level=error msg="get state for df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76" error="context deadline exceeded" Apr 28 00:39:10.372446 kubelet[2985]: E0428 00:39:09.893695 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:39:11.222724 containerd[1643]: time="2026-04-28T00:39:10.414512413Z" level=warning msg="unknown status" status=0 Apr 28 00:39:11.515784 containerd[1643]: time="2026-04-28T00:39:11.496658214Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 28 00:39:13.044212 containerd[1643]: time="2026-04-28T00:39:11.931783183Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:39:13.474881 containerd[1643]: time="2026-04-28T00:39:11.700524975Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 28 00:39:14.157995 kubelet[2985]: E0428 00:39:14.155271 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:39:16.465424 kubelet[2985]: E0428 00:39:16.464249 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:22.167205 kubelet[2985]: I0428 00:39:22.165241 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:39:23.162618 kubelet[2985]: E0428 00:39:23.152630 2985 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:39:23.715608 kubelet[2985]: E0428 00:39:22.895138 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:24.409800 kubelet[2985]: E0428 00:39:24.409258 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:39:25.275676 kubelet[2985]: E0428 00:39:25.182746 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:39:26.288357 kubelet[2985]: E0428 00:39:26.280478 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:39:29.541566 kubelet[2985]: E0428 00:39:29.536792 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:39.693776 kubelet[2985]: E0428 00:39:37.364544 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:39:47.949071 containerd[1643]: time="2026-04-28T00:39:47.697529525Z" level=error msg="get state for b7ee253259b6a3b4ae5b6bb3287bd672403cb39773b66829256f12470361c399" error="context deadline exceeded" Apr 28 00:39:48.995454 containerd[1643]: time="2026-04-28T00:39:48.160036096Z" level=warning msg="unknown status" status=0 Apr 28 00:39:49.344562 kubelet[2985]: E0428 00:39:46.952817 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:39:50.199296 containerd[1643]: time="2026-04-28T00:39:48.358194181Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 28 00:39:50.206126 kubelet[2985]: E0428 00:39:49.456141 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:39:53.045035 kubelet[2985]: E0428 00:39:53.044317 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:39:53.414592 containerd[1643]: time="2026-04-28T00:39:53.045992851Z" level=info msg="StartContainer for \"df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76\" returns successfully" Apr 28 00:39:53.816420 kubelet[2985]: E0428 00:39:53.047271 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:39:56.697594 kubelet[2985]: E0428 00:39:56.059690 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:39:57.977569 kubelet[2985]: E0428 00:39:56.393402 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:40:02.131305 kubelet[2985]: E0428 00:40:01.908277 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:40:02.952694 kubelet[2985]: E0428 00:40:02.951185 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:40:07.163429 kubelet[2985]: I0428 00:40:07.162813 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:40:10.185661 kubelet[2985]: E0428 00:40:10.151789 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:40:11.244605 kubelet[2985]: E0428 00:40:09.791376 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:40:12.495715 kubelet[2985]: E0428 00:40:07.161079 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:40:14.287081 kubelet[2985]: E0428 00:40:14.285523 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:40:14.396187 kubelet[2985]: E0428 00:40:14.287762 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:40:14.396187 kubelet[2985]: E0428 00:40:14.292274 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:40:14.396187 kubelet[2985]: E0428 00:40:14.292299 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:40:18.305193 kubelet[2985]: E0428 00:40:18.293005 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:40:25.068827 kubelet[2985]: E0428 00:40:24.028576 2985 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:40:29.057040 kubelet[2985]: E0428 00:40:27.908543 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:40:29.799031 kubelet[2985]: E0428 00:40:28.931499 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:40:45.108630 kubelet[2985]: E0428 00:40:40.992461 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:40:46.806882 kubelet[2985]: I0428 00:40:46.803713 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:40:47.819793 kubelet[2985]: E0428 00:40:47.818623 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node 
info: node \"localhost\" not found" Apr 28 00:40:48.589280 kubelet[2985]: E0428 00:40:48.579686 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:40:56.365784 kubelet[2985]: E0428 00:40:56.317680 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:40:57.635697 kubelet[2985]: E0428 00:40:57.632991 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:41:00.367079 kubelet[2985]: E0428 00:41:00.304587 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:41:00.367079 kubelet[2985]: E0428 00:40:53.843646 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 
localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:00.367079 kubelet[2985]: E0428 00:41:00.304744 2985 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:06.559054 kubelet[2985]: E0428 00:41:05.301527 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:41:06.835776 kubelet[2985]: E0428 00:41:01.585801 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:41:09.383145 kubelet[2985]: E0428 00:41:03.540970 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:09.575083 kubelet[2985]: E0428 00:41:09.403780 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:41:09.575083 kubelet[2985]: E0428 00:41:09.409879 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:41:12.660059 kubelet[2985]: E0428 00:41:12.659517 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:12.715716 kubelet[2985]: I0428 00:41:12.669663 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:41:18.039623 kubelet[2985]: E0428 00:41:16.763350 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:41:20.359633 kubelet[2985]: E0428 00:41:20.356945 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:41:23.663110 kubelet[2985]: E0428 00:41:23.661607 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="7s" Apr 28 00:41:34.693369 kubelet[2985]: E0428 00:41:34.682745 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:41:36.553292 kubelet[2985]: E0428 00:41:36.528671 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:41:37.900394 kubelet[2985]: E0428 00:41:36.528729 2985 event.go:368] "Unable to write event (may retry 
after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:41:39.249725 kubelet[2985]: E0428 00:41:39.244504 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:41:44.542712 kubelet[2985]: E0428 00:41:42.591060 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:41:45.620266 kubelet[2985]: E0428 00:41:45.617436 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:41:50.987153 kubelet[2985]: E0428 00:41:46.889126 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:41:57.951825 kubelet[2985]: E0428 00:41:57.798601 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 
00:42:05.972803 kubelet[2985]: I0428 00:42:05.481548 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:42:10.574398 kubelet[2985]: E0428 00:42:10.564944 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:42:13.981506 kubelet[2985]: E0428 00:42:13.814601 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Apr 28 00:42:14.738268 kubelet[2985]: E0428 00:42:09.387095 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:42:16.983499 kubelet[2985]: E0428 00:42:16.956220 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:42:19.151158 kubelet[2985]: E0428 00:42:19.146538 2985 reflector.go:205] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:42:23.887558 kubelet[2985]: E0428 00:42:22.892896 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:42:24.666339 kubelet[2985]: E0428 00:42:24.661963 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:42:25.162702 kubelet[2985]: E0428 00:42:25.162266 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:42:25.455250 kubelet[2985]: E0428 00:42:24.868586 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:42:35.962986 kubelet[2985]: E0428 00:42:35.668757 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:42:37.158733 kubelet[2985]: E0428 00:42:36.964747 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake 
timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:42:37.705571 kubelet[2985]: I0428 00:42:37.354472 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:42:38.776179 kubelet[2985]: E0428 00:42:38.770050 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:42:39.203605 kubelet[2985]: E0428 00:42:39.170722 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:42:40.570720 kubelet[2985]: E0428 00:42:40.563819 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:42:47.850583 kubelet[2985]: E0428 00:42:46.154448 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
interval="7s" Apr 28 00:42:49.611510 kubelet[2985]: E0428 00:42:49.605725 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:42:51.253191 kubelet[2985]: E0428 00:42:51.218386 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:42:56.958939 kubelet[2985]: E0428 00:42:56.957187 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:42:58.288716 containerd[1643]: time="2026-04-28T00:42:58.263481085Z" level=info msg="container event discarded" container=caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 type=CONTAINER_STOPPED_EVENT Apr 28 00:42:59.055263 kubelet[2985]: E0428 00:42:59.048584 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:00.244939 kubelet[2985]: E0428 00:43:00.238599 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:43:03.648590 kubelet[2985]: E0428 00:43:03.160150 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC 
m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:43:05.770204 kubelet[2985]: E0428 00:43:05.768412 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:43:05.839558 kubelet[2985]: I0428 00:43:05.839472 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:43:07.498583 kubelet[2985]: E0428 00:43:07.398460 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:43:08.042031 kubelet[2985]: E0428 00:43:08.037778 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:43:08.695813 containerd[1643]: time="2026-04-28T00:43:08.617751353Z" level=info msg="container event discarded" container=df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76 type=CONTAINER_CREATED_EVENT Apr 28 00:43:09.614707 kubelet[2985]: E0428 00:43:09.598182 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:43:10.490292 kubelet[2985]: E0428 00:43:10.378012 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:43:16.326107 kubelet[2985]: E0428 00:43:16.167507 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:43:18.852749 kubelet[2985]: E0428 00:43:18.851825 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:43:19.426069 kubelet[2985]: E0428 00:43:19.288574 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:43:19.627123 kubelet[2985]: E0428 00:43:19.615607 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:43:20.582464 kubelet[2985]: E0428 00:43:20.577316 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:43:23.342003 kubelet[2985]: E0428 00:43:23.267671 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:43:26.172595 kubelet[2985]: E0428 00:43:25.481713 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:43:28.569471 kubelet[2985]: E0428 00:43:28.279715 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:43:28.966516 kubelet[2985]: I0428 00:43:28.963647 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:43:31.900252 kubelet[2985]: E0428 00:43:31.835691 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:43:32.538415 kubelet[2985]: E0428 00:43:32.534701 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:43:41.323708 kubelet[2985]: E0428 00:43:41.313807 2985 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:43:43.938802 kubelet[2985]: E0428 00:43:43.937402 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:43:44.350566 kubelet[2985]: E0428 00:43:44.341169 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:43:51.289211 kubelet[2985]: E0428 00:43:51.286528 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:43:51.882397 containerd[1643]: time="2026-04-28T00:43:51.874652657Z" level=info msg="container event discarded" container=caffbcea64b01430281daf11f74b16b780527fc7f722e7944cefd42b64df8696 type=CONTAINER_DELETED_EVENT Apr 28 00:43:54.800481 kubelet[2985]: E0428 00:43:54.786629 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:43:56.874795 kubelet[2985]: E0428 00:43:56.815682 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:44:02.224639 kubelet[2985]: E0428 00:44:02.220403 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:44:04.556478 kubelet[2985]: E0428 00:44:04.545448 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:44:07.457209 kubelet[2985]: E0428 00:44:07.423819 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:44:12.668765 kubelet[2985]: I0428 00:44:12.654784 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:44:13.843253 kubelet[2985]: E0428 00:44:12.999705 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:44:16.767915 kubelet[2985]: E0428 00:44:16.765458 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 
00:44:18.454162 kubelet[2985]: E0428 00:44:18.453710 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:44:18.931223 kubelet[2985]: E0428 00:44:18.894842 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:44:18.931223 kubelet[2985]: E0428 00:44:18.923816 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:20.192084 kubelet[2985]: E0428 00:44:20.172234 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:44:20.656065 kubelet[2985]: E0428 00:44:20.628833 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:44:23.283704 kubelet[2985]: E0428 00:44:21.878397 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC 
m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:44:25.959574 kubelet[2985]: E0428 00:44:25.759717 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:44:26.355696 kubelet[2985]: E0428 00:44:26.337444 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:44:29.088947 kubelet[2985]: E0428 00:44:29.076780 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:44:33.309876 kubelet[2985]: E0428 00:44:32.987758 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:44:37.056336 kubelet[2985]: E0428 00:44:36.964257 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:44:37.856940 kubelet[2985]: I0428 00:44:37.837987 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:44:37.856940 kubelet[2985]: E0428 00:44:37.838661 2985 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:44:40.689288 containerd[1643]: time="2026-04-28T00:44:40.260703064Z" level=info msg="container event discarded" container=df253fc00d29b4e10e00c35167d41a8b9932e959e83ab21aaa88d08d77c56d76 type=CONTAINER_STARTED_EVENT Apr 28 00:44:41.252102 kubelet[2985]: E0428 00:44:41.251429 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:44:45.230307 kubelet[2985]: E0428 00:44:45.229578 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:44:46.197264 kubelet[2985]: E0428 00:44:46.191819 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:44:46.250386 kubelet[2985]: E0428 00:44:46.249675 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
00:44:48.102545 kubelet[2985]: E0428 00:44:48.094834 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:44:51.402630 kubelet[2985]: E0428 00:44:51.375632 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:44:55.239302 kubelet[2985]: I0428 00:44:55.238042 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:44:55.246748 kubelet[2985]: E0428 00:44:55.245822 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:44:56.022789 kubelet[2985]: E0428 00:44:56.016487 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:44:59.918382 kubelet[2985]: E0428 00:44:59.917716 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:45:00.999383 kubelet[2985]: E0428 00:45:00.998735 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:45:01.498201 kubelet[2985]: E0428 00:45:01.495553 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:45:07.076735 kubelet[2985]: E0428 00:45:07.068683 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:45:12.985214 kubelet[2985]: E0428 00:45:12.966553 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:45:14.457523 kubelet[2985]: E0428 00:45:12.877132 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:45:14.791808 kubelet[2985]: E0428 00:45:08.898414 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:45:22.000571 kubelet[2985]: E0428 00:45:21.355773 2985 reflector.go:205] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:45:24.031601 kubelet[2985]: E0428 00:45:24.029904 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:45:24.552706 kubelet[2985]: I0428 00:45:24.549209 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:45:35.972082 kubelet[2985]: E0428 00:45:35.971159 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:45:43.389612 kubelet[2985]: E0428 00:45:43.378606 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:45:44.886527 kubelet[2985]: E0428 00:45:41.048637 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:45:49.352667 kubelet[2985]: E0428 00:45:48.523574 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:45:57.600204 kubelet[2985]: E0428 00:45:57.594273 
2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:46:01.032185 kubelet[2985]: E0428 00:45:45.156630 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:01.032185 kubelet[2985]: E0428 00:46:01.030768 2985 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18aa5de504945db9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,LastTimestamp:2026-04-28 00:30:41.156808121 +0000 UTC m=+22.909716036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:01.557334 kubelet[2985]: E0428 00:46:01.544809 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:46:13.745172 kubelet[2985]: E0428 00:46:12.464477 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:46:15.191807 kubelet[2985]: E0428 00:46:14.065718 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:46:20.561652 kubelet[2985]: E0428 00:46:18.393354 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:46:30.076568 kubelet[2985]: E0428 00:46:27.279482 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:33.650630 kubelet[2985]: E0428 00:46:33.604489 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:34.546623 kubelet[2985]: E0428 00:46:34.531406 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:46:35.633734 kubelet[2985]: E0428 00:46:35.632947 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:37.865727 kubelet[2985]: E0428 00:46:37.858830 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:41.485614 kubelet[2985]: E0428 00:46:34.438208 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" 
event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:46:42.840702 kubelet[2985]: E0428 00:46:42.836939 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:46:44.470714 kubelet[2985]: E0428 00:46:44.415490 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:46:45.269282 kubelet[2985]: E0428 00:46:41.862681 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 00:46:51.845023 kubelet[2985]: E0428 00:46:51.843500 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:46:52.993881 kubelet[2985]: E0428 00:46:52.989920 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": 
net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:46:55.111186 kubelet[2985]: E0428 00:46:55.109461 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:46:58.382688 kubelet[2985]: E0428 00:46:58.382016 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:47:03.413815 kubelet[2985]: E0428 00:47:02.459745 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:47:05.351674 kubelet[2985]: E0428 00:47:04.403729 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:47:08.147803 kubelet[2985]: E0428 00:47:08.134526 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"localhost\" not found"
Apr 28 00:47:09.071222 kubelet[2985]: E0428 00:47:09.070268 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:47:09.191316 kubelet[2985]: I0428 00:47:08.878998 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:47:10.400169 kubelet[2985]: E0428 00:47:10.399573 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:47:10.661053 kubelet[2985]: E0428 00:47:10.292321 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:47:10.661053 kubelet[2985]: E0428 00:47:10.289800 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:47:10.856991 kubelet[2985]: E0428 00:47:10.838916 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:47:12.156533 kubelet[2985]: E0428 00:47:12.153457 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:47:15.210832 kubelet[2985]: E0428 00:47:15.209534 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:47:18.630642 kubelet[2985]: E0428 00:47:18.629327 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:47:24.485502 kubelet[2985]: E0428 00:47:23.553977 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:47:25.178419 kubelet[2985]: E0428 00:47:25.171751 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:47:34.453067 kubelet[2985]: E0428 00:47:33.828424 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:47:49.958508 kubelet[2985]: E0428 00:47:49.957086 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:47:49.958508 kubelet[2985]: E0428 00:47:29.095692 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:47:52.707649 kubelet[2985]: E0428 00:47:48.653829 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:48:00.382515 kubelet[2985]: E0428 00:47:59.097629 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:48:09.576321 kubelet[2985]: E0428 00:48:08.803323 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:48:23.515249 kubelet[2985]: E0428 00:48:23.492786 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:48:24.095052 kubelet[2985]: E0428 00:48:24.083656 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:48:25.876305 kubelet[2985]: E0428 00:48:24.275627 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:48:29.073059 kubelet[2985]: E0428 00:48:28.575392 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:48:29.999674 kubelet[2985]: I0428 00:48:29.993356 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:48:31.548410 kubelet[2985]: E0428 00:48:30.811102 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:48:32.210941 kubelet[2985]: E0428 00:48:24.710214 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:48:38.104722 kubelet[2985]: E0428 00:48:38.094757 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:48:49.670320 kubelet[2985]: E0428 00:48:49.666460 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:48:52.461666 kubelet[2985]: E0428 00:48:52.459807 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:48:53.106306 kubelet[2985]: E0428 00:48:50.916308 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:48:54.857601 kubelet[2985]: E0428 00:48:52.888492 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:48:59.486553 kubelet[2985]: E0428 00:48:59.073316 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:49:04.008577 kubelet[2985]: E0428 00:49:03.994061 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:49:08.953941 kubelet[2985]: E0428 00:49:05.372266 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:49:09.572003 kubelet[2985]: E0428 00:49:09.467723 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:49:14.556573 kubelet[2985]: E0428 00:49:13.924207 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:49:18.851632 kubelet[2985]: E0428 00:49:18.801718 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:49:21.118090 kubelet[2985]: E0428 00:49:20.264779 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:49:22.288587 kubelet[2985]: E0428 00:49:22.282537 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:49:23.286203 kubelet[2985]: E0428 00:49:23.262065 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:49:23.286203 kubelet[2985]: E0428 00:49:23.262975 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:49:23.286203 kubelet[2985]: E0428 00:49:23.263031 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:49:26.378783 kubelet[2985]: E0428 00:49:25.394755 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:49:28.538464 kubelet[2985]: E0428 00:49:28.535566 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:49:30.295039 kubelet[2985]: E0428 00:49:29.788899 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:49:31.038890 kubelet[2985]: I0428 00:49:31.030965 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:49:41.991628 kubelet[2985]: E0428 00:49:41.986400 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:49:46.294359 kubelet[2985]: E0428 00:49:44.075111 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:49:47.957568 kubelet[2985]: E0428 00:49:45.977703 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:49:52.815532 kubelet[2985]: E0428 00:49:43.941661 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:49:58.119334 kubelet[2985]: E0428 00:49:58.105512 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:49:59.699595 kubelet[2985]: E0428 00:49:59.671674 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:50:00.681255 kubelet[2985]: E0428 00:49:58.176572 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:50:12.496590 kubelet[2985]: E0428 00:50:12.485529 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:50:15.410537 kubelet[2985]: E0428 00:50:13.866738 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:50:17.500533 kubelet[2985]: E0428 00:50:17.475528 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:50:24.158736 kubelet[2985]: E0428 00:50:24.120164 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:50:29.880722 kubelet[2985]: E0428 00:50:29.866130 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:50:38.386637 kubelet[2985]: E0428 00:50:38.384699 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:50:39.074533 kubelet[2985]: E0428 00:50:37.211623 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:50:51.060611 kubelet[2985]: E0428 00:50:40.277394 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:50:52.724816 kubelet[2985]: E0428 00:50:52.616677 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:50:53.387533 kubelet[2985]: E0428 00:50:52.960740 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 00:50:55.666686 kubelet[2985]: E0428 00:50:55.651640 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:50:56.548761 kubelet[2985]: E0428 00:50:55.491715 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:50:58.720971 kubelet[2985]: E0428 00:50:58.655205 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:50:59.581264 kubelet[2985]: E0428 00:50:59.570135 2985 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down"
Apr 28 00:50:59.581264 kubelet[2985]: I0428 00:50:59.571215 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:51:00.798450 kubelet[2985]: E0428 00:51:00.796059 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:51:01.038323 kubelet[2985]: E0428 00:51:01.034648 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:51:01.174378 kubelet[2985]: E0428 00:51:01.170481 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:51:01.564762 kubelet[2985]: E0428 00:51:01.564229 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:51:01.654642 kubelet[2985]: E0428 00:51:01.583813 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:51:01.994734 kubelet[2985]: E0428 00:51:01.994115 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:51:01.994734 kubelet[2985]: E0428 00:51:01.995153 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:51:04.850895 kubelet[2985]: E0428 00:51:04.850133 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:51:06.095373 kubelet[2985]: E0428 00:51:06.093781 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:51:10.163727 kubelet[2985]: E0428 00:51:10.161427 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:51:10.651641 kubelet[2985]: E0428 00:51:10.649280 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:51:13.044756 kubelet[2985]: E0428 00:51:12.670256 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:51:14.878771 kubelet[2985]: E0428 00:51:14.877005 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:51:18.075260 kubelet[2985]: E0428 00:51:18.066257 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 28 00:51:19.488496 kubelet[2985]: I0428 00:51:19.486166 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:51:21.319810 kubelet[2985]: E0428 00:51:21.261572 2985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:51:24.896342 kubelet[2985]: E0428 00:51:24.890650 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:51:26.340051 kubelet[2985]: E0428 00:51:26.270540 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:51:29.620058 kubelet[2985]: E0428 00:51:29.595405 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 28 00:51:33.167532 kubelet[2985]: E0428 00:51:33.164138 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:51:34.996763 kubelet[2985]: E0428 00:51:34.988244 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:51:35.258341 kubelet[2985]: E0428 00:51:35.256875 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 28 00:51:36.891233 kubelet[2985]: I0428 00:51:36.890617 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 28 00:51:45.033568 kubelet[2985]: E0428 00:51:45.032931 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:51:50.966268 kubelet[2985]: E0428 00:51:50.961265 2985 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 28 00:51:51.040095 kubelet[2985]: E0428 00:51:51.004813 2985 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa5de5049471e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,LastTimestamp:2026-04-28 00:30:41.156813281 +0000 UTC m=+22.909721205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:51:51.433611 kubelet[2985]: I0428 00:51:51.424462 2985 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 28 00:51:51.433611 kubelet[2985]: E0428 00:51:51.424777 2985 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 28 00:51:53.769263 kubelet[2985]: E0428 00:51:53.767305 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 28 00:51:53.797811 kubelet[2985]: E0428 00:51:53.720230 2985 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa5de504922aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:30:41.156663972 +0000 UTC m=+22.909571887,LastTimestamp:2026-04-28 00:30:56.684791968 +0000 UTC m=+38.437699877,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 00:51:53.960811 kubelet[2985]: E0428 00:51:53.960008 2985 kubelet_node_status.go:398] "Node not becoming ready in time after startup"
Apr 28 00:51:55.484036 kubelet[2985]: E0428 00:51:55.446600 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:51:55.768276 kubelet[2985]: E0428 00:51:55.499382 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:02.398335 kubelet[2985]: E0428 00:52:02.278715 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:03.894252 kubelet[2985]: E0428 00:52:03.893658 2985 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 28 00:52:05.762727 kubelet[2985]: E0428 00:52:05.761014 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 28 00:52:07.857295 kubelet[2985]: E0428 00:52:07.854792 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:08.970890 kubelet[2985]: E0428 00:52:08.965645 2985 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 28 00:52:08.982631 kubelet[2985]: E0428 00:52:08.975958 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:52:09.733651 kubelet[2985]: I0428 00:52:09.508673 2985 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 28 00:52:10.356550 kubelet[2985]: I0428 00:52:10.351777 2985 apiserver.go:52] "Watching apiserver"
Apr 28 00:52:11.425768 kubelet[2985]: I0428 00:52:11.421543 2985 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 28 00:52:12.937708 kubelet[2985]: E0428 00:52:12.935093 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:13.755082 kubelet[2985]: E0428 00:52:13.722784 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.244s"
Apr 28 00:52:14.168761 kubelet[2985]: I0428 00:52:14.125252 2985 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 00:52:15.405150 kubelet[2985]: E0428 00:52:15.404474 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.133s"
Apr 28 00:52:15.630940 kubelet[2985]: I0428 00:52:15.630409 2985 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 00:52:15.847957 kubelet[2985]: E0428 00:52:15.846386 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:52:16.667300 kubelet[2985]: I0428 00:52:16.664604 2985 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 28 00:52:16.945734 kubelet[2985]: E0428 00:52:16.928378 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:52:16.945734 kubelet[2985]: E0428 00:52:16.930192 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:52:17.371768 kubelet[2985]: E0428 00:52:17.363581 2985 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 28 00:52:17.496564 kubelet[2985]: I0428 00:52:17.466697 2985 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 28 00:52:18.015401 kubelet[2985]: E0428 00:52:18.015071 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:52:18.728257 kubelet[2985]: E0428 00:52:18.726802 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:18.728257 kubelet[2985]: E0428 00:52:18.704306 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:52:19.372276 kubelet[2985]: E0428 00:52:19.323562 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.258s"
Apr 28 00:52:19.372276 kubelet[2985]: E0428 00:52:19.368743 2985 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 28 00:52:21.959667 kubelet[2985]: I0428 00:52:21.959006 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.958901625 podStartE2EDuration="6.958901625s" podCreationTimestamp="2026-04-28 00:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:21.409990606 +0000 UTC m=+1323.162898520" watchObservedRunningTime="2026-04-28 00:52:21.958901625 +0000 UTC m=+1323.711809541"
Apr 28 00:52:23.183193 kubelet[2985]: I0428 00:52:23.180771 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.174450044 podStartE2EDuration="7.174450044s" podCreationTimestamp="2026-04-28 00:52:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:21.980771482 +0000 UTC m=+1323.733679391" watchObservedRunningTime="2026-04-28 00:52:23.174450044 +0000 UTC m=+1324.927357961"
Apr 28 00:52:23.968321 kubelet[2985]: E0428 00:52:23.966444 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:29.636603 kubelet[2985]: E0428 00:52:29.633665 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:29.670622 kubelet[2985]: E0428 00:52:29.654584 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.56s"
Apr 28 00:52:33.548703 kubelet[2985]: E0428 00:52:33.548400 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.464s"
Apr 28 00:52:35.680465 kubelet[2985]: E0428 00:52:35.671273 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:39.888671 kubelet[2985]: E0428 00:52:39.875388 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.722s"
Apr 28 00:52:42.421544 kubelet[2985]: E0428 00:52:42.396001 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:42.579552 kubelet[2985]: E0428 00:52:42.572406 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.255s"
Apr 28 00:52:44.537349 kubelet[2985]: E0428 00:52:44.535495 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.582s"
Apr 28 00:52:48.094750 kubelet[2985]: E0428 00:52:48.085722 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.974s"
Apr 28 00:52:48.420973 kubelet[2985]: E0428 00:52:48.418122 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:49.369678 kubelet[2985]: E0428 00:52:49.369326 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.283s"
Apr 28 00:52:52.414353 kubelet[2985]: E0428 00:52:52.414003 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.313s"
Apr 28 00:52:53.996416 kubelet[2985]: E0428 00:52:53.996056 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.497s"
Apr 28 00:52:54.744035 kubelet[2985]: E0428 00:52:54.741028 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:52:57.814555 kubelet[2985]: E0428 00:52:57.809689 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.72s"
Apr 28 00:53:00.260607 kubelet[2985]: E0428 00:53:00.254275 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.444s"
Apr 28 00:53:02.579789 kubelet[2985]: E0428 00:53:02.567477 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:53:05.669805 kubelet[2985]: E0428 00:53:05.663345 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.398s"
Apr 28 00:53:09.537256 kubelet[2985]: E0428 00:53:09.536743 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:53:09.922141 kubelet[2985]: E0428 00:53:09.536973 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.571s"
Apr 28 00:53:16.446361 kubelet[2985]: E0428 00:53:16.387507 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady
message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:19.589264 kubelet[2985]: E0428 00:53:19.579451 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.325s" Apr 28 00:53:24.142210 kubelet[2985]: E0428 00:53:24.083223 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:27.660033 kubelet[2985]: E0428 00:53:27.659163 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.885s" Apr 28 00:53:29.679073 kubelet[2985]: E0428 00:53:29.624333 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:30.504821 kubelet[2985]: E0428 00:53:30.504275 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.269s" Apr 28 00:53:32.603707 kubelet[2985]: E0428 00:53:32.591774 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.064s" Apr 28 00:53:34.351698 kubelet[2985]: E0428 00:53:34.351301 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.65s" Apr 28 00:53:35.037033 kubelet[2985]: E0428 00:53:35.036362 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:35.526510 kubelet[2985]: E0428 00:53:35.525887 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:42.804023 
kubelet[2985]: E0428 00:53:42.802555 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.505s" Apr 28 00:53:43.359174 kubelet[2985]: E0428 00:53:43.358606 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:44.367716 kubelet[2985]: E0428 00:53:44.366046 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.563s" Apr 28 00:53:45.502001 kubelet[2985]: E0428 00:53:45.501563 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.097s" Apr 28 00:53:45.749750 kubelet[2985]: E0428 00:53:45.503260 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:45.749750 kubelet[2985]: E0428 00:53:45.503383 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:48.099998 kubelet[2985]: E0428 00:53:48.085258 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.97s" Apr 28 00:53:49.498346 kubelet[2985]: E0428 00:53:49.491490 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:54.863336 kubelet[2985]: E0428 00:53:54.861383 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.556s" Apr 28 00:53:56.373716 kubelet[2985]: E0428 00:53:56.370278 2985 kubelet.go:3012] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:53:57.842269 kubelet[2985]: E0428 00:53:57.812702 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:54:01.170490 kubelet[2985]: E0428 00:54:01.160568 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.081s" Apr 28 00:54:03.637630 kubelet[2985]: E0428 00:54:03.637151 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:03.866305 kubelet[2985]: E0428 00:54:03.865574 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.67s" Apr 28 00:54:05.451071 kubelet[2985]: E0428 00:54:05.428438 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.324s" Apr 28 00:54:09.967579 kubelet[2985]: E0428 00:54:09.966509 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:10.508394 kubelet[2985]: E0428 00:54:10.491532 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.293s" Apr 28 00:54:17.052060 kubelet[2985]: E0428 00:54:16.884480 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:22.566562 kubelet[2985]: E0428 00:54:22.555218 2985 
kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.063s" Apr 28 00:54:23.367477 kubelet[2985]: E0428 00:54:23.363249 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:25.217596 kubelet[2985]: E0428 00:54:25.126589 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.571s" Apr 28 00:54:27.270536 kubelet[2985]: E0428 00:54:27.270124 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.084s" Apr 28 00:54:28.820663 kubelet[2985]: E0428 00:54:28.816652 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:30.173798 kubelet[2985]: E0428 00:54:30.171017 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.714s" Apr 28 00:54:34.392696 kubelet[2985]: E0428 00:54:34.383246 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.153s" Apr 28 00:54:35.158520 kubelet[2985]: E0428 00:54:35.153987 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:43.801305 kubelet[2985]: E0428 00:54:43.589754 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:46.582695 kubelet[2985]: E0428 00:54:46.410673 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="11.763s" Apr 28 00:54:50.491614 kubelet[2985]: E0428 00:54:50.486493 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:56.281957 kubelet[2985]: E0428 00:54:56.275957 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:57.307664 kubelet[2985]: E0428 00:54:57.232703 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.633s" Apr 28 00:54:58.751025 kubelet[2985]: E0428 00:54:58.750671 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.39s" Apr 28 00:54:58.962211 kubelet[2985]: E0428 00:54:58.873375 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:02.263504 kubelet[2985]: E0428 00:55:02.260609 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.431s" Apr 28 00:55:02.687859 kubelet[2985]: E0428 00:55:02.615917 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:04.346345 kubelet[2985]: E0428 00:55:04.345432 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.06s" Apr 28 00:55:08.153557 kubelet[2985]: E0428 00:55:08.151502 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.794s" Apr 28 00:55:09.356519 kubelet[2985]: E0428 
00:55:09.353572 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:11.593820 kubelet[2985]: E0428 00:55:11.571117 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.312s" Apr 28 00:55:14.100534 kubelet[2985]: E0428 00:55:14.096018 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.434s" Apr 28 00:55:15.346565 kubelet[2985]: E0428 00:55:15.346097 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:15.814805 kubelet[2985]: E0428 00:55:15.814027 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.601s" Apr 28 00:55:15.963226 kubelet[2985]: E0428 00:55:15.962835 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:16.112750 kubelet[2985]: E0428 00:55:16.110231 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:19.986208 kubelet[2985]: E0428 00:55:19.985662 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:55:21.302131 kubelet[2985]: E0428 00:55:21.301307 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.247s" Apr 28 00:55:21.597788 
kubelet[2985]: E0428 00:55:21.586482 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:24.565406 kubelet[2985]: E0428 00:55:24.565023 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.977s" Apr 28 00:55:26.004802 kubelet[2985]: E0428 00:55:25.912739 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s" Apr 28 00:55:28.586982 kubelet[2985]: E0428 00:55:28.571578 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:28.745400 kubelet[2985]: E0428 00:55:28.744414 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.531s" Apr 28 00:55:30.261484 kubelet[2985]: E0428 00:55:30.253970 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:55:35.164671 kubelet[2985]: E0428 00:55:35.164073 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:35.504711 kubelet[2985]: E0428 00:55:35.502601 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.758s" Apr 28 00:55:37.468185 kubelet[2985]: E0428 00:55:37.464059 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.866s" Apr 28 00:55:39.662535 kubelet[2985]: 
E0428 00:55:39.551974 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.955s" Apr 28 00:55:41.241232 kubelet[2985]: E0428 00:55:41.231329 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:43.102110 kubelet[2985]: E0428 00:55:43.099215 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.426s" Apr 28 00:55:44.140916 kubelet[2985]: E0428 00:55:44.139108 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s" Apr 28 00:55:47.130376 kubelet[2985]: E0428 00:55:47.129220 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:52.346496 kubelet[2985]: E0428 00:55:52.345994 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:56.104539 kubelet[2985]: E0428 00:55:56.101190 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.047s" Apr 28 00:55:57.533459 kubelet[2985]: E0428 00:55:57.532073 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:55:59.587961 kubelet[2985]: E0428 00:55:59.576175 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.468s" Apr 28 00:56:03.614269 kubelet[2985]: E0428 00:56:03.613830 2985 kubelet.go:3012] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:03.614269 kubelet[2985]: E0428 00:56:03.614222 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.021s" Apr 28 00:56:05.945558 kubelet[2985]: E0428 00:56:05.942412 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.328s" Apr 28 00:56:09.489632 kubelet[2985]: E0428 00:56:09.488891 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:09.816188 kubelet[2985]: E0428 00:56:09.735714 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.59s" Apr 28 00:56:11.202588 kubelet[2985]: E0428 00:56:11.199425 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.135s" Apr 28 00:56:14.723524 kubelet[2985]: E0428 00:56:14.723035 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:14.782440 kubelet[2985]: E0428 00:56:14.770196 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.541s" Apr 28 00:56:16.114135 kubelet[2985]: E0428 00:56:15.993448 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.063s" Apr 28 00:56:17.274421 kubelet[2985]: E0428 00:56:17.269015 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.158s" Apr 28 00:56:17.497134 kubelet[2985]: E0428 00:56:17.495778 2985 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:20.151501 kubelet[2985]: E0428 00:56:20.151058 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:20.179916 kubelet[2985]: E0428 00:56:20.152420 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.893s" Apr 28 00:56:21.472585 kubelet[2985]: E0428 00:56:21.460184 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.304s" Apr 28 00:56:28.273755 kubelet[2985]: E0428 00:56:27.806640 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:28.419763 kubelet[2985]: E0428 00:56:28.419464 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.216s" Apr 28 00:56:28.421159 systemd[1]: cri-containerd-167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2.scope: Deactivated successfully. Apr 28 00:56:28.454000 audit: BPF prog-id=74 op=UNLOAD Apr 28 00:56:28.454000 audit: BPF prog-id=79 op=UNLOAD Apr 28 00:56:28.821881 kernel: kauditd_printk_skb: 12 callbacks suppressed Apr 28 00:56:28.422051 systemd[1]: cri-containerd-167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2.scope: Consumed 2min 19.932s CPU time, 24M memory peak. 
Apr 28 00:56:28.854045 kernel: audit: type=1334 audit(1777337788.454:411): prog-id=74 op=UNLOAD Apr 28 00:56:28.866637 kernel: audit: type=1334 audit(1777337788.454:412): prog-id=79 op=UNLOAD Apr 28 00:56:33.444805 containerd[1643]: time="2026-04-28T00:56:33.416637181Z" level=info msg="received container exit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 00:56:36.373442 kubelet[2985]: E0428 00:56:36.370534 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:56:37.092730 kubelet[2985]: E0428 00:56:37.081734 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:44.757651 containerd[1643]: time="2026-04-28T00:56:44.418979317Z" level=error msg="ttrpc: received message on inactive stream" stream=51 Apr 28 00:56:45.272800 containerd[1643]: time="2026-04-28T00:56:44.869472860Z" level=error msg="failed to handle container TaskExit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" error="failed to stop container: context deadline exceeded" Apr 28 00:56:45.468647 containerd[1643]: time="2026-04-28T00:56:45.352693230Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 28 00:56:47.557632 containerd[1643]: time="2026-04-28T00:56:47.467095781Z" level=info msg="TaskExit event 
container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 00:56:57.967486 kubelet[2985]: E0428 00:56:57.906579 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:56:59.161818 containerd[1643]: time="2026-04-28T00:56:59.023490364Z" level=error msg="get state for 68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23" error="context deadline exceeded" Apr 28 00:56:59.806825 containerd[1643]: time="2026-04-28T00:56:59.279333300Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 28 00:57:00.097704 containerd[1643]: time="2026-04-28T00:56:59.716081197Z" level=warning msg="unknown status" status=0 Apr 28 00:57:02.768148 kubelet[2985]: E0428 00:57:02.761926 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:57:03.696750 containerd[1643]: time="2026-04-28T00:57:03.695216374Z" level=error msg="Failed to handle backOff event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079} for 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 00:57:05.003791 containerd[1643]: time="2026-04-28T00:57:04.521635611Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Apr 28 00:57:05.453133 containerd[1643]: time="2026-04-28T00:57:05.296461291Z" level=error msg="ttrpc: received 
message on inactive stream" stream=59 Apr 28 00:57:06.497531 containerd[1643]: time="2026-04-28T00:57:06.466650087Z" level=info msg="TaskExit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 00:57:12.776369 kubelet[2985]: E0428 00:57:11.519672 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:57:14.195654 kubelet[2985]: E0428 00:57:14.099670 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:57:21.258430 containerd[1643]: time="2026-04-28T00:57:20.757377813Z" level=error msg="Failed to handle backOff event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079} for 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2" error="failed to handle container TaskExit event: failed to stop container: unknown error after kill: context deadline exceeded: " Apr 28 00:57:21.367261 containerd[1643]: time="2026-04-28T00:57:21.098678507Z" level=error msg="ttrpc: received message on inactive stream" stream=67 Apr 28 00:57:26.914090 containerd[1643]: time="2026-04-28T00:57:26.701111248Z" level=info msg="TaskExit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 00:57:28.077748 
kubelet[2985]: E0428 00:57:28.064035 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:57:30.553070 kubelet[2985]: E0428 00:57:30.516574 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m2.097s" Apr 28 00:57:30.822284 kubelet[2985]: E0428 00:57:30.554916 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:57:34.400523 kubelet[2985]: E0428 00:57:33.702785 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.20:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-localhost.18aa5e12dfa56da0\": stream error: stream ID 321; INTERNAL_ERROR; received from peer" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5e12dfa56da0 kube-system 282 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:82faa9ca0765979bc0118d46e6420ed8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:33:58 +0000 UTC,LastTimestamp:2026-04-28 00:37:14.857118497 +0000 UTC m=+416.610026409,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:37.199781 containerd[1643]: time="2026-04-28T00:57:37.179754774Z" level=error msg="Failed to handle backOff 
event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079} for 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 00:57:37.979384 containerd[1643]: time="2026-04-28T00:57:37.955516836Z" level=error msg="ttrpc: received message on inactive stream" stream=73 Apr 28 00:57:37.979384 containerd[1643]: time="2026-04-28T00:57:37.961349048Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 28 00:57:38.563788 kubelet[2985]: E0428 00:57:38.555797 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:57:39.170562 kubelet[2985]: E0428 00:57:39.108394 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:57:39.329474 kubelet[2985]: I0428 00:57:39.301769 2985 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 28 00:57:45.562658 kubelet[2985]: E0428 00:57:45.332491 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:57:46.474084 containerd[1643]: time="2026-04-28T00:57:46.470316227Z" level=info msg="TaskExit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 
00:57:48.268723 kubelet[2985]: E0428 00:57:48.267958 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.025s" Apr 28 00:57:50.299272 kubelet[2985]: E0428 00:57:49.668828 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Apr 28 00:57:51.140573 kubelet[2985]: E0428 00:57:51.133701 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:53.557373 kubelet[2985]: E0428 00:57:53.553487 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:53.708254 containerd[1643]: time="2026-04-28T00:57:53.692264485Z" level=info msg="StopContainer for \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" with timeout 30 (s)" Apr 28 00:57:54.353297 kubelet[2985]: E0428 00:57:54.289701 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:57:54.501698 containerd[1643]: time="2026-04-28T00:57:54.308826354Z" level=info msg="Stop container \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" with signal terminated" Apr 28 00:57:56.664105 containerd[1643]: time="2026-04-28T00:57:56.663263939Z" level=error msg="failed to delete task" error="context deadline exceeded" id=167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 Apr 28 00:57:56.922629 containerd[1643]: time="2026-04-28T00:57:56.766039184Z" level=error msg="Failed to handle backOff event 
container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079} for 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 00:57:57.517094 containerd[1643]: time="2026-04-28T00:57:57.263393374Z" level=error msg="failed to drain init process 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 00:57:58.296133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2-rootfs.mount: Deactivated successfully. Apr 28 00:57:58.503291 containerd[1643]: time="2026-04-28T00:57:58.501957271Z" level=error msg="ttrpc: received message on inactive stream" stream=97 Apr 28 00:58:01.480741 kubelet[2985]: E0428 00:58:01.479956 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.291s" Apr 28 00:58:01.869025 kubelet[2985]: E0428 00:58:01.862720 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 28 00:58:01.994791 kubelet[2985]: E0428 00:58:01.864488 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:01.994791 kubelet[2985]: E0428 00:58:01.981737 2985 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:57:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:57:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:57:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T00:57:50Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.20:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 28 00:58:07.883033 kubelet[2985]: E0428 00:58:07.869786 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:08.095518 kubelet[2985]: E0428 00:58:07.966763 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.468s" Apr 28 00:58:12.352333 kubelet[2985]: E0428 00:58:12.345346 2985 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.20:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 00:58:13.699112 containerd[1643]: time="2026-04-28T00:58:13.610604617Z" level=info msg="TaskExit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 00:58:14.476514 kubelet[2985]: E0428 00:58:14.474367 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 
00:58:14.982996 kubelet[2985]: E0428 00:58:14.982543 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.885s" Apr 28 00:58:16.450343 kubelet[2985]: E0428 00:58:16.444044 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.461s" Apr 28 00:58:19.397668 kubelet[2985]: E0428 00:58:19.373993 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.927s" Apr 28 00:58:21.421645 kubelet[2985]: E0428 00:58:21.270237 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 00:58:21.813101 kubelet[2985]: E0428 00:58:21.693830 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:23.374723 kubelet[2985]: E0428 00:58:23.366183 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.872s" Apr 28 00:58:23.757542 containerd[1643]: time="2026-04-28T00:58:23.753698051Z" level=error msg="failed to delete task" error="context deadline exceeded" id=167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 Apr 28 00:58:24.165215 containerd[1643]: time="2026-04-28T00:58:24.120873403Z" level=error msg="Failed to handle backOff event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079} for 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2" error="failed to handle container TaskExit event: failed to stop 
container: failed to delete task: context deadline exceeded" Apr 28 00:58:25.112822 containerd[1643]: time="2026-04-28T00:58:25.109539281Z" level=error msg="ttrpc: received message on inactive stream" stream=113 Apr 28 00:58:26.122895 kubelet[2985]: E0428 00:58:26.116486 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.608s" Apr 28 00:58:28.312726 kubelet[2985]: E0428 00:58:28.289672 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:28.370953 containerd[1643]: time="2026-04-28T00:58:28.317570588Z" level=info msg="Kill container \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\"" Apr 28 00:58:32.274095 kubelet[2985]: E0428 00:58:32.273764 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.151s" Apr 28 00:58:34.608630 kubelet[2985]: E0428 00:58:34.595939 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.296s" Apr 28 00:58:34.878174 kubelet[2985]: E0428 00:58:34.868946 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:38.700648 kubelet[2985]: E0428 00:58:38.685648 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.084s" Apr 28 00:58:40.657956 kubelet[2985]: E0428 00:58:40.656182 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:42.034240 kubelet[2985]: E0428 00:58:41.868281 2985 kubelet.go:2618] "Housekeeping took longer than 
expected" err="housekeeping took too long" expected="1s" actual="3.178s" Apr 28 00:58:43.701736 kubelet[2985]: E0428 00:58:43.700573 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.664s" Apr 28 00:58:46.669436 kubelet[2985]: E0428 00:58:46.668362 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:48.257525 kubelet[2985]: E0428 00:58:48.256271 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.266s" Apr 28 00:58:52.475546 kubelet[2985]: E0428 00:58:52.470567 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.213s" Apr 28 00:58:53.721194 kubelet[2985]: E0428 00:58:53.720523 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:56.582252 containerd[1643]: time="2026-04-28T00:58:56.549469013Z" level=info msg="TaskExit event container_id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" id:\"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" pid:3208 exit_status:1 exited_at:{seconds:1777337790 nanos:912694079}" Apr 28 00:58:58.702342 containerd[1643]: time="2026-04-28T00:58:58.701342563Z" level=error msg="failed to delete task" error="rpc error: code = NotFound desc = container not created: not found" id=167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 Apr 28 00:58:59.134334 kubelet[2985]: E0428 00:58:59.103077 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.402s" Apr 28 00:59:00.557555 containerd[1643]: time="2026-04-28T00:59:00.404789903Z" level=info 
msg="Ensure that container 167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2 in task-service has been cleanup successfully" Apr 28 00:59:01.640247 kubelet[2985]: E0428 00:59:01.616757 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:02.684162 containerd[1643]: time="2026-04-28T00:59:02.683525147Z" level=info msg="StopContainer for \"167724ee3dcd79da3fa03ec163c3febd8e528d3e62ed1439ff8689ff452c62a2\" returns successfully" Apr 28 00:59:02.815388 kubelet[2985]: E0428 00:59:02.813242 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.669s" Apr 28 00:59:03.099425 kubelet[2985]: E0428 00:59:03.070195 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:04.133981 kubelet[2985]: E0428 00:59:04.119707 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:04.643463 containerd[1643]: time="2026-04-28T00:59:04.643026034Z" level=info msg="CreateContainer within sandbox \"68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23\" for container name:\"kube-controller-manager\" attempt:1" Apr 28 00:59:04.670610 kubelet[2985]: E0428 00:59:04.670175 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.607s" Apr 28 00:59:07.113909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077552666.mount: Deactivated successfully. 
Apr 28 00:59:07.415373 kubelet[2985]: E0428 00:59:07.390894 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:07.633058 containerd[1643]: time="2026-04-28T00:59:07.627809279Z" level=info msg="Container 66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:59:07.634524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049869815.mount: Deactivated successfully. Apr 28 00:59:10.567472 containerd[1643]: time="2026-04-28T00:59:10.566200819Z" level=info msg="CreateContainer within sandbox \"68425dca7e328e676abf88b4e1f3b26c28554391eef3e12dac3404861636ff23\" for name:\"kube-controller-manager\" attempt:1 returns container id \"66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b\"" Apr 28 00:59:11.678931 containerd[1643]: time="2026-04-28T00:59:11.675601418Z" level=info msg="StartContainer for \"66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b\"" Apr 28 00:59:12.730165 kubelet[2985]: E0428 00:59:12.725058 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.055s" Apr 28 00:59:12.806055 kubelet[2985]: I0428 00:59:12.796765 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=418.794762217 podStartE2EDuration="6m58.794762217s" podCreationTimestamp="2026-04-28 00:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:23.326297791 +0000 UTC m=+1325.079205702" watchObservedRunningTime="2026-04-28 00:59:12.794762217 +0000 UTC m=+1734.547670125" Apr 28 00:59:13.685545 containerd[1643]: time="2026-04-28T00:59:13.682952703Z" level=info msg="connecting to shim 
66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b" address="unix:///run/containerd/s/a8f19aff4c54a4bf8e95907f2a3356235c31cb787b6c243f084642a11761d204" protocol=ttrpc version=3 Apr 28 00:59:14.277627 kubelet[2985]: E0428 00:59:14.230328 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:15.868917 kubelet[2985]: E0428 00:59:15.866629 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.141s" Apr 28 00:59:16.196793 kubelet[2985]: E0428 00:59:16.196557 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:20.092046 kubelet[2985]: E0428 00:59:20.090243 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:22.091777 systemd[1]: Started cri-containerd-66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b.scope - libcontainer container 66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b. 
Apr 28 00:59:24.953753 kubelet[2985]: E0428 00:59:24.950641 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.083s" Apr 28 00:59:26.096000 audit: BPF prog-id=94 op=LOAD Apr 28 00:59:26.120506 kernel: audit: type=1334 audit(1777337966.096:413): prog-id=94 op=LOAD Apr 28 00:59:26.653000 audit: BPF prog-id=95 op=LOAD Apr 28 00:59:26.653000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000f4240 a2=98 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.788274 kernel: audit: type=1334 audit(1777337966.653:414): prog-id=95 op=LOAD Apr 28 00:59:26.789902 kernel: audit: type=1300 audit(1777337966.653:414): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000f4240 a2=98 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.789958 kernel: audit: type=1327 audit(1777337966.653:414): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.789000 audit: BPF prog-id=95 op=UNLOAD Apr 28 00:59:26.789000 audit[3492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3492 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.798711 kernel: audit: type=1334 audit(1777337966.789:415): prog-id=95 op=UNLOAD Apr 28 00:59:26.800326 kernel: audit: type=1300 audit(1777337966.789:415): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.807089 kernel: audit: type=1327 audit(1777337966.789:415): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.790000 audit: BPF prog-id=96 op=LOAD Apr 28 00:59:26.790000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000f4490 a2=98 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.814428 kubelet[2985]: E0428 00:59:26.812023 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:26.790000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.790000 audit: BPF prog-id=97 op=LOAD Apr 28 00:59:26.790000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000f4220 a2=98 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.790000 audit: BPF prog-id=97 op=UNLOAD Apr 28 00:59:26.790000 audit[3492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.790000 audit: BPF prog-id=96 op=UNLOAD Apr 28 00:59:26.790000 audit[3492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 
00:59:26.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.790000 audit: BPF prog-id=98 op=LOAD Apr 28 00:59:26.790000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000f46f0 a2=98 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:26.969354 kernel: audit: type=1334 audit(1777337966.790:416): prog-id=96 op=LOAD Apr 28 00:59:26.969380 kernel: audit: type=1300 audit(1777337966.790:416): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000f4490 a2=98 a3=0 items=0 ppid=3087 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:59:26.969401 kernel: audit: type=1327 audit(1777337966.790:416): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636383037653162393330313739646331303264313535623763623333 Apr 28 00:59:27.168146 containerd[1643]: time="2026-04-28T00:59:27.161824600Z" level=error msg="get state for 66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b" error="context deadline exceeded" Apr 28 
00:59:27.301518 containerd[1643]: time="2026-04-28T00:59:27.172542659Z" level=warning msg="unknown status" status=0 Apr 28 00:59:27.896323 kubelet[2985]: E0428 00:59:27.864765 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.731s" Apr 28 00:59:28.638282 containerd[1643]: time="2026-04-28T00:59:28.617437613Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 28 00:59:30.762944 kubelet[2985]: E0428 00:59:30.711792 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.595s" Apr 28 00:59:33.378191 kubelet[2985]: E0428 00:59:33.375400 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:33.491995 kubelet[2985]: E0428 00:59:33.484094 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.63s" Apr 28 00:59:38.062770 systemd[1]: cri-containerd-04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f.scope: Deactivated successfully. Apr 28 00:59:38.080000 audit: BPF prog-id=88 op=UNLOAD Apr 28 00:59:38.184000 audit: BPF prog-id=84 op=UNLOAD Apr 28 00:59:38.228384 kernel: kauditd_printk_skb: 12 callbacks suppressed Apr 28 00:59:38.188255 systemd[1]: cri-containerd-04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f.scope: Consumed 10min 37.754s CPU time, 25.7M memory peak. 
Apr 28 00:59:38.450688 kernel: audit: type=1334 audit(1777337978.080:421): prog-id=88 op=UNLOAD Apr 28 00:59:38.458817 kernel: audit: type=1334 audit(1777337978.184:422): prog-id=84 op=UNLOAD Apr 28 00:59:41.396467 containerd[1643]: time="2026-04-28T00:59:41.296904018Z" level=info msg="received container exit event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030}" Apr 28 00:59:42.543201 kubelet[2985]: E0428 00:59:42.539564 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:47.076742 containerd[1643]: time="2026-04-28T00:59:47.075173813Z" level=info msg="StartContainer for \"66807e1b930179dc102d155b7cb3305034ff1bc67b57040611ccd564e337b10b\" returns successfully" Apr 28 00:59:47.387003 kubelet[2985]: E0428 00:59:47.218417 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.5s" Apr 28 00:59:48.865143 kubelet[2985]: E0428 00:59:48.864489 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:49.408996 kubelet[2985]: E0428 00:59:49.394411 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:49.858382 kubelet[2985]: E0428 00:59:49.857313 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.694s" Apr 28 00:59:52.496558 containerd[1643]: time="2026-04-28T00:59:52.470290000Z" level=error msg="ttrpc: received message on inactive stream" 
stream=63
Apr 28 00:59:53.186779 containerd[1643]: time="2026-04-28T00:59:53.173701445Z" level=error msg="failed to handle container TaskExit event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030}" error="failed to stop container: context deadline exceeded"
Apr 28 00:59:55.193306 containerd[1643]: time="2026-04-28T00:59:55.192929741Z" level=info msg="TaskExit event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030}"
Apr 28 00:59:55.233165 kubelet[2985]: E0428 00:59:55.232365 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 00:59:56.513242 kubelet[2985]: E0428 00:59:56.507145 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.34s"
Apr 28 01:00:04.756380 kubelet[2985]: E0428 01:00:04.753910 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:00:05.365804 containerd[1643]: time="2026-04-28T01:00:05.364623095Z" level=error msg="Failed to handle backOff event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030} for 04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 28 01:00:05.451799 containerd[1643]: time="2026-04-28T01:00:05.418494313Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 28 01:00:05.451799 containerd[1643]: time="2026-04-28T01:00:05.447814312Z" level=error msg="ttrpc: received message on inactive stream" stream=73
Apr 28 01:00:06.469369 kubelet[2985]: E0428 01:00:06.461098 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:00:08.705298 containerd[1643]: time="2026-04-28T01:00:08.661677710Z" level=info msg="TaskExit event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030}"
Apr 28 01:00:17.609122 kubelet[2985]: E0428 01:00:17.592719 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:00:19.508291 containerd[1643]: time="2026-04-28T01:00:19.503664832Z" level=error msg="Failed to handle backOff event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030} for 04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 28 01:00:19.922969 containerd[1643]: time="2026-04-28T01:00:19.875696363Z" level=error msg="ttrpc: received message on inactive stream" stream=81
Apr 28 01:00:20.364905 containerd[1643]: time="2026-04-28T01:00:20.222617804Z" level=error msg="ttrpc: received message on inactive stream" stream=85
Apr 28 01:00:22.474710 kubelet[2985]: E0428 01:00:22.462663 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:00:24.296310 containerd[1643]: time="2026-04-28T01:00:24.295501165Z" level=info msg="TaskExit event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030}"
Apr 28 01:00:24.545623 kubelet[2985]: E0428 01:00:23.957364 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.415s"
Apr 28 01:00:26.619885 kubelet[2985]: E0428 01:00:26.619232 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:00:34.527563 kubelet[2985]: E0428 01:00:34.526559 2985 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:00:34.816021 containerd[1643]: time="2026-04-28T01:00:34.658587679Z" level=error msg="Failed to handle backOff event container_id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" id:\"04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f\" pid:3229 exit_status:1 exited_at:{seconds:1777337980 nanos:710086030} for 04cfef914f0c78abd0148917046a940864958975d6a8c9b8292dd00de3bec16f" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 28 01:00:34.816021 containerd[1643]: time="2026-04-28T01:00:34.663130003Z" level=error msg="ttrpc: received message on inactive stream" stream=91
Apr 28 01:00:34.940553 kubelet[2985]: E0428 01:00:34.923274 2985 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.037s"
Apr 28 01:00:34.950779 kubelet[2985]: E0428 01:00:34.816326 2985 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:00:35.363742 containerd[1643]: time="2026-04-28T01:00:35.352200486Z" level=error msg="ttrpc: received message on inactive stream" stream=93
Apr 28 01:00:36.706152 kubelet[2985]: E0428 01:00:36.701282 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:00:37.883414 kubelet[2985]: E0428 01:00:37.878726 2985 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:00:37.994290 sudo[1874]: pam_unix(sudo:session): session closed for user root
Apr 28 01:00:37.995000 audit[1874]: AUDIT1106 pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 01:00:38.023000 audit[1874]: AUDIT1104 pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 01:00:38.122790 sshd-session[1863]: pam_unix(sshd:session): session closed for user core
Apr 28 01:00:38.466000 audit[1863]: AUDIT1106 pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 01:00:38.478000 audit[1863]: AUDIT1104 pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 01:00:38.655683 sshd[1873]: Connection closed by 10.0.0.1 port 48810
Apr 28 01:00:38.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-12290-10.0.0.20:22-10.0.0.1:48810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 01:00:38.665002 kernel: audit: type=1106 audit(1777338037.995:423): pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 01:00:38.641816 systemd[1]: sshd@6-12290-10.0.0.20:22-10.0.0.1:48810.service: Deactivated successfully.
Apr 28 01:00:38.665963 kernel: audit: type=1104 audit(1777338038.023:424): pid=1874 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 28 01:00:38.666036 kernel: audit: type=1106 audit(1777338038.466:425): pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 01:00:38.666060 kernel: audit: type=1104 audit(1777338038.478:426): pid=1863 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 28 01:00:38.666084 kernel: audit: type=1131 audit(1777338038.661:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-12290-10.0.0.20:22-10.0.0.1:48810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 01:00:38.854961 systemd[1]: session-8.scope: Deactivated successfully.
Apr 28 01:00:38.981386 systemd[1]: session-8.scope: Consumed 8min 2.792s CPU time, 157.2M memory peak.
Apr 28 01:00:39.315192 systemd-logind[1616]: Session 8 logged out. Waiting for processes to exit.