Apr 20 19:55:33.614278 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 19:55:33.614347 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:55:33.614361 kernel: BIOS-provided physical RAM map:
Apr 20 19:55:33.614371 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 20 19:55:33.614378 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 20 19:55:33.614386 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 20 19:55:33.614399 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 20 19:55:33.614407 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 20 19:55:33.614414 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 20 19:55:33.614422 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 20 19:55:33.614432 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 20 19:55:33.614442 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 20 19:55:33.614449 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 20 19:55:33.614459 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 20 19:55:33.614469 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 20 19:55:33.614480 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 20 19:55:33.614490 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 20 19:55:33.614500 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 20 19:55:33.614511 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 20 19:55:33.614523 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 20 19:55:33.614534 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 20 19:55:33.614544 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 20 19:55:33.614554 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 19:55:33.614562 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 19:55:33.614572 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 20 19:55:33.614580 kernel: NX (Execute Disable) protection: active
Apr 20 19:55:33.614588 kernel: APIC: Static calls initialized
Apr 20 19:55:33.614598 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 20 19:55:33.614606 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 20 19:55:33.614616 kernel: extended physical RAM map:
Apr 20 19:55:33.614627 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 20 19:55:33.614635 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 20 19:55:33.614645 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 20 19:55:33.614653 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 20 19:55:33.614661 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 20 19:55:33.614672 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 20 19:55:33.614682 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 20 19:55:33.614690 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 20 19:55:33.614701 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 20 19:55:33.614709 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 20 19:55:33.614721 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 20 19:55:33.614730 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 20 19:55:33.614741 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 20 19:55:33.614749 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 20 19:55:33.614759 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 20 19:55:33.614769 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 20 19:55:33.614777 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 20 19:55:33.614786 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 20 19:55:33.614795 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 20 19:55:33.614804 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 20 19:55:33.614813 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 20 19:55:33.614821 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 20 19:55:33.614829 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 20 19:55:33.614838 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 19:55:33.614848 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 19:55:33.614856 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 20 19:55:33.614865 kernel: efi: EFI v2.7 by EDK II
Apr 20 19:55:33.614875 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 20 19:55:33.614980 kernel: random: crng init done
Apr 20 19:55:33.614989 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 20 19:55:33.615003 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 20 19:55:33.615012 kernel: secureboot: Secure boot disabled
Apr 20 19:55:33.615021 kernel: SMBIOS 2.8 present.
Apr 20 19:55:33.615029 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 20 19:55:33.615037 kernel: DMI: Memory slots populated: 1/1
Apr 20 19:55:33.615046 kernel: Hypervisor detected: KVM
Apr 20 19:55:33.615054 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 20 19:55:33.615063 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 19:55:33.615071 kernel: kvm-clock: using sched offset of 6804735310 cycles
Apr 20 19:55:33.615081 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 19:55:33.615090 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 19:55:33.615099 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 19:55:33.615111 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 19:55:33.615122 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 20 19:55:33.615132 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 20 19:55:33.615142 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 19:55:33.615276 kernel: Using GB pages for direct mapping
Apr 20 19:55:33.615291 kernel: ACPI: Early table checksum verification disabled
Apr 20 19:55:33.615300 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 20 19:55:33.615312 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 20 19:55:33.615322 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:55:33.615332 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:55:33.615342 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 20 19:55:33.615352 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:55:33.615361 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:55:33.615371 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:55:33.615381 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:55:33.615389 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 20 19:55:33.615400 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 20 19:55:33.615409 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 20 19:55:33.615418 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 20 19:55:33.615428 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 20 19:55:33.615440 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 20 19:55:33.615450 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 20 19:55:33.615459 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 20 19:55:33.615470 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 20 19:55:33.615479 kernel: No NUMA configuration found
Apr 20 19:55:33.615488 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 20 19:55:33.615500 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 20 19:55:33.615509 kernel: Zone ranges:
Apr 20 19:55:33.615518 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 19:55:33.615526 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 20 19:55:33.615537 kernel: Normal empty
Apr 20 19:55:33.615546 kernel: Device empty
Apr 20 19:55:33.615556 kernel: Movable zone start for each node
Apr 20 19:55:33.615566 kernel: Early memory node ranges
Apr 20 19:55:33.615575 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 20 19:55:33.615584 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 20 19:55:33.615594 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 20 19:55:33.615604 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 20 19:55:33.615615 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 20 19:55:33.615624 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 20 19:55:33.615634 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 20 19:55:33.615643 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 20 19:55:33.615652 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 20 19:55:33.615662 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 19:55:33.615671 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 20 19:55:33.615683 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 20 19:55:33.615699 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 19:55:33.615709 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 20 19:55:33.615720 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 20 19:55:33.615730 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 20 19:55:33.615739 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 20 19:55:33.615749 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 20 19:55:33.615759 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 19:55:33.615769 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 19:55:33.615780 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 19:55:33.615790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 19:55:33.615800 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 19:55:33.615809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 19:55:33.615819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 19:55:33.615832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 19:55:33.615843 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 19:55:33.615854 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 19:55:33.615864 kernel: TSC deadline timer available
Apr 20 19:55:33.615874 kernel: CPU topo: Max. logical packages: 1
Apr 20 19:55:33.615981 kernel: CPU topo: Max. logical dies: 1
Apr 20 19:55:33.615991 kernel: CPU topo: Max. dies per package: 1
Apr 20 19:55:33.616003 kernel: CPU topo: Max. threads per core: 1
Apr 20 19:55:33.616014 kernel: CPU topo: Num. cores per package: 4
Apr 20 19:55:33.616024 kernel: CPU topo: Num. threads per package: 4
Apr 20 19:55:33.616034 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 19:55:33.616044 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 19:55:33.616054 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 19:55:33.616065 kernel: kvm-guest: setup PV sched yield
Apr 20 19:55:33.616075 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 20 19:55:33.616087 kernel: Booting paravirtualized kernel on KVM
Apr 20 19:55:33.616097 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 19:55:33.616108 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 19:55:33.616118 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 19:55:33.616128 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 19:55:33.616138 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 19:55:33.616358 kernel: kvm-guest: PV spinlocks enabled
Apr 20 19:55:33.616373 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 19:55:33.616385 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:55:33.616394 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 19:55:33.616403 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 19:55:33.616413 kernel: Fallback order for Node 0: 0
Apr 20 19:55:33.616422 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 20 19:55:33.616433 kernel: Policy zone: DMA32
Apr 20 19:55:33.616442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 19:55:33.616452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 19:55:33.616461 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 19:55:33.616470 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 19:55:33.616479 kernel: Dynamic Preempt: voluntary
Apr 20 19:55:33.616489 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 19:55:33.616500 kernel: rcu: RCU event tracing is enabled.
Apr 20 19:55:33.616511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 19:55:33.616521 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 19:55:33.616530 kernel: Rude variant of Tasks RCU enabled.
Apr 20 19:55:33.616541 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 19:55:33.616553 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 19:55:33.616563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 19:55:33.616574 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:55:33.616586 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:55:33.616597 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:55:33.616608 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 19:55:33.616618 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 19:55:33.616627 kernel: Console: colour dummy device 80x25
Apr 20 19:55:33.616637 kernel: printk: legacy console [ttyS0] enabled
Apr 20 19:55:33.616647 kernel: ACPI: Core revision 20240827
Apr 20 19:55:33.616658 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 19:55:33.616668 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 19:55:33.616678 kernel: x2apic enabled
Apr 20 19:55:33.616688 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 19:55:33.616699 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 19:55:33.616710 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 19:55:33.616721 kernel: kvm-guest: setup PV IPIs
Apr 20 19:55:33.617120 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 19:55:33.617135 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 19:55:33.617174 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 19:55:33.617184 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 19:55:33.617194 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 19:55:33.617204 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 19:55:33.617215 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 19:55:33.617228 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 19:55:33.617238 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 19:55:33.617249 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 19:55:33.617259 kernel: RETBleed: Vulnerable
Apr 20 19:55:33.617270 kernel: Speculative Store Bypass: Vulnerable
Apr 20 19:55:33.617280 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 19:55:33.617291 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 19:55:33.617302 kernel: active return thunk: its_return_thunk
Apr 20 19:55:33.617312 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 19:55:33.617323 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 19:55:33.617333 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 19:55:33.617343 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 19:55:33.617354 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 19:55:33.617364 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 19:55:33.617375 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 19:55:33.617384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 19:55:33.617394 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 19:55:33.617404 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 19:55:33.617414 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 19:55:33.617423 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 19:55:33.617434 kernel: Freeing SMP alternatives memory: 32K
Apr 20 19:55:33.617447 kernel: pid_max: default: 32768 minimum: 301
Apr 20 19:55:33.617457 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 19:55:33.617468 kernel: landlock: Up and running.
Apr 20 19:55:33.617478 kernel: SELinux: Initializing.
Apr 20 19:55:33.617488 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 19:55:33.617499 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 19:55:33.617510 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 19:55:33.617522 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 19:55:33.617531 kernel: signal: max sigframe size: 3632
Apr 20 19:55:33.617540 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 19:55:33.617549 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 19:55:33.617558 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 19:55:33.617567 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 19:55:33.617577 kernel: smp: Bringing up secondary CPUs ...
Apr 20 19:55:33.617586 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 19:55:33.617598 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 19:55:33.617608 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 19:55:33.617618 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 19:55:33.617629 kernel: Memory: 2399268K/2565800K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 160640K reserved, 0K cma-reserved)
Apr 20 19:55:33.617639 kernel: devtmpfs: initialized
Apr 20 19:55:33.617649 kernel: x86/mm: Memory block size: 128MB
Apr 20 19:55:33.617660 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 20 19:55:33.617671 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 20 19:55:33.617681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 20 19:55:33.617691 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 20 19:55:33.617701 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 20 19:55:33.617710 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 20 19:55:33.617720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 19:55:33.617730 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 19:55:33.617742 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 19:55:33.617752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 19:55:33.617762 kernel: audit: initializing netlink subsys (disabled)
Apr 20 19:55:33.617771 kernel: audit: type=2000 audit(1776714927.633:1): state=initialized audit_enabled=0 res=1
Apr 20 19:55:33.617782 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 19:55:33.617792 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 19:55:33.617802 kernel: cpuidle: using governor menu
Apr 20 19:55:33.617814 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 20 19:55:33.617824 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 19:55:33.617834 kernel: dca service started, version 1.12.1
Apr 20 19:55:33.617844 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 20 19:55:33.617854 kernel: PCI: Using configuration type 1 for base access
Apr 20 19:55:33.617864 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 19:55:33.617874 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 19:55:33.617906 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 19:55:33.617916 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 19:55:33.617925 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 19:55:33.617934 kernel: ACPI: Added _OSI(Module Device)
Apr 20 19:55:33.617942 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 19:55:33.617952 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 19:55:33.617961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 19:55:33.617971 kernel: ACPI: Interpreter enabled
Apr 20 19:55:33.617980 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 20 19:55:33.617990 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 19:55:33.617999 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 19:55:33.618009 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 19:55:33.618018 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 19:55:33.618028 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 19:55:33.618429 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 19:55:33.618579 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 19:55:33.618712 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 19:55:33.618726 kernel: PCI host bridge to bus 0000:00
Apr 20 19:55:33.618847 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 19:55:33.619015 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 19:55:33.619141 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 19:55:33.619396 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 20 19:55:33.619515 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 20 19:55:33.619620 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 20 19:55:33.619730 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 19:55:33.619875 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 19:55:33.620041 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 19:55:33.620329 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 20 19:55:33.620461 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 20 19:55:33.620575 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 20 19:55:33.620696 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 19:55:33.620822 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 19:55:33.620968 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 20 19:55:33.621090 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 20 19:55:33.621241 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 20 19:55:33.621368 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 19:55:33.621489 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 20 19:55:33.621604 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 20 19:55:33.621720 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 20 19:55:33.621844 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 19:55:33.621990 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 20 19:55:33.622112 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 20 19:55:33.622263 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 20 19:55:33.622381 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 20 19:55:33.622504 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 19:55:33.622618 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 19:55:33.622744 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 19:55:33.622864 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 20 19:55:33.623126 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 20 19:55:33.623294 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 19:55:33.623411 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 20 19:55:33.623423 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 19:55:33.623433 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 19:55:33.623447 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 19:55:33.623456 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 19:55:33.623466 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 19:55:33.623476 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 19:55:33.623486 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 19:55:33.623496 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 19:55:33.623505 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 19:55:33.623516 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 19:55:33.623525 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 19:55:33.623535 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 19:55:33.623544 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 19:55:33.623554 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 19:55:33.623563 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 19:55:33.623573 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 19:55:33.623584 kernel: iommu: Default domain type: Translated
Apr 20 19:55:33.623593 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 19:55:33.623603 kernel: efivars: Registered efivars operations
Apr 20 19:55:33.623613 kernel: PCI: Using ACPI for IRQ routing
Apr 20 19:55:33.623622 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 19:55:33.623632 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 20 19:55:33.623641 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 20 19:55:33.623651 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 20 19:55:33.623662 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 20 19:55:33.623671 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 20 19:55:33.623681 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 20 19:55:33.623690 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 20 19:55:33.623699 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 20 19:55:33.623818 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 19:55:33.623969 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 19:55:33.624104 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 19:55:33.624118 kernel: vgaarb: loaded
Apr 20 19:55:33.624129 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 19:55:33.624139 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 19:55:33.624306 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 19:55:33.624319 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 19:55:33.624337 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 19:55:33.624347 kernel: pnp: PnP ACPI init
Apr 20 19:55:33.624500 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 20 19:55:33.624514 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 19:55:33.624525 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 19:55:33.624549 kernel: NET: Registered PF_INET protocol family
Apr 20 19:55:33.624561 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 19:55:33.624572 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 19:55:33.624582 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 19:55:33.624593 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 19:55:33.624604 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 19:55:33.624616 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 19:55:33.624627 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 19:55:33.624641 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 19:55:33.624650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 19:55:33.624660 kernel: NET: Registered PF_XDP protocol family
Apr 20 19:55:33.624781 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 20 19:55:33.624951 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 20 19:55:33.625083 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 19:55:33.626011 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 19:55:33.626185 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 19:55:33.626300 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 20 19:55:33.626406 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 20 19:55:33.626513 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 20 19:55:33.626525 kernel: PCI: CLS 0 bytes, default 64
Apr 20 19:55:33.626535 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 19:55:33.626547 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 19:55:33.626561 kernel: Initialise system trusted keyrings
Apr 20 19:55:33.626573 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 19:55:33.626584 kernel: Key type asymmetric registered
Apr 20 19:55:33.626593 kernel: Asymmetric key parser 'x509' registered
Apr 20 19:55:33.626605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 19:55:33.626614 kernel: io scheduler mq-deadline registered
Apr 20 19:55:33.626625 kernel: io scheduler kyber registered
Apr 20 19:55:33.626635 kernel: io scheduler bfq registered
Apr 20 19:55:33.626645 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 19:55:33.626657 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 19:55:33.626668 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 19:55:33.626680 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 19:55:33.626690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 19:55:33.626700 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 19:55:33.626710 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 19:55:33.626720 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 19:55:33.626729 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 19:55:33.626852 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 19:55:33.626869 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 20 19:55:33.627006 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 19:55:33.627116 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T19:55:30 UTC (1776714930)
Apr 20 19:55:33.627442 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 20 19:55:33.627469 kernel: intel_pstate: CPU model not supported
Apr 20 19:55:33.627479 kernel: efifb: probing for efifb
Apr 20 19:55:33.627512 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 20 19:55:33.627533 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 20 19:55:33.627543 kernel: efifb: scrolling: redraw
Apr 20 19:55:33.627552 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 20 19:55:33.627562 kernel: Console: switching to colour frame buffer device 160x50
Apr 20 19:55:33.627573 kernel: fb0: EFI VGA frame buffer device
Apr 20 19:55:33.627584 kernel: pstore: Using crash dump compression: deflate
Apr 20 19:55:33.627594 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 20 19:55:33.627607 kernel: NET: Registered PF_INET6 protocol family
Apr 20 19:55:33.627617 kernel: Segment Routing with IPv6
Apr 20 19:55:33.627628 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 19:55:33.627638 kernel: NET: Registered PF_PACKET protocol family
Apr 20 19:55:33.627647 kernel: Key type dns_resolver registered
Apr 20 19:55:33.627657 kernel: IPI shorthand broadcast: enabled
Apr 20 19:55:33.627667 kernel: sched_clock: Marking stable (2962048516, 884326458)->(4316449133, -470074159)
Apr 20 19:55:33.627679 kernel: registered taskstats version 1
Apr 20 19:55:33.627721 kernel: Loading compiled-in X.509 certificates
Apr 20 19:55:33.627732 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
Apr
20 19:55:33.627742 kernel: Demotion targets for Node 0: null Apr 20 19:55:33.627753 kernel: Key type .fscrypt registered Apr 20 19:55:33.627762 kernel: Key type fscrypt-provisioning registered Apr 20 19:55:33.627772 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 20 19:55:33.627784 kernel: ima: Allocated hash algorithm: sha1 Apr 20 19:55:33.627794 kernel: ima: No architecture policies found Apr 20 19:55:33.627804 kernel: clk: Disabling unused clocks Apr 20 19:55:33.627814 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 20 19:55:33.627825 kernel: Write protecting the kernel read-only data: 47104k Apr 20 19:55:33.627835 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 20 19:55:33.627845 kernel: Run /init as init process Apr 20 19:55:33.627857 kernel: with arguments: Apr 20 19:55:33.627867 kernel: /init Apr 20 19:55:33.627894 kernel: with environment: Apr 20 19:55:33.627904 kernel: HOME=/ Apr 20 19:55:33.627914 kernel: TERM=linux Apr 20 19:55:33.627925 kernel: SCSI subsystem initialized Apr 20 19:55:33.627934 kernel: libata version 3.00 loaded. 
Apr 20 19:55:33.628085 kernel: ahci 0000:00:1f.2: version 3.0
Apr 20 19:55:33.628100 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 20 19:55:33.628282 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 20 19:55:33.628402 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 20 19:55:33.628522 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 20 19:55:33.628654 kernel: scsi host0: ahci
Apr 20 19:55:33.628788 kernel: scsi host1: ahci
Apr 20 19:55:33.629004 kernel: scsi host2: ahci
Apr 20 19:55:33.629184 kernel: scsi host3: ahci
Apr 20 19:55:33.629325 kernel: scsi host4: ahci
Apr 20 19:55:33.629459 kernel: scsi host5: ahci
Apr 20 19:55:33.629474 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Apr 20 19:55:33.629488 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Apr 20 19:55:33.629498 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Apr 20 19:55:33.629508 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Apr 20 19:55:33.629519 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Apr 20 19:55:33.629529 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Apr 20 19:55:33.629539 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 20 19:55:33.629551 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 20 19:55:33.629560 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 20 19:55:33.629570 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 20 19:55:33.629580 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 19:55:33.629589 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 20 19:55:33.629599 kernel: ata3.00: applying bridge limits
Apr 20 19:55:33.629608 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 20 19:55:33.629618 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 19:55:33.629629 kernel: ata3.00: configured for UDMA/100
Apr 20 19:55:33.629639 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 20 19:55:33.629780 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 20 19:55:33.629941 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 20 19:55:33.630066 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Apr 20 19:55:33.630084 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 20 19:55:33.630094 kernel: GPT:16515071 != 27000831
Apr 20 19:55:33.630104 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 20 19:55:33.630113 kernel: GPT:16515071 != 27000831
Apr 20 19:55:33.630123 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 20 19:55:33.630132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 20 19:55:33.630302 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 20 19:55:33.630318 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 20 19:55:33.630452 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 20 19:55:33.630466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 20 19:55:33.630476 kernel: device-mapper: uevent: version 1.0.3
Apr 20 19:55:33.630486 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 20 19:55:33.630497 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Apr 20 19:55:33.630507 kernel: raid6: avx512x4 gen() 20392 MB/s
Apr 20 19:55:33.630520 kernel: raid6: avx512x2 gen() 31417 MB/s
Apr 20 19:55:33.630530 kernel: raid6: avx512x1 gen() 26566 MB/s
Apr 20 19:55:33.630540 kernel: raid6: avx2x4 gen() 18186 MB/s
Apr 20 19:55:33.630550 kernel: raid6: avx2x2 gen() 9051 MB/s
Apr 20 19:55:33.630560 kernel: raid6: avx2x1 gen() 13325 MB/s
Apr 20 19:55:33.630570 kernel: raid6: using algorithm avx512x2 gen() 31417 MB/s
Apr 20 19:55:33.630580 kernel: raid6: .... xor() 18243 MB/s, rmw enabled
Apr 20 19:55:33.630593 kernel: raid6: using avx512x2 recovery algorithm
Apr 20 19:55:33.630604 kernel: xor: automatically using best checksumming function avx
Apr 20 19:55:33.630613 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 20 19:55:33.630623 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181)
Apr 20 19:55:33.630634 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f
Apr 20 19:55:33.630643 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 20 19:55:33.630653 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 20 19:55:33.630665 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 20 19:55:33.630674 kernel: loop: module loaded
Apr 20 19:55:33.630684 kernel: loop0: detected capacity change from 0 to 106960
Apr 20 19:55:33.630694 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 20 19:55:33.630706 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored
Apr 20 19:55:33.630720 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored
Apr 20 19:55:33.630733 systemd[1]: Successfully made /usr/ read-only.
Apr 20 19:55:33.630745 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 20 19:55:33.630756 systemd[1]: Detected virtualization kvm.
Apr 20 19:55:33.630767 systemd[1]: Detected architecture x86-64.
Apr 20 19:55:33.630778 systemd[1]: Running in initrd.
Apr 20 19:55:33.630789 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 19:55:33.630799 systemd[1]: No hostname configured, using default hostname.
Apr 20 19:55:33.630811 systemd[1]: Hostname set to .
Apr 20 19:55:33.630822 systemd[1]: Queued start job for default target initrd.target.
Apr 20 19:55:33.630832 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Apr 20 19:55:33.630843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 19:55:33.630854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 19:55:33.630865 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 20 19:55:33.630924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 19:55:33.630939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 20 19:55:33.630950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 20 19:55:33.630961 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 19:55:33.630972 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 19:55:33.630983 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 19:55:33.630997 systemd[1]: Reached target paths.target - Path Units.
Apr 20 19:55:33.631008 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 19:55:33.631018 systemd[1]: Reached target swap.target - Swaps.
Apr 20 19:55:33.631028 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 19:55:33.631040 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 19:55:33.631052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 19:55:33.631063 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 19:55:33.631077 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 20 19:55:33.631088 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 20 19:55:33.631099 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 19:55:33.631110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 19:55:33.631122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 20 19:55:33.631133 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 19:55:33.631176 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 20 19:55:33.631190 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 20 19:55:33.631201 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 19:55:33.631211 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 20 19:55:33.631223 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 20 19:55:33.631233 systemd[1]: Starting systemd-fsck-usr.service...
Apr 20 19:55:33.631247 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 19:55:33.631258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 19:55:33.631269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:55:33.631279 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 20 19:55:33.631291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 19:55:33.631304 systemd[1]: Finished systemd-fsck-usr.service.
Apr 20 19:55:33.631315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 20 19:55:33.631391 systemd-journald[320]: Collecting audit messages is enabled.
Apr 20 19:55:33.631421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:55:33.631436 systemd-journald[320]: Journal started
Apr 20 19:55:33.631459 systemd-journald[320]: Runtime Journal (/run/log/journal/2715a5e161b3425ca1d87e0a256c33b3) is 6M, max 48M, 42M free.
Apr 20 19:55:33.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.638203 kernel: audit: type=1130 audit(1776714933.633:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.638255 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 19:55:33.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.648244 kernel: audit: type=1130 audit(1776714933.642:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.648687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 19:55:33.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.657397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 20 19:55:33.665112 kernel: audit: type=1130 audit(1776714933.651:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.667655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 20 19:55:33.669268 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 20 19:55:33.673626 systemd-modules-load[321]: Inserted module 'br_netfilter'
Apr 20 19:55:33.675717 kernel: Bridge firewalling registered
Apr 20 19:55:33.676224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 19:55:33.676518 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 19:55:33.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.682493 kernel: audit: type=1130 audit(1776714933.676:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.682261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 20 19:55:33.703086 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 19:55:33.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.716558 kernel: audit: type=1130 audit(1776714933.703:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.716940 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 19:55:33.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.728709 kernel: audit: type=1130 audit(1776714933.722:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.729054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 20 19:55:33.731953 systemd-tmpfiles[337]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 20 19:55:33.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.744782 kernel: audit: type=1130 audit(1776714933.734:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.734370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 20 19:55:33.744921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 19:55:33.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.754233 kernel: audit: type=1130 audit(1776714933.749:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.796000 audit: BPF prog-id=5 op=LOAD
Apr 20 19:55:33.798370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 20 19:55:33.802959 kernel: audit: type=1334 audit(1776714933.796:10): prog-id=5 op=LOAD
Apr 20 19:55:33.811663 dracut-cmdline[353]: dracut-109
Apr 20 19:55:33.817212 dracut-cmdline[353]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:55:33.872457 systemd-resolved[356]: Positive Trust Anchors:
Apr 20 19:55:33.872477 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 20 19:55:33.872480 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 20 19:55:33.872507 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 20 19:55:33.936225 systemd-resolved[356]: Defaulting to hostname 'linux'.
Apr 20 19:55:33.939455 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 20 19:55:33.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.951351 kernel: audit: type=1130 audit(1776714933.946:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:33.946306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 20 19:55:34.097603 kernel: Loading iSCSI transport class v2.0-870.
Apr 20 19:55:34.123289 kernel: iscsi: registered transport (tcp)
Apr 20 19:55:34.156631 kernel: iscsi: registered transport (qla4xxx)
Apr 20 19:55:34.156740 kernel: QLogic iSCSI HBA Driver
Apr 20 19:55:34.192137 kernel: hrtimer: interrupt took 3522215 ns
Apr 20 19:55:34.226822 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 19:55:34.251574 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 19:55:34.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.259741 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 19:55:34.361026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 20 19:55:34.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.389695 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 20 19:55:34.391317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 20 19:55:34.450538 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 19:55:34.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.452000 audit: BPF prog-id=6 op=LOAD
Apr 20 19:55:34.452000 audit: BPF prog-id=7 op=LOAD
Apr 20 19:55:34.453021 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 19:55:34.515278 systemd-udevd[583]: Using default interface naming scheme 'v258'.
Apr 20 19:55:34.543049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 19:55:34.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.552297 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 20 19:55:34.604463 dracut-pre-trigger[654]: rd.md=0: removing MD RAID activation
Apr 20 19:55:34.622606 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 19:55:34.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.628000 audit: BPF prog-id=8 op=LOAD
Apr 20 19:55:34.628988 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 20 19:55:34.657195 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 19:55:34.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.681554 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 19:55:34.739257 systemd-networkd[722]: lo: Link UP
Apr 20 19:55:34.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.739273 systemd-networkd[722]: lo: Gained carrier
Apr 20 19:55:34.740307 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 20 19:55:34.742834 systemd[1]: Reached target network.target - Network.
Apr 20 19:55:34.802793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 19:55:34.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:34.808503 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 20 19:55:34.947522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 20 19:55:35.027595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 20 19:55:35.046217 kernel: cryptd: max_cpu_qlen set to 1000
Apr 20 19:55:35.047924 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 20 19:55:35.061214 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 20 19:55:35.090449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 20 19:55:35.094207 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 20 19:55:35.103099 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 19:55:35.103448 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 20 19:55:35.106672 systemd-networkd[722]: eth0: Link UP
Apr 20 19:55:35.108346 systemd-networkd[722]: eth0: Gained carrier
Apr 20 19:55:35.116128 kernel: AES CTR mode by8 optimization enabled
Apr 20 19:55:35.108370 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 19:55:35.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:35.122676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 19:55:35.122843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:55:35.123243 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:55:35.147465 disk-uuid[822]: Primary Header is updated.
Apr 20 19:55:35.147465 disk-uuid[822]: Secondary Entries is updated.
Apr 20 19:55:35.147465 disk-uuid[822]: Secondary Header is updated.
Apr 20 19:55:35.130508 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:55:35.137259 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 20 19:55:35.161487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 19:55:35.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:35.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:35.161605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:55:35.169023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:55:35.209372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:55:35.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:35.306881 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 20 19:55:35.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:35.310102 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 19:55:35.312473 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 19:55:35.316441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 19:55:35.324966 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 20 19:55:35.367914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 19:55:35.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:36.209403 disk-uuid[841]: Warning: The kernel is still using the old partition table.
Apr 20 19:55:36.209403 disk-uuid[841]: The new table will be used at the next reboot or after you
Apr 20 19:55:36.209403 disk-uuid[841]: run partprobe(8) or kpartx(8)
Apr 20 19:55:36.209403 disk-uuid[841]: The operation has completed successfully.
Apr 20 19:55:36.229860 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 20 19:55:36.230366 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 20 19:55:36.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:36.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:55:36.240200 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 20 19:55:36.338290 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897) Apr 20 19:55:36.345007 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:55:36.345083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:55:36.351873 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:55:36.352239 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:55:36.365762 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:55:36.368798 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 20 19:55:36.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:36.371397 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 20 19:55:36.576546 ignition[916]: Ignition 2.24.0 Apr 20 19:55:36.576679 ignition[916]: Stage: fetch-offline Apr 20 19:55:36.576728 ignition[916]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:55:36.576735 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:55:36.576799 ignition[916]: parsed url from cmdline: "" Apr 20 19:55:36.576801 ignition[916]: no config URL provided Apr 20 19:55:36.576866 ignition[916]: reading system config file "/usr/lib/ignition/user.ign" Apr 20 19:55:36.576872 ignition[916]: no config at "/usr/lib/ignition/user.ign" Apr 20 19:55:36.576921 ignition[916]: op(1): [started] loading QEMU firmware config module Apr 20 19:55:36.576925 ignition[916]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 20 19:55:36.588246 ignition[916]: op(1): [finished] loading QEMU firmware config module Apr 20 19:55:36.646752 ignition[916]: parsing config with SHA512: deefe14000abdc70cb941c31948fdc288f537639030ea07ba2057be2a0acfbae4f34e80a105741d07bc6b2a6907ba3c445087c0c93ab2336a4e824ea3164184f Apr 20 19:55:36.651800 unknown[916]: fetched base config from "system" Apr 20 19:55:36.651931 unknown[916]: fetched user config from "qemu" Apr 20 19:55:36.653659 ignition[916]: fetch-offline: fetch-offline passed Apr 20 19:55:36.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:36.656969 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 20 19:55:36.654305 ignition[916]: Ignition finished successfully Apr 20 19:55:36.661293 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 20 19:55:36.662310 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 20 19:55:36.745822 ignition[926]: Ignition 2.24.0 Apr 20 19:55:36.745842 ignition[926]: Stage: kargs Apr 20 19:55:36.746384 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:55:36.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:36.750932 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 20 19:55:36.746401 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:55:36.756002 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 20 19:55:36.747366 ignition[926]: kargs: kargs passed Apr 20 19:55:36.747415 ignition[926]: Ignition finished successfully Apr 20 19:55:36.794948 ignition[933]: Ignition 2.24.0 Apr 20 19:55:36.794969 ignition[933]: Stage: disks Apr 20 19:55:36.795204 ignition[933]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:55:36.795213 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:55:36.796277 ignition[933]: disks: disks passed Apr 20 19:55:36.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:36.800978 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 20 19:55:36.796339 ignition[933]: Ignition finished successfully Apr 20 19:55:36.803312 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 20 19:55:36.807696 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 20 19:55:36.811559 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 19:55:36.819970 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 19:55:36.827132 systemd[1]: Reached target basic.target - Basic System. 
Apr 20 19:55:36.841953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 20 19:55:36.918131 systemd-fsck[943]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 20 19:55:36.924245 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 20 19:55:36.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:36.927818 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 20 19:55:37.097552 systemd-networkd[722]: eth0: Gained IPv6LL Apr 20 19:55:37.129432 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none. Apr 20 19:55:37.129773 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 20 19:55:37.133682 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 20 19:55:37.138563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 19:55:37.141952 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 20 19:55:37.145022 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 20 19:55:37.145081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 20 19:55:37.145113 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 19:55:37.183299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 20 19:55:37.186584 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 20 19:55:37.201726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (951) Apr 20 19:55:37.201775 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:55:37.201796 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:55:37.212746 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:55:37.212813 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:55:37.215062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 20 19:55:37.462255 kernel: loop1: detected capacity change from 0 to 43472 Apr 20 19:55:37.499274 kernel: loop1: p1 p2 p3 Apr 20 19:55:37.546937 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:37.547004 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:37.547015 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:37.548528 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:37.549667 systemd-confext[1041]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 19:55:37.585712 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:37.759377 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. 
Apr 20 19:55:37.791220 kernel: loop2: detected capacity change from 0 to 43472 Apr 20 19:55:37.794257 kernel: loop2: p1 p2 p3 Apr 20 19:55:37.814263 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:37.814357 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:37.814374 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:37.816889 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:37.820930 (sd-merge)[1051]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 19:55:37.842303 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:37.992272 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 20 19:55:37.992502 (sd-merge)[1051]: Using extensions '00-flatcar-default.raw'. Apr 20 19:55:37.996321 (sd-merge)[1051]: Merged extensions into '/sysroot/etc'. Apr 20 19:55:38.019608 initrd-setup-root[1058]: /etc 00-flatcar-default Mon 2026-04-20 19:55:33 UTC Apr 20 19:55:38.022607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 20 19:55:38.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:38.027800 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 20 19:55:38.032932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 20 19:55:38.054208 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 20 19:55:38.058439 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:55:38.133188 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 20 19:55:38.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:38.140807 ignition[1067]: INFO : Ignition 2.24.0 Apr 20 19:55:38.140807 ignition[1067]: INFO : Stage: mount Apr 20 19:55:38.140807 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:55:38.140807 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:55:38.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:38.155766 ignition[1067]: INFO : mount: mount passed Apr 20 19:55:38.155766 ignition[1067]: INFO : Ignition finished successfully Apr 20 19:55:38.144370 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 20 19:55:38.149484 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 20 19:55:38.178173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 19:55:38.227209 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1079) Apr 20 19:55:38.232720 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:55:38.232791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:55:38.242981 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:55:38.243062 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:55:38.248220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 20 19:55:38.330304 ignition[1096]: INFO : Ignition 2.24.0 Apr 20 19:55:38.330304 ignition[1096]: INFO : Stage: files Apr 20 19:55:38.334315 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:55:38.334315 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:55:38.340071 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping Apr 20 19:55:38.343946 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 20 19:55:38.343946 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 20 19:55:38.353027 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 20 19:55:38.359255 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 20 19:55:38.362632 unknown[1096]: wrote ssh authorized keys file for user: core Apr 20 19:55:38.364955 ignition[1096]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 20 19:55:38.368124 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 19:55:38.372489 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 20 19:55:38.442996 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 20 19:55:38.532110 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 19:55:38.532110 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:55:38.544694 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 20 19:55:38.931473 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 20 19:55:39.742019 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 19:55:39.742019 ignition[1096]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 20 19:55:39.761018 ignition[1096]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 20 19:55:39.887693 ignition[1096]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 19:55:39.903391 ignition[1096]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 19:55:39.903391 ignition[1096]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 20 19:55:39.903391 ignition[1096]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 20 19:55:39.903391 ignition[1096]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 20 19:55:39.916013 ignition[1096]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 20 19:55:39.916013 ignition[1096]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 20 19:55:39.916013 ignition[1096]: INFO : files: files passed Apr 20 19:55:39.916013 ignition[1096]: INFO : Ignition finished successfully Apr 20 19:55:39.914958 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 20 19:55:39.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:39.935547 kernel: kauditd_printk_skb: 27 callbacks suppressed Apr 20 19:55:39.920227 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 20 19:55:39.937824 kernel: audit: type=1130 audit(1776714939.917:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:39.938441 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 20 19:55:39.957831 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 20 19:55:39.988938 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 20 19:55:39.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:39.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:40.002943 kernel: audit: type=1130 audit(1776714939.991:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:40.002973 kernel: audit: type=1131 audit(1776714939.991:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:40.027138 initrd-setup-root-after-ignition[1128]: grep: /sysroot/oem/oem-release: No such file or directory Apr 20 19:55:40.033682 initrd-setup-root-after-ignition[1130]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:55:40.033682 initrd-setup-root-after-ignition[1130]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:55:40.040647 initrd-setup-root-after-ignition[1134]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:55:40.059479 kernel: loop3: detected capacity change from 0 to 43472 Apr 20 19:55:40.080392 kernel: loop3: p1 p2 p3 Apr 20 19:55:40.115125 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:40.115292 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:40.115307 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:40.119545 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:40.119633 systemd-confext[1136]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument Apr 20 19:55:40.128316 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 
19:55:40.254523 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 20 19:55:40.320225 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:55:40.327436 kernel: loop4: p1 p2 p3 Apr 20 19:55:40.417582 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:40.417680 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:40.417764 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:40.420704 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:40.420816 (sd-merge)[1144]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Apr 20 19:55:40.430294 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:40.585736 (sd-merge)[1144]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 19:55:40.590197 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 20 19:55:40.607026 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Apr 20 19:55:40.611227 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 19:55:40.614190 kernel: loop4: p1 p2 p3 Apr 20 19:55:40.653468 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:40.653555 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:40.653573 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:40.655468 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:40.657041 systemd-sysext[1152]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument Apr 20 19:55:40.672537 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:40.793593 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 19:55:40.843242 kernel: loop5: detected capacity change from 0 to 217752 Apr 20 19:55:40.950599 kernel: loop6: detected capacity change from 0 to 178200 Apr 20 19:55:40.960487 kernel: loop6: p1 p2 p3 Apr 20 19:55:41.082218 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:41.082332 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:41.082350 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:41.086462 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:41.086317 systemd-sysext[1152]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 19:55:41.092449 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:41.264658 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 20 19:55:41.306205 kernel: loop7: detected capacity change from 0 to 378016 Apr 20 19:55:41.309594 kernel: loop7: p1 p2 p3 Apr 20 19:55:41.414436 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:41.414524 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:41.414542 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:41.418556 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:41.418642 (sd-merge)[1169]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument Apr 20 19:55:41.430772 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:41.650496 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 19:55:41.658211 kernel: loop1: detected capacity change from 0 to 217752 Apr 20 19:55:41.701510 kernel: loop3: detected capacity change from 0 to 178200 Apr 20 19:55:41.714353 kernel: loop3: p1 p2 p3 Apr 20 19:55:41.755750 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:41.755842 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:41.755857 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:41.759630 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:41.758796 (sd-merge)[1169]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:3) failed: Invalid argument Apr 20 19:55:41.813203 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:41.902272 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Apr 20 19:55:41.908739 (sd-merge)[1169]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.35.1-x86-64.raw'. Apr 20 19:55:41.916971 (sd-merge)[1169]: Merged extensions into '/sysroot/usr'. Apr 20 19:55:41.920808 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 19:55:41.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:41.932247 kernel: audit: type=1130 audit(1776714941.923:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:41.923585 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 20 19:55:41.935042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Apr 20 19:55:42.018371 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 20 19:55:42.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.035639 kernel: audit: type=1130 audit(1776714942.020:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.018813 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 20 19:55:42.044633 kernel: audit: type=1131 audit(1776714942.020:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.020688 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies. Apr 20 19:55:42.020880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 20 19:55:42.035492 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 20 19:55:42.040513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 20 19:55:42.042123 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 20 19:55:42.099396 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 19:55:42.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:42.112872 kernel: audit: type=1130 audit(1776714942.099:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.102803 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 20 19:55:42.140883 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:55:42.149591 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 19:55:42.184061 systemd[1]: Stopped target timers.target - Timer Units. Apr 20 19:55:42.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.197413 kernel: audit: type=1131 audit(1776714942.187:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.187424 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 20 19:55:42.187597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 19:55:42.188368 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 20 19:55:42.200251 systemd[1]: Stopped target basic.target - Basic System. Apr 20 19:55:42.204142 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 20 19:55:42.207313 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 19:55:42.221307 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 20 19:55:42.230874 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 20 19:55:42.235540 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Apr 20 19:55:42.242326 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 19:55:42.247587 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 20 19:55:42.248629 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 20 19:55:42.257631 systemd[1]: Stopped target swap.target - Swaps. Apr 20 19:55:42.262335 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 20 19:55:42.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.279835 kernel: audit: type=1131 audit(1776714942.266:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.262538 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 20 19:55:42.267109 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 20 19:55:42.295974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 19:55:42.297618 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 20 19:55:42.302289 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 19:55:42.307623 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 20 19:55:42.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.322505 kernel: audit: type=1131 audit(1776714942.313:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:42.307964 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 20 19:55:42.313877 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 20 19:55:42.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.314044 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 20 19:55:42.329326 systemd[1]: Stopped target paths.target - Path Units. Apr 20 19:55:42.334977 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 20 19:55:42.335733 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 19:55:42.341112 systemd[1]: Stopped target slices.target - Slice Units. Apr 20 19:55:42.353713 systemd[1]: Stopped target sockets.target - Socket Units. Apr 20 19:55:42.353975 systemd[1]: iscsid.socket: Deactivated successfully. Apr 20 19:55:42.354123 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 19:55:42.361467 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 20 19:55:42.361565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 19:55:42.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.393247 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Apr 20 19:55:42.393397 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Apr 20 19:55:42.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:42.402139 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 20 19:55:42.402547 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 19:55:42.406687 systemd[1]: ignition-files.service: Deactivated successfully. Apr 20 19:55:42.407028 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 20 19:55:42.417936 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 20 19:55:42.425758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 20 19:55:42.431122 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 20 19:55:42.431379 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 19:55:42.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.436544 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 20 19:55:42.436687 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 19:55:42.436864 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 20 19:55:42.437115 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 19:55:42.448234 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 20 19:55:42.525239 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 20 19:55:42.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.557412 ignition[1200]: INFO : Ignition 2.24.0 Apr 20 19:55:42.557412 ignition[1200]: INFO : Stage: umount Apr 20 19:55:42.557412 ignition[1200]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:55:42.557412 ignition[1200]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:55:42.557412 ignition[1200]: INFO : umount: umount passed Apr 20 19:55:42.557412 ignition[1200]: INFO : Ignition finished successfully Apr 20 19:55:42.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.562421 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 20 19:55:42.562582 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 20 19:55:42.578346 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 20 19:55:42.579281 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 20 19:55:42.579395 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 20 19:55:42.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.589131 systemd[1]: Stopped target network.target - Network. Apr 20 19:55:42.589349 systemd[1]: ignition-disks.service: Deactivated successfully. 
Apr 20 19:55:42.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.589401 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 20 19:55:42.599673 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 20 19:55:42.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.599755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 20 19:55:42.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.601655 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 20 19:55:42.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.601712 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 20 19:55:42.605995 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 20 19:55:42.606101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 20 19:55:42.606541 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 20 19:55:42.606584 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 20 19:55:42.616516 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 20 19:55:42.620573 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 20 19:55:42.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.645376 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 20 19:55:42.645603 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 20 19:55:42.657460 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 20 19:55:42.657640 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 20 19:55:42.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.732000 audit: BPF prog-id=8 op=UNLOAD Apr 20 19:55:42.733000 audit: BPF prog-id=5 op=UNLOAD Apr 20 19:55:42.735472 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 20 19:55:42.742076 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 20 19:55:42.742170 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 20 19:55:42.749996 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 20 19:55:42.757316 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 20 19:55:42.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.757384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 19:55:42.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 20 19:55:42.760700 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 20 19:55:42.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.760736 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 20 19:55:42.763734 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 20 19:55:42.764659 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 20 19:55:42.772712 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 19:55:42.792041 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 20 19:55:42.794560 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 19:55:42.806563 systemd[1]: systemd-udevd.service: Consumed 1.336s CPU time. Apr 20 19:55:42.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.810036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 20 19:55:42.810281 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 20 19:55:42.816265 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 20 19:55:42.816315 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 20 19:55:42.818349 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 20 19:55:42.818390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 20 19:55:42.819026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 20 19:55:42.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.819067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 19:55:42.821179 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 20 19:55:42.821377 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 20 19:55:42.821407 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 19:55:42.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.821859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 20 19:55:42.821887 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 19:55:42.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:42.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.822777 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 20 19:55:42.822809 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 19:55:42.823241 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 20 19:55:42.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.823275 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 19:55:42.823680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 19:55:42.823707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:55:42.947892 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 20 19:55:42.948215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 20 19:55:42.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:42.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:42.955480 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 20 19:55:42.955673 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 20 19:55:42.960512 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 20 19:55:42.972522 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 20 19:55:43.005739 systemd[1]: Switching root. Apr 20 19:55:43.129087 systemd-journald[320]: Journal stopped Apr 20 19:55:46.720858 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Apr 20 19:55:46.720915 kernel: SELinux: policy capability network_peer_controls=1 Apr 20 19:55:46.721033 kernel: SELinux: policy capability open_perms=1 Apr 20 19:55:46.721066 kernel: SELinux: policy capability extended_socket_class=1 Apr 20 19:55:46.721074 kernel: SELinux: policy capability always_check_network=0 Apr 20 19:55:46.721082 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 20 19:55:46.721091 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 20 19:55:46.721099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 20 19:55:46.721111 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 20 19:55:46.721120 kernel: SELinux: policy capability userspace_initial_context=0 Apr 20 19:55:46.721131 systemd[1]: Successfully loaded SELinux policy in 82.282ms. Apr 20 19:55:46.721177 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.308ms. Apr 20 19:55:46.721205 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 19:55:46.721215 systemd[1]: Detected virtualization kvm. Apr 20 19:55:46.721224 systemd[1]: Detected architecture x86-64. 
Apr 20 19:55:46.721234 systemd[1]: Detected first boot. Apr 20 19:55:46.721243 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 19:55:46.721251 zram_generator::config[1248]: No configuration found. Apr 20 19:55:46.721262 kernel: Guest personality initialized and is inactive Apr 20 19:55:46.721270 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 20 19:55:46.721278 kernel: Initialized host personality Apr 20 19:55:46.721286 kernel: NET: Registered PF_VSOCK protocol family Apr 20 19:55:46.721296 systemd-ssh-generator[1244]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 19:55:46.721308 (sd-exec-[1229]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 19:55:46.721318 systemd[1]: Applying preset policy. Apr 20 19:55:46.721328 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. Apr 20 19:55:46.721337 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'. Apr 20 19:55:46.721347 systemd[1]: Populated /etc with preset unit settings. Apr 20 19:55:46.721357 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 19:55:46.721365 kernel: kauditd_printk_skb: 38 callbacks suppressed Apr 20 19:55:46.721373 kernel: audit: type=1334 audit(1776714945.877:87): prog-id=10 op=LOAD Apr 20 19:55:46.721381 kernel: audit: type=1334 audit(1776714945.877:88): prog-id=2 op=UNLOAD Apr 20 19:55:46.721389 kernel: audit: type=1334 audit(1776714945.877:89): prog-id=11 op=LOAD Apr 20 19:55:46.721408 kernel: audit: type=1334 audit(1776714945.877:90): prog-id=12 op=LOAD Apr 20 19:55:46.721416 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Apr 20 19:55:46.721430 kernel: audit: type=1334 audit(1776714945.877:91): prog-id=3 op=UNLOAD Apr 20 19:55:46.721438 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 20 19:55:46.721446 kernel: audit: type=1334 audit(1776714945.877:92): prog-id=4 op=UNLOAD Apr 20 19:55:46.721455 kernel: audit: type=1131 audit(1776714945.881:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.721463 kernel: audit: type=1334 audit(1776714945.898:94): prog-id=10 op=UNLOAD Apr 20 19:55:46.721471 kernel: audit: type=1130 audit(1776714945.901:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.721481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 20 19:55:46.721489 kernel: audit: type=1131 audit(1776714945.901:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.721498 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 20 19:55:46.721506 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 20 19:55:46.721514 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 20 19:55:46.721523 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 20 19:55:46.721534 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 20 19:55:46.721543 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Apr 20 19:55:46.721552 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 20 19:55:46.721561 systemd[1]: Created slice user.slice - User and Session Slice. Apr 20 19:55:46.721569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 19:55:46.721578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 19:55:46.721587 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 20 19:55:46.721694 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 20 19:55:46.721706 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 20 19:55:46.721715 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 19:55:46.721724 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 20 19:55:46.721732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 19:55:46.721741 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 19:55:46.721749 systemd[1]: Reached target imports.target - Image Downloads. Apr 20 19:55:46.721759 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 20 19:55:46.721767 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 20 19:55:46.721776 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 20 19:55:46.721786 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 20 19:55:46.721794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 19:55:46.721803 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Apr 20 19:55:46.721812 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. Apr 20 19:55:46.721821 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 20 19:55:46.721830 systemd[1]: Reached target slices.target - Slice Units. Apr 20 19:55:46.721838 systemd[1]: Reached target swap.target - Swaps. Apr 20 19:55:46.721846 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 20 19:55:46.721855 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 19:55:46.721863 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 20 19:55:46.721871 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 20 19:55:46.721892 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Apr 20 19:55:46.721900 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 19:55:46.721908 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 20 19:55:46.721917 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Apr 20 19:55:46.721925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 19:55:46.721951 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 20 19:55:46.721966 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 20 19:55:46.721977 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Apr 20 19:55:46.721986 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Apr 20 19:55:46.721995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 19:55:46.722007 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. 
Apr 20 19:55:46.722015 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 20 19:55:46.722025 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 20 19:55:46.722033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 20 19:55:46.722042 systemd[1]: Mounting media.mount - External Media Directory... Apr 20 19:55:46.722050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 19:55:46.722058 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 20 19:55:46.722066 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 20 19:55:46.722075 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Apr 20 19:55:46.722085 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 20 19:55:46.722100 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 20 19:55:46.722489 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Apr 20 19:55:46.722508 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 20 19:55:46.722523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 20 19:55:46.722541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 19:55:46.722555 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 20 19:55:46.722570 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). Apr 20 19:55:46.722583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 20 19:55:46.722597 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore). Apr 20 19:55:46.722611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 20 19:55:46.722626 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Apr 20 19:55:46.722642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 20 19:55:46.722655 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 20 19:55:46.722669 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 20 19:55:46.722686 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 20 19:55:46.722700 systemd[1]: Stopped systemd-fsck-usr.service. Apr 20 19:55:46.722716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 20 19:55:46.722732 kernel: fuse: init (API version 7.41) Apr 20 19:55:46.722749 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 19:55:46.722763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 19:55:46.722798 kernel: ACPI: bus type drm_connector registered Apr 20 19:55:46.722807 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 19:55:46.722818 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 20 19:55:46.722827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 20 19:55:46.722835 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 20 19:55:46.722845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 19:55:46.722898 systemd-journald[1320]: Collecting audit messages is enabled. Apr 20 19:55:46.722925 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 19:55:46.722957 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 20 19:55:46.722972 systemd-journald[1320]: Journal started Apr 20 19:55:46.722999 systemd-journald[1320]: Runtime Journal (/run/log/journal/2715a5e161b3425ca1d87e0a256c33b3) is 6M, max 48M, 42M free. Apr 20 19:55:46.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:46.615000 audit: BPF prog-id=12 op=UNLOAD Apr 20 19:55:46.615000 audit: BPF prog-id=11 op=UNLOAD Apr 20 19:55:46.615000 audit: BPF prog-id=13 op=LOAD Apr 20 19:55:46.616000 audit: BPF prog-id=14 op=LOAD Apr 20 19:55:46.616000 audit: BPF prog-id=15 op=LOAD Apr 20 19:55:46.707000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 20 19:55:46.707000 audit[1320]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcef08f100 a2=4000 a3=0 items=0 ppid=1 pid=1320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:55:46.707000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 20 19:55:45.857382 systemd[1]: Queued start job for default target multi-user.target. Apr 20 19:55:46.728639 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 20 19:55:45.879637 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 20 19:55:46.729713 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 19:55:45.880641 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 20 19:55:46.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.736732 systemd[1]: Mounted media.mount - External Media Directory. Apr 20 19:55:46.740111 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 20 19:55:46.743844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 20 19:55:46.747121 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Apr 20 19:55:46.752114 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 20 19:55:46.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.757510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 19:55:46.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.761568 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 20 19:55:46.762355 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 20 19:55:46.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.784768 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 20 19:55:46.784993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 20 19:55:46.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:46.788575 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 20 19:55:46.789459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 20 19:55:46.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.792738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 19:55:46.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.797026 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 19:55:46.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.803341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 20 19:55:46.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.808405 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Apr 20 19:55:46.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.826835 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 19:55:46.829691 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 20 19:55:46.833396 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 20 19:55:46.836521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 20 19:55:46.839667 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 20 19:55:46.839846 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 19:55:46.843228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 20 19:55:46.846455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 20 19:55:46.848342 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Apr 20 19:55:46.862166 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 20 19:55:46.898307 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 20 19:55:46.901141 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 20 19:55:46.906323 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 20 19:55:46.910655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 20 19:55:46.921982 systemd-journald[1320]: Time spent on flushing to /var/log/journal/2715a5e161b3425ca1d87e0a256c33b3 is 55.809ms for 1285 entries. Apr 20 19:55:46.921982 systemd-journald[1320]: System Journal (/var/log/journal/2715a5e161b3425ca1d87e0a256c33b3) is 8M, max 163.5M, 155.5M free. Apr 20 19:55:46.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:46.919433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 19:55:47.033861 systemd-journald[1320]: Received client request to flush runtime journal. Apr 20 19:55:46.937626 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 20 19:55:47.035603 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:55:46.967645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 19:55:47.036834 kernel: loop4: p1 p2 p3 Apr 20 19:55:46.970732 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 20 19:55:46.973356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 20 19:55:46.978139 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 20 19:55:46.993475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 19:55:47.004558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 20 19:55:47.016701 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 20 19:55:47.020277 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 20 19:55:47.025866 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Apr 20 19:55:47.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.025887 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Apr 20 19:55:47.035399 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 20 19:55:47.099595 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 19:55:47.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.112916 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Apr 20 19:55:47.128844 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:47.129411 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:47.129461 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:47.131657 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 20 19:55:47.132627 systemd-confext[1368]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:55:47.133399 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:47.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.140458 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:47.171831 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 20 19:55:47.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.176000 audit: BPF prog-id=16 op=LOAD Apr 20 19:55:47.176000 audit: BPF prog-id=17 op=LOAD Apr 20 19:55:47.177000 audit: BPF prog-id=18 op=LOAD Apr 20 19:55:47.178742 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 20 19:55:47.188000 audit: BPF prog-id=19 op=LOAD Apr 20 19:55:47.189472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 19:55:47.193000 audit: BPF prog-id=20 op=LOAD Apr 20 19:55:47.197394 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 20 19:55:47.203684 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 20 19:55:47.228343 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... Apr 20 19:55:47.232000 audit: BPF prog-id=21 op=LOAD Apr 20 19:55:47.232000 audit: BPF prog-id=22 op=LOAD Apr 20 19:55:47.232000 audit: BPF prog-id=23 op=LOAD Apr 20 19:55:47.235420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 20 19:55:47.260520 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 20 19:55:47.262644 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 20 19:55:47.262835 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Apr 20 19:55:47.287536 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Apr 20 19:55:47.287545 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Apr 20 19:55:47.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.291000 audit: BPF prog-id=24 op=LOAD Apr 20 19:55:47.291000 audit: BPF prog-id=25 op=LOAD Apr 20 19:55:47.292000 audit: BPF prog-id=26 op=LOAD Apr 20 19:55:47.295621 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 20 19:55:47.301028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 19:55:47.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.343774 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 20 19:55:47.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.376025 systemd-nsresourced[1399]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 20 19:55:47.378323 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 20 19:55:47.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.816518 systemd-oomd[1391]: No swap; memory pressure usage will be degraded Apr 20 19:55:47.821823 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 20 19:55:47.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.850691 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 20 19:55:47.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.855975 systemd[1]: Reached target time-set.target - System Time Set. Apr 20 19:55:47.862632 systemd-resolved[1392]: Positive Trust Anchors: Apr 20 19:55:47.862656 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 19:55:47.862659 systemd-resolved[1392]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 19:55:47.862687 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 19:55:47.891297 systemd-resolved[1392]: Defaulting to hostname 'linux'. Apr 20 19:55:47.893549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 19:55:47.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:47.897806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:55:47.964322 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 20 19:55:51.684571 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 20 19:55:51.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:51.691562 kernel: kauditd_printk_skb: 51 callbacks suppressed Apr 20 19:55:51.691719 kernel: audit: type=1130 audit(1776714951.688:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:51.693000 audit: BPF prog-id=27 op=LOAD Apr 20 19:55:51.697848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 19:55:51.698571 kernel: audit: type=1334 audit(1776714951.693:147): prog-id=27 op=LOAD Apr 20 19:55:51.693000 audit: BPF prog-id=28 op=LOAD Apr 20 19:55:51.698688 kernel: audit: type=1334 audit(1776714951.693:148): prog-id=28 op=LOAD Apr 20 19:55:51.818229 systemd-udevd[1420]: Using default interface naming scheme 'v258'. Apr 20 19:55:52.101403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 19:55:52.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:52.114000 audit: BPF prog-id=29 op=LOAD Apr 20 19:55:52.115837 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 19:55:52.119905 kernel: audit: type=1130 audit(1776714952.109:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:52.120472 kernel: audit: type=1334 audit(1776714952.114:150): prog-id=29 op=LOAD Apr 20 19:55:52.121000 audit: BPF prog-id=7 op=UNLOAD Apr 20 19:55:52.121000 audit: BPF prog-id=6 op=UNLOAD Apr 20 19:55:52.125762 kernel: audit: type=1334 audit(1776714952.121:151): prog-id=7 op=UNLOAD Apr 20 19:55:52.125927 kernel: audit: type=1334 audit(1776714952.121:152): prog-id=6 op=UNLOAD Apr 20 19:55:52.318224 systemd-networkd[1423]: lo: Link UP Apr 20 19:55:52.318261 systemd-networkd[1423]: lo: Gained carrier Apr 20 19:55:52.324660 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 20 19:55:52.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:52.360759 kernel: audit: type=1130 audit(1776714952.351:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:52.362986 systemd[1]: Reached target network.target - Network. Apr 20 19:55:52.367919 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 20 19:55:52.374724 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 20 19:55:52.377756 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 20 19:55:52.381699 systemd-networkd[1423]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:55:52.381730 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 19:55:52.384465 systemd-networkd[1423]: eth0: Link UP Apr 20 19:55:52.385113 systemd-networkd[1423]: eth0: Gained carrier Apr 20 19:55:52.385141 systemd-networkd[1423]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:55:52.411798 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 19:55:52.414797 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Apr 20 19:55:53.551562 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 20 19:55:53.551603 systemd-timesyncd[1393]: Initial clock synchronization to Mon 2026-04-20 19:55:53.551438 UTC. 
Apr 20 19:55:53.551753 systemd-resolved[1392]: Clock change detected. Flushing caches. Apr 20 19:55:53.585086 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 20 19:55:53.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:53.597000 kernel: audit: type=1130 audit(1776714953.586:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:53.979522 kernel: mousedev: PS/2 mouse device common for all mice Apr 20 19:55:54.010673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 19:55:54.017711 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 20 19:55:54.036455 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 20 19:55:54.047540 kernel: ACPI: button: Power Button [PWRF] Apr 20 19:55:54.094789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 20 19:55:54.099242 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 20 19:55:54.104279 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 20 19:55:54.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:54.104985 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 20 19:55:54.109293 kernel: audit: type=1130 audit(1776714954.102:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:54.278780 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 19:55:54.710899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:55:54.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:54.761611 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 19:55:54.855599 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:55:54.862318 kernel: loop4: p1 p2 p3 Apr 20 19:55:55.003553 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:55.004780 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:55.004831 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:55.007096 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:55.007829 (sd-merge)[1487]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:55:55.016489 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:55.242484 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 19:55:55.246212 (sd-merge)[1487]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. 
Apr 20 19:55:55.263915 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Apr 20 19:55:55.271565 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 19:55:55.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:55.365519 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 20 19:55:55.522875 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 19:55:55.532084 kernel: loop4: p1 p2 p3 Apr 20 19:55:55.581280 systemd-networkd[1423]: eth0: Gained IPv6LL Apr 20 19:55:55.592645 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 20 19:55:55.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:55.616735 systemd[1]: Reached target network-online.target - Network is Online. Apr 20 19:55:55.645489 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:55.645876 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:55.645989 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:55.648304 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:55.654398 systemd-sysext[1496]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:55:55.686709 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:55.858958 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 19:55:55.983572 kernel: loop4: detected capacity change from 0 to 217752 Apr 20 19:55:56.233728 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 19:55:56.240552 kernel: loop4: p1 p2 p3 Apr 20 19:55:56.371668 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:56.372055 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:56.374969 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:56.378071 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:56.379762 systemd-sysext[1496]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:55:56.390631 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:56.593963 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 19:55:56.641476 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 19:55:56.641780 kernel: loop4: p1 p2 p3 Apr 20 19:55:56.764480 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:56.764874 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:56.769970 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:56.769666 (sd-merge)[1518]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:55:56.770796 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:56.788282 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:56.864034 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 19:55:56.883511 kernel: loop5: detected capacity change from 0 to 217752 Apr 20 19:55:56.952100 kernel: loop6: detected capacity change from 0 to 378016 Apr 20 19:55:56.959456 kernel: loop6: p1 p2 p3 Apr 20 19:55:57.026414 kernel: loop6: p1 p2 p3 Apr 20 19:55:57.157647 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:57.158260 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:55:57.158330 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:55:57.160413 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:55:57.161660 (sd-merge)[1518]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument Apr 20 19:55:57.169964 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:55:57.442611 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Apr 20 19:55:57.449091 (sd-merge)[1518]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 19:55:57.466247 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 20 19:55:57.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:57.515238 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 19:55:57.516779 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 19:55:57.515492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 19:55:57.630486 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Apr 20 19:55:57.632246 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 19:55:57.636146 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 19:55:57.643708 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Apr 20 19:55:57.643772 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Apr 20 19:55:57.687010 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 19:55:57.688590 systemd-tmpfiles[1535]: Skipping /boot Apr 20 19:55:57.734610 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 19:55:57.734677 systemd-tmpfiles[1535]: Skipping /boot Apr 20 19:55:57.863763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 19:55:57.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:57.872926 kernel: kauditd_printk_skb: 4 callbacks suppressed Apr 20 19:55:57.873507 kernel: audit: type=1130 audit(1776714957.869:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:57.967976 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 20 19:55:57.980725 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 20 19:55:57.989880 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 20 19:55:58.007627 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Apr 20 19:55:58.017271 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 20 19:55:58.037000 audit[1546]: AUDIT1127 pid=1546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 19:55:58.047570 kernel: audit: type=1127 audit(1776714958.037:161): pid=1546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 19:55:58.071057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 20 19:55:58.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:58.086804 kernel: audit: type=1130 audit(1776714958.075:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:58.191672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 20 19:55:58.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:58.199743 augenrules[1566]: No rules Apr 20 19:55:58.197000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 19:55:58.202891 systemd[1]: audit-rules.service: Deactivated successfully. Apr 20 19:55:58.203313 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Apr 20 19:55:58.205409 kernel: audit: type=1130 audit(1776714958.194:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:58.197000 audit[1566]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd56388110 a2=420 a3=0 items=0 ppid=1541 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:55:58.197000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 19:55:58.206060 kernel: audit: type=1305 audit(1776714958.197:164): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 19:55:58.206122 kernel: audit: type=1300 audit(1776714958.197:164): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd56388110 a2=420 a3=0 items=0 ppid=1541 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:55:58.206606 kernel: audit: type=1327 audit(1776714958.197:164): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 19:55:58.286797 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 20 19:55:58.295053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 20 19:56:01.327877 ldconfig[1543]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Apr 20 19:56:01.386202 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 20 19:56:01.470048 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 20 19:56:01.787841 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 20 19:56:01.834237 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 19:56:01.844063 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 20 19:56:01.852983 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 20 19:56:01.862133 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 20 19:56:01.867043 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 20 19:56:01.875088 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 20 19:56:01.890139 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 20 19:56:01.916844 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 20 19:56:01.924769 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 20 19:56:01.930037 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 20 19:56:01.931159 systemd[1]: Reached target paths.target - Path Units. Apr 20 19:56:01.936804 systemd[1]: Reached target timers.target - Timer Units. Apr 20 19:56:01.943039 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 20 19:56:01.983281 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 20 19:56:02.017966 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Apr 20 19:56:02.075929 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 20 19:56:02.086643 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 20 19:56:02.091864 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket. Apr 20 19:56:02.113782 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket. Apr 20 19:56:02.129129 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 20 19:56:02.158719 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 19:56:02.163094 systemd[1]: Reached target basic.target - Basic System. Apr 20 19:56:02.169960 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 20 19:56:02.173079 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 20 19:56:02.232675 systemd[1]: Starting containerd.service - containerd container runtime... Apr 20 19:56:02.245913 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 20 19:56:02.289751 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 20 19:56:02.331118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 20 19:56:02.345993 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 20 19:56:02.369299 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 20 19:56:02.371939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 20 19:56:02.376482 jq[1583]: false Apr 20 19:56:02.376666 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 20 19:56:02.449909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 19:56:02.470416 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 20 19:56:02.484810 extend-filesystems[1584]: Found /dev/vda6 Apr 20 19:56:02.486008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 20 19:56:02.501531 extend-filesystems[1584]: Found /dev/vda9 Apr 20 19:56:02.501115 oslogin_cache_refresh[1585]: Refreshing passwd entry cache Apr 20 19:56:02.513767 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing passwd entry cache Apr 20 19:56:02.505920 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 20 19:56:02.518715 extend-filesystems[1584]: Checking size of /dev/vda9 Apr 20 19:56:02.531261 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 20 19:56:02.535939 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 20 19:56:02.537138 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting users, quitting Apr 20 19:56:02.537219 oslogin_cache_refresh[1585]: Failure getting users, quitting Apr 20 19:56:02.538726 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 20 19:56:02.538726 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing group entry cache Apr 20 19:56:02.537504 oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 20 19:56:02.537560 oslogin_cache_refresh[1585]: Refreshing group entry cache Apr 20 19:56:02.558685 extend-filesystems[1584]: Resized partition /dev/vda9 Apr 20 19:56:02.566410 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting groups, quitting Apr 20 19:56:02.566410 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Apr 20 19:56:02.563729 oslogin_cache_refresh[1585]: Failure getting groups, quitting Apr 20 19:56:02.563797 oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 19:56:02.567165 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 20 19:56:02.574092 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025) Apr 20 19:56:02.570151 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 20 19:56:02.584997 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 20 19:56:02.578975 systemd[1]: Starting update-engine.service - Update Engine... Apr 20 19:56:02.595280 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 20 19:56:02.679443 jq[1613]: true Apr 20 19:56:02.683623 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 20 19:56:02.695524 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 20 19:56:02.695930 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 20 19:56:02.700423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 20 19:56:02.701020 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 20 19:56:02.713023 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 20 19:56:02.721598 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 20 19:56:02.721598 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 20 19:56:02.721598 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 20 19:56:02.770945 extend-filesystems[1584]: Resized filesystem in /dev/vda9 Apr 20 19:56:02.724709 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 20 19:56:02.724958 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 20 19:56:02.731608 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 20 19:56:02.732039 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 20 19:56:02.738671 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 20 19:56:02.770005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 20 19:56:02.775867 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 20 19:56:02.898549 jq[1639]: true Apr 20 19:56:02.913177 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 20 19:56:02.913571 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 20 19:56:02.918811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 20 19:56:03.029405 update_engine[1610]: I20260420 19:56:03.022657 1610 main.cc:92] Flatcar Update Engine starting Apr 20 19:56:03.112775 tar[1638]: linux-amd64/LICENSE Apr 20 19:56:03.112775 tar[1638]: linux-amd64/helm Apr 20 19:56:03.315749 sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 20 19:56:03.327432 bash[1673]: Updated "/home/core/.ssh/authorized_keys" Apr 20 19:56:03.343641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 20 19:56:03.352674 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 20 19:56:03.377692 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button) Apr 20 19:56:03.377733 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 20 19:56:03.381667 systemd-logind[1609]: New seat seat0. Apr 20 19:56:03.384947 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 20 19:56:03.401216 systemd[1]: Started systemd-logind.service - User Login Management. Apr 20 19:56:03.414549 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 20 19:56:03.493633 systemd[1]: issuegen.service: Deactivated successfully. Apr 20 19:56:03.515427 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 20 19:56:03.533733 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 20 19:56:03.937709 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 20 19:56:03.946746 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 20 19:56:03.957793 dbus-daemon[1581]: [system] SELinux support is enabled Apr 20 19:56:03.965428 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 20 19:56:03.970564 systemd[1]: Reached target getty.target - Login Prompts. Apr 20 19:56:03.975092 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 20 19:56:04.344027 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 20 19:56:04.348263 update_engine[1610]: I20260420 19:56:04.346987 1610 update_check_scheduler.cc:74] Next update check in 2m18s Apr 20 19:56:04.525310 systemd[1]: Started update-engine.service - Update Engine. Apr 20 19:56:04.552655 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 20 19:56:04.582301 systemd[1]: Started sshd@0-1-10.0.0.6:22-10.0.0.1:36306.service - OpenSSH per-connection server daemon (10.0.0.1:36306). Apr 20 19:56:04.592395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 20 19:56:04.637093 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 20 19:56:04.643796 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 20 19:56:04.646565 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 20 19:56:04.685561 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 20 19:56:05.208474 locksmithd[1710]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 20 19:56:05.665572 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 36306 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:05.681482 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:05.787266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 20 19:56:05.854665 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 20 19:56:06.059074 systemd-logind[1609]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:06.154705 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 20 19:56:06.180115 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 20 19:56:06.652619 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:06.677480 systemd-logind[1609]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. 
Apr 20 19:56:07.251920 containerd[1640]: time="2026-04-20T19:56:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 20 19:56:07.333719 containerd[1640]: time="2026-04-20T19:56:07.323640359Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1 Apr 20 19:56:07.591062 containerd[1640]: time="2026-04-20T19:56:07.571815641Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=5.745504ms Apr 20 19:56:07.591062 containerd[1640]: time="2026-04-20T19:56:07.584182572Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 20 19:56:07.603385 containerd[1640]: time="2026-04-20T19:56:07.601449365Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 20 19:56:07.615497 containerd[1640]: time="2026-04-20T19:56:07.609275746Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 20 19:56:07.622163 containerd[1640]: time="2026-04-20T19:56:07.621897682Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 20 19:56:07.622708 containerd[1640]: time="2026-04-20T19:56:07.622686066Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1 Apr 20 19:56:07.622845 containerd[1640]: time="2026-04-20T19:56:07.622831106Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 19:56:07.623091 containerd[1640]: time="2026-04-20T19:56:07.623073080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 
19:56:07.623141 containerd[1640]: time="2026-04-20T19:56:07.623131988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 19:56:07.623803 containerd[1640]: time="2026-04-20T19:56:07.623774265Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 19:56:07.623878 containerd[1640]: time="2026-04-20T19:56:07.623867156Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 19:56:07.623924 containerd[1640]: time="2026-04-20T19:56:07.623914131Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 19:56:07.623980 containerd[1640]: time="2026-04-20T19:56:07.623970525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Apr 20 19:56:07.624376 containerd[1640]: time="2026-04-20T19:56:07.624320831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 20 19:56:07.624751 containerd[1640]: time="2026-04-20T19:56:07.624732321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 20 19:56:07.625167 containerd[1640]: time="2026-04-20T19:56:07.625146406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 19:56:07.625296 containerd[1640]: time="2026-04-20T19:56:07.625279781Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 19:56:07.626608 
containerd[1640]: time="2026-04-20T19:56:07.626042542Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 20 19:56:07.654812 containerd[1640]: time="2026-04-20T19:56:07.643864850Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 20 19:56:07.679733 systemd[1721]: Queued start job for default target default.target. Apr 20 19:56:07.694753 systemd[1721]: Created slice app.slice - User Application Slice. Apr 20 19:56:07.694849 systemd[1721]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 19:56:07.694868 systemd[1721]: Reached target machines.target - Virtual Machines and Containers. Apr 20 19:56:07.696835 systemd[1721]: Reached target paths.target - Paths. Apr 20 19:56:07.696896 systemd[1721]: Reached target timers.target - Timers. Apr 20 19:56:07.749685 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 19:56:07.751713 systemd[1721]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 19:56:07.752661 systemd[1721]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 19:56:07.801030 containerd[1640]: time="2026-04-20T19:56:07.800654696Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 20 19:56:07.803875 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 19:56:07.806917 systemd[1721]: Reached target sockets.target - Sockets. Apr 20 19:56:07.817451 containerd[1640]: time="2026-04-20T19:56:07.813734147Z" level=info msg="metadata content store policy set" policy=shared Apr 20 19:56:07.825401 systemd[1721]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 20 19:56:07.825561 systemd[1721]: Reached target basic.target - Basic System. 
Apr 20 19:56:07.825617 systemd[1721]: Reached target default.target - Main User Target. Apr 20 19:56:07.825946 systemd[1721]: Startup finished in 1.119s. Apr 20 19:56:07.827108 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 19:56:07.880839 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 19:56:07.964466 containerd[1640]: time="2026-04-20T19:56:07.963958931Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 20 19:56:07.987764 containerd[1640]: time="2026-04-20T19:56:07.985511062Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 20 19:56:07.987764 containerd[1640]: time="2026-04-20T19:56:07.987284922Z" level=info msg="built-in NRI default validator is disabled" Apr 20 19:56:07.987764 containerd[1640]: time="2026-04-20T19:56:07.987401323Z" level=info msg="runtime interface created" Apr 20 19:56:07.987764 containerd[1640]: time="2026-04-20T19:56:07.987410379Z" level=info msg="created NRI interface" Apr 20 19:56:07.991547 containerd[1640]: time="2026-04-20T19:56:07.988725850Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 19:56:08.015566 containerd[1640]: time="2026-04-20T19:56:08.013902161Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 19:56:08.015566 containerd[1640]: time="2026-04-20T19:56:08.014603152Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 20 19:56:08.015566 containerd[1640]: time="2026-04-20T19:56:08.014770981Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 20 19:56:08.015566 containerd[1640]: time="2026-04-20T19:56:08.014808861Z" level=info msg="loading 
plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1 Apr 20 19:56:08.015566 containerd[1640]: time="2026-04-20T19:56:08.015226919Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 20 19:56:08.431037 containerd[1640]: time="2026-04-20T19:56:08.395718594Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 20 19:56:08.431037 containerd[1640]: time="2026-04-20T19:56:08.415801800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 20 19:56:08.473037 containerd[1640]: time="2026-04-20T19:56:08.470088890Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 20 19:56:08.473037 containerd[1640]: time="2026-04-20T19:56:08.472866728Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 20 19:56:08.473277 containerd[1640]: time="2026-04-20T19:56:08.473055024Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 20 19:56:08.473277 containerd[1640]: time="2026-04-20T19:56:08.473072770Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 20 19:56:08.473277 containerd[1640]: time="2026-04-20T19:56:08.473119219Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 20 19:56:08.473277 containerd[1640]: time="2026-04-20T19:56:08.473149540Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 20 19:56:08.482081 containerd[1640]: time="2026-04-20T19:56:08.478170237Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 20 19:56:08.508077 containerd[1640]: time="2026-04-20T19:56:08.505770056Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 20 19:56:08.510576 systemd[1]: Started sshd@1-4097-10.0.0.6:22-10.0.0.1:48856.service - OpenSSH per-connection server daemon (10.0.0.1:48856). Apr 20 19:56:08.512531 tar[1638]: linux-amd64/README.md Apr 20 19:56:08.522985 containerd[1640]: time="2026-04-20T19:56:08.516063920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 20 19:56:08.524877 containerd[1640]: time="2026-04-20T19:56:08.522956637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 20 19:56:08.524877 containerd[1640]: time="2026-04-20T19:56:08.523089311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 20 19:56:08.524877 containerd[1640]: time="2026-04-20T19:56:08.523135962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 20 19:56:08.537841 containerd[1640]: time="2026-04-20T19:56:08.535137335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 20 19:56:08.540944 containerd[1640]: time="2026-04-20T19:56:08.537967928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 20 19:56:08.540944 containerd[1640]: time="2026-04-20T19:56:08.540012927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1 Apr 20 19:56:08.540944 containerd[1640]: time="2026-04-20T19:56:08.540162840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 20 19:56:08.540944 containerd[1640]: time="2026-04-20T19:56:08.540180796Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 20 19:56:08.569681 containerd[1640]: time="2026-04-20T19:56:08.567165515Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 20 19:56:08.581977 containerd[1640]: time="2026-04-20T19:56:08.581838662Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 20 19:56:08.591287 containerd[1640]: time="2026-04-20T19:56:08.590964936Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 20 19:56:08.591287 containerd[1640]: time="2026-04-20T19:56:08.591287683Z" level=info msg="Start snapshots syncer" Apr 20 19:56:08.596585 containerd[1640]: time="2026-04-20T19:56:08.594387039Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 20 19:56:08.628655 containerd[1640]: time="2026-04-20T19:56:08.625727740Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbContro
ller\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 20 19:56:08.633955 containerd[1640]: time="2026-04-20T19:56:08.632896731Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 20 19:56:08.646921 containerd[1640]: time="2026-04-20T19:56:08.646556125Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 20 19:56:08.666546 containerd[1640]: time="2026-04-20T19:56:08.664316333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 20 19:56:08.667266 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 20 19:56:08.677571 containerd[1640]: time="2026-04-20T19:56:08.675055854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 20 19:56:08.681028 containerd[1640]: time="2026-04-20T19:56:08.680817787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 20 19:56:08.693506 containerd[1640]: time="2026-04-20T19:56:08.684247775Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 20 19:56:08.693506 containerd[1640]: time="2026-04-20T19:56:08.692792581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 20 19:56:08.693506 containerd[1640]: time="2026-04-20T19:56:08.693015574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 20 19:56:08.693506 containerd[1640]: time="2026-04-20T19:56:08.693061157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 20 19:56:08.693506 containerd[1640]: time="2026-04-20T19:56:08.693139503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 20 19:56:08.693506 containerd[1640]: time="2026-04-20T19:56:08.693150112Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 20 19:56:08.740156 containerd[1640]: time="2026-04-20T19:56:08.738266566Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 19:56:08.740297 containerd[1640]: time="2026-04-20T19:56:08.740267910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 19:56:08.740321 containerd[1640]: time="2026-04-20T19:56:08.740297816Z" level=info msg="loading plugin" 
id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 19:56:08.740321 containerd[1640]: time="2026-04-20T19:56:08.740311531Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 19:56:08.740864 containerd[1640]: time="2026-04-20T19:56:08.740320633Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 20 19:56:08.744624 containerd[1640]: time="2026-04-20T19:56:08.741692198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 20 19:56:08.747136 containerd[1640]: time="2026-04-20T19:56:08.745747768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 20 19:56:08.749179 containerd[1640]: time="2026-04-20T19:56:08.748881598Z" level=info msg="Connect containerd service" Apr 20 19:56:08.754102 containerd[1640]: time="2026-04-20T19:56:08.752922894Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 20 19:56:08.833531 containerd[1640]: time="2026-04-20T19:56:08.833268193Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 19:56:08.963849 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 48856 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:08.975285 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:09.024009 systemd-logind[1609]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:09.033081 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 20 19:56:09.381135 sshd[1756]: Connection closed by 10.0.0.1 port 48856 Apr 20 19:56:09.382996 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:09.459949 systemd[1]: sshd@1-4097-10.0.0.6:22-10.0.0.1:48856.service: Deactivated successfully. Apr 20 19:56:09.478632 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 19:56:09.484729 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit. Apr 20 19:56:09.532834 systemd[1]: Started sshd@2-8193-10.0.0.6:22-10.0.0.1:48866.service - OpenSSH per-connection server daemon (10.0.0.1:48866). Apr 20 19:56:09.599727 systemd-logind[1609]: Removed session 3. Apr 20 19:56:09.838088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:56:09.872487 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:56:09.956804 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 48866 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:09.960547 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:09.980117 containerd[1640]: time="2026-04-20T19:56:09.975261223Z" level=info msg="Start subscribing containerd event" Apr 20 19:56:09.985679 containerd[1640]: time="2026-04-20T19:56:09.982964606Z" level=info msg="Start recovering state" Apr 20 19:56:09.991002 containerd[1640]: time="2026-04-20T19:56:09.990918005Z" level=info msg="Start event monitor" Apr 20 19:56:09.993123 containerd[1640]: time="2026-04-20T19:56:09.992565166Z" level=info msg="Start cni network conf syncer for default" Apr 20 19:56:09.994168 containerd[1640]: time="2026-04-20T19:56:09.994071718Z" level=info msg="Start streaming server" Apr 20 19:56:09.994431 containerd[1640]: time="2026-04-20T19:56:09.994405293Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 19:56:09.994492 
containerd[1640]: time="2026-04-20T19:56:09.994485457Z" level=info msg="runtime interface starting up..." Apr 20 19:56:09.994545 containerd[1640]: time="2026-04-20T19:56:09.994539661Z" level=info msg="starting plugins..." Apr 20 19:56:09.994633 containerd[1640]: time="2026-04-20T19:56:09.994626271Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 19:56:10.021303 containerd[1640]: time="2026-04-20T19:56:10.021147659Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 19:56:10.025014 containerd[1640]: time="2026-04-20T19:56:10.023151415Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 19:56:10.029636 containerd[1640]: time="2026-04-20T19:56:10.028706785Z" level=info msg="containerd successfully booted in 2.872995s" Apr 20 19:56:10.035029 systemd-logind[1609]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:10.038838 systemd[1]: Started containerd.service - containerd container runtime. Apr 20 19:56:10.058804 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 20 19:56:10.065693 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 20 19:56:10.066014 systemd[1]: Startup finished in 4.675s (kernel) + 11.111s (initrd) + 25.551s (userspace) = 41.338s. Apr 20 19:56:10.238160 sshd[1781]: Connection closed by 10.0.0.1 port 48866 Apr 20 19:56:10.243200 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:10.281679 systemd[1]: sshd@2-8193-10.0.0.6:22-10.0.0.1:48866.service: Deactivated successfully. Apr 20 19:56:10.358300 systemd[1]: session-4.scope: Deactivated successfully. Apr 20 19:56:10.391914 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit. Apr 20 19:56:10.436785 systemd-logind[1609]: Removed session 4. 
Apr 20 19:56:14.156905 kubelet[1776]: E0420 19:56:14.156508 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:56:14.177080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:56:14.189883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:56:14.222809 systemd[1]: kubelet.service: Consumed 7.212s CPU time, 257.6M memory peak. Apr 20 19:56:20.396881 systemd[1]: Started sshd@3-12289-10.0.0.6:22-10.0.0.1:52608.service - OpenSSH per-connection server daemon (10.0.0.1:52608). Apr 20 19:56:20.729556 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 52608 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:20.736073 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:20.784492 systemd-logind[1609]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:20.835560 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 19:56:20.868485 sshd[1799]: Connection closed by 10.0.0.1 port 52608 Apr 20 19:56:20.870952 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:20.941616 systemd[1]: sshd@3-12289-10.0.0.6:22-10.0.0.1:52608.service: Deactivated successfully. Apr 20 19:56:20.964088 systemd[1]: session-5.scope: Deactivated successfully. Apr 20 19:56:21.008962 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit. Apr 20 19:56:21.027150 systemd[1]: Started sshd@4-8194-10.0.0.6:22-10.0.0.1:52610.service - OpenSSH per-connection server daemon (10.0.0.1:52610). Apr 20 19:56:21.046659 systemd-logind[1609]: Removed session 5. 
Apr 20 19:56:21.387440 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 52610 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:21.393965 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:21.435953 systemd-logind[1609]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:21.470239 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 20 19:56:21.595762 sshd[1809]: Connection closed by 10.0.0.1 port 52610 Apr 20 19:56:21.596226 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:21.654174 systemd[1]: sshd@4-8194-10.0.0.6:22-10.0.0.1:52610.service: Deactivated successfully. Apr 20 19:56:21.667523 systemd[1]: session-6.scope: Deactivated successfully. Apr 20 19:56:21.690650 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit. Apr 20 19:56:21.750777 systemd[1]: Started sshd@5-12290-10.0.0.6:22-10.0.0.1:52622.service - OpenSSH per-connection server daemon (10.0.0.1:52622). Apr 20 19:56:21.777249 systemd-logind[1609]: Removed session 6. Apr 20 19:56:22.180283 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 52622 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:22.192948 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:22.303874 systemd-logind[1609]: New session '7' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:22.340234 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 20 19:56:22.454787 sshd[1819]: Connection closed by 10.0.0.1 port 52622 Apr 20 19:56:22.455558 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:22.581248 systemd[1]: sshd@5-12290-10.0.0.6:22-10.0.0.1:52622.service: Deactivated successfully. Apr 20 19:56:22.625770 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 20 19:56:22.650061 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit. Apr 20 19:56:22.790464 systemd[1]: Started sshd@6-8195-10.0.0.6:22-10.0.0.1:52624.service - OpenSSH per-connection server daemon (10.0.0.1:52624). Apr 20 19:56:22.807921 systemd-logind[1609]: Removed session 7. Apr 20 19:56:23.195279 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 52624 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 19:56:23.236963 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:23.321898 systemd-logind[1609]: New session '8' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:23.407099 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 20 19:56:23.622924 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 19:56:23.624047 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 19:56:24.240603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 19:56:24.278258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:56:26.834623 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1022369011 wd_nsec: 1022369105 Apr 20 19:56:26.993669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:56:27.009287 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:56:29.987830 kubelet[1857]: E0420 19:56:29.987626 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:56:30.043092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:56:30.043985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:56:30.052018 systemd[1]: kubelet.service: Consumed 3.390s CPU time, 110.6M memory peak. Apr 20 19:56:30.726880 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 20 19:56:30.755825 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 19:56:34.382572 dockerd[1868]: time="2026-04-20T19:56:34.379593481Z" level=info msg="Starting up" Apr 20 19:56:34.483951 dockerd[1868]: time="2026-04-20T19:56:34.483732915Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 19:56:34.705611 dockerd[1868]: time="2026-04-20T19:56:34.705184002Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 19:56:35.311275 systemd[1]: var-lib-docker-metacopy\x2dcheck2071813309-merged.mount: Deactivated successfully. Apr 20 19:56:35.525683 dockerd[1868]: time="2026-04-20T19:56:35.524982771Z" level=info msg="Loading containers: start." 
Apr 20 19:56:35.593018 kernel: Initializing XFRM netlink socket Apr 20 19:56:39.742623 systemd-networkd[1423]: docker0: Link UP Apr 20 19:56:39.889192 dockerd[1868]: time="2026-04-20T19:56:39.887914127Z" level=info msg="Loading containers: done." Apr 20 19:56:40.246559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 20 19:56:40.315525 dockerd[1868]: time="2026-04-20T19:56:40.296119346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 19:56:40.319183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:56:40.328497 dockerd[1868]: time="2026-04-20T19:56:40.326778214Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 19:56:40.348277 dockerd[1868]: time="2026-04-20T19:56:40.346026127Z" level=info msg="Initializing buildkit" Apr 20 19:56:40.520699 dockerd[1868]: time="2026-04-20T19:56:40.519860691Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 19:56:40.520699 dockerd[1868]: time="2026-04-20T19:56:40.519971677Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 19:56:40.916369 dockerd[1868]: time="2026-04-20T19:56:40.915933372Z" level=info msg="Completed buildkit initialization" Apr 20 19:56:41.237550 dockerd[1868]: time="2026-04-20T19:56:41.237395554Z" level=info msg="Daemon has completed initialization" Apr 20 19:56:41.250854 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 20 19:56:41.263643 dockerd[1868]: time="2026-04-20T19:56:41.238894889Z" level=info msg="API listen on /run/docker.sock" Apr 20 19:56:41.753904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:56:41.776103 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:56:45.662461 kubelet[2087]: E0420 19:56:45.661943 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:56:45.674180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:56:45.674389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:56:45.679148 systemd[1]: kubelet.service: Consumed 3.494s CPU time, 111.1M memory peak. Apr 20 19:56:48.077644 containerd[1640]: time="2026-04-20T19:56:48.076413409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 20 19:56:49.744555 update_engine[1610]: I20260420 19:56:49.743083 1610 update_attempter.cc:509] Updating boot flags... Apr 20 19:56:55.756034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 20 19:56:55.788931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:56:57.259453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:56:57.287944 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:56:57.577448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817940727.mount: Deactivated successfully. 
Apr 20 19:56:59.095041 kubelet[2137]: E0420 19:56:59.094879 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:56:59.135597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:56:59.135770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:56:59.138466 systemd[1]: kubelet.service: Consumed 2.384s CPU time, 112M memory peak. Apr 20 19:57:09.293870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 19:57:09.299979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:57:10.567166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:57:10.590329 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:57:11.846288 kubelet[2201]: E0420 19:57:11.846062 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:57:11.851148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:57:11.851421 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:57:11.855016 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 110.5M memory peak. Apr 20 19:57:22.059821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Apr 20 19:57:22.114949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:57:22.349413 containerd[1640]: time="2026-04-20T19:57:22.348432121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:57:22.366436 containerd[1640]: time="2026-04-20T19:57:22.365275153Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27567885" Apr 20 19:57:22.622992 containerd[1640]: time="2026-04-20T19:57:22.621360776Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:57:23.242069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:57:23.254611 containerd[1640]: time="2026-04-20T19:57:23.254160160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:57:23.265686 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:57:23.440600 containerd[1640]: time="2026-04-20T19:57:23.438626857Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 35.348767159s" Apr 20 19:57:23.447724 containerd[1640]: time="2026-04-20T19:57:23.442984603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference 
\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 20 19:57:23.552821 containerd[1640]: time="2026-04-20T19:57:23.551842969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 20 19:57:25.320096 kubelet[2221]: E0420 19:57:25.319863 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:57:25.349482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:57:25.350049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:57:25.396595 systemd[1]: kubelet.service: Consumed 2.240s CPU time, 110.9M memory peak. Apr 20 19:57:35.551940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 20 19:57:35.577250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:57:36.968822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:57:37.022834 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:57:41.185698 containerd[1640]: time="2026-04-20T19:57:41.183563364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:57:41.253513 containerd[1640]: time="2026-04-20T19:57:41.250041396Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21447620" Apr 20 19:57:42.141074 containerd[1640]: time="2026-04-20T19:57:42.094114084Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:57:42.170191 kubelet[2241]: E0420 19:57:42.164982 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:57:42.179711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:57:42.182909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:57:42.193816 systemd[1]: kubelet.service: Consumed 4.370s CPU time, 110.7M memory peak. 
Apr 20 19:57:43.785793 containerd[1640]: time="2026-04-20T19:57:43.784303411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:57:44.315322 containerd[1640]: time="2026-04-20T19:57:44.313180277Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 20.758976242s" Apr 20 19:57:44.318590 containerd[1640]: time="2026-04-20T19:57:44.317098024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 20 19:57:44.349896 containerd[1640]: time="2026-04-20T19:57:44.349514700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 20 19:57:52.376901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 20 19:57:52.489539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:57:53.750131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:57:53.781913 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:58:00.848690 containerd[1640]: time="2026-04-20T19:58:00.845050786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:58:00.857639 containerd[1640]: time="2026-04-20T19:58:00.856951212Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15546519" Apr 20 19:58:01.091945 containerd[1640]: time="2026-04-20T19:58:01.090239818Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:58:02.579726 containerd[1640]: time="2026-04-20T19:58:02.574089786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:58:02.977558 containerd[1640]: time="2026-04-20T19:58:02.969082006Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 18.606335014s" Apr 20 19:58:02.996560 containerd[1640]: time="2026-04-20T19:58:02.989245640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 20 19:58:03.099563 containerd[1640]: time="2026-04-20T19:58:03.098909982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 20 19:58:03.148555 
kubelet[2261]: E0420 19:58:03.145262 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:58:03.192535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:58:03.192983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:58:03.212986 systemd[1]: kubelet.service: Consumed 7.386s CPU time, 109.5M memory peak. Apr 20 19:58:13.277055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 20 19:58:13.342520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:58:15.052328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:58:15.087745 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:58:18.976778 kubelet[2283]: E0420 19:58:18.974031 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:58:18.986883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:58:18.988550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:58:18.992548 systemd[1]: kubelet.service: Consumed 3.845s CPU time, 109.8M memory peak. 
Apr 20 19:58:22.816750 update_engine[1610]: I20260420 19:58:22.813797 1610 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 20 19:58:22.816750 update_engine[1610]: I20260420 19:58:22.814445 1610 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 20 19:58:22.830395 update_engine[1610]: I20260420 19:58:22.828645 1610 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 20 19:58:22.887869 update_engine[1610]: I20260420 19:58:22.887643 1610 omaha_request_params.cc:62] Current group set to alpha Apr 20 19:58:22.888457 update_engine[1610]: I20260420 19:58:22.888324 1610 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 20 19:58:22.888457 update_engine[1610]: I20260420 19:58:22.888443 1610 update_attempter.cc:643] Scheduling an action processor start. Apr 20 19:58:22.888525 update_engine[1610]: I20260420 19:58:22.888476 1610 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:58:22.890163 update_engine[1610]: I20260420 19:58:22.888633 1610 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 20 19:58:22.890163 update_engine[1610]: I20260420 19:58:22.888702 1610 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:58:22.890163 update_engine[1610]: I20260420 19:58:22.888708 1610 omaha_request_action.cc:272] Request: Apr 20 19:58:22.890163 update_engine[1610]: I20260420 19:58:22.888715 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:58:22.948179 locksmithd[1710]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 20 19:58:22.951564 update_engine[1610]: I20260420 19:58:22.951478 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:58:23.001887 update_engine[1610]: I20260420 19:58:23.000981 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:58:23.041771 update_engine[1610]: E20260420 19:58:23.040125 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:58:23.068015 update_engine[1610]: I20260420 19:58:23.063899 1610 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 20 19:58:29.270789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 20 19:58:29.326877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:58:31.629262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:58:31.677015 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:58:33.756414 update_engine[1610]: I20260420 19:58:33.739562 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:58:33.768085 update_engine[1610]: I20260420 19:58:33.767617 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:58:33.832412 update_engine[1610]: I20260420 19:58:33.827931 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:58:33.852906 update_engine[1610]: E20260420 19:58:33.847910 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:58:33.859714 update_engine[1610]: I20260420 19:58:33.857298 1610 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 20 19:58:34.444137 kubelet[2300]: E0420 19:58:34.439629 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:58:34.460318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:58:34.461905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:58:34.467889 systemd[1]: kubelet.service: Consumed 3.415s CPU time, 110.8M memory peak. Apr 20 19:58:43.742383 update_engine[1610]: I20260420 19:58:43.740523 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:58:43.764176 update_engine[1610]: I20260420 19:58:43.760988 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:58:43.811381 update_engine[1610]: I20260420 19:58:43.808965 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:58:43.878126 update_engine[1610]: E20260420 19:58:43.873699 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:58:43.932951 update_engine[1610]: I20260420 19:58:43.895112 1610 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 20 19:58:44.533626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 20 19:58:44.606719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 19:58:46.365783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:58:46.434697 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:58:53.747901 update_engine[1610]: I20260420 19:58:53.745165 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:58:53.770852 update_engine[1610]: I20260420 19:58:53.756960 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:58:53.778435 update_engine[1610]: I20260420 19:58:53.775960 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:58:53.792687 update_engine[1610]: E20260420 19:58:53.790125 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:58:53.829953 update_engine[1610]: I20260420 19:58:53.827951 1610 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:58:53.830760 update_engine[1610]: I20260420 19:58:53.830673 1610 omaha_request_action.cc:617] Omaha request response: Apr 20 19:58:53.836994 update_engine[1610]: E20260420 19:58:53.836603 1610 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 20 19:58:53.843325 update_engine[1610]: I20260420 19:58:53.843053 1610 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 20 19:58:53.847425 update_engine[1610]: I20260420 19:58:53.846000 1610 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:58:53.850143 update_engine[1610]: I20260420 19:58:53.848882 1610 update_attempter.cc:306] Processing Done. Apr 20 19:58:53.853507 update_engine[1610]: E20260420 19:58:53.852876 1610 update_attempter.cc:619] Update failed. 
Apr 20 19:58:53.856257 update_engine[1610]: I20260420 19:58:53.856009 1610 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 20 19:58:53.857373 update_engine[1610]: I20260420 19:58:53.856855 1610 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 20 19:58:53.857373 update_engine[1610]: I20260420 19:58:53.856951 1610 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 20 19:58:53.862766 update_engine[1610]: I20260420 19:58:53.862010 1610 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:58:53.866408 update_engine[1610]: I20260420 19:58:53.865993 1610 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:58:53.869676 update_engine[1610]: I20260420 19:58:53.867647 1610 omaha_request_action.cc:272] Request: Apr 20 19:58:53.869676 update_engine[1610]: Apr 20 19:58:53.869676 update_engine[1610]: Apr 20 19:58:53.869676 update_engine[1610]: Apr 20 19:58:53.869676 update_engine[1610]: Apr 20 19:58:53.869676 update_engine[1610]: Apr 20 19:58:53.869676 update_engine[1610]: Apr 20 19:58:53.869676 update_engine[1610]: I20260420 19:58:53.868477 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:58:53.869676 update_engine[1610]: I20260420 19:58:53.868871 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:58:53.870932 locksmithd[1710]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 20 19:58:53.908264 update_engine[1610]: I20260420 19:58:53.891144 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:58:53.934174 update_engine[1610]: E20260420 19:58:53.933115 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:58:53.969209 update_engine[1610]: I20260420 19:58:53.967139 1610 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:58:53.971597 update_engine[1610]: I20260420 19:58:53.971094 1610 omaha_request_action.cc:617] Omaha request response: Apr 20 19:58:53.971597 update_engine[1610]: I20260420 19:58:53.971562 1610 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:58:53.971795 update_engine[1610]: I20260420 19:58:53.971607 1610 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:58:53.971795 update_engine[1610]: I20260420 19:58:53.971612 1610 update_attempter.cc:306] Processing Done. Apr 20 19:58:53.971795 update_engine[1610]: I20260420 19:58:53.971760 1610 update_attempter.cc:310] Error event sent. Apr 20 19:58:53.971994 update_engine[1610]: I20260420 19:58:53.971812 1610 update_check_scheduler.cc:74] Next update check in 43m56s Apr 20 19:58:53.982179 locksmithd[1710]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 20 19:58:54.026504 kubelet[2317]: E0420 19:58:54.025013 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:58:54.073049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:58:54.074849 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 19:58:54.077493 systemd[1]: kubelet.service: Consumed 6.322s CPU time, 110.2M memory peak. Apr 20 19:59:04.320328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 20 19:59:04.350709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:59:06.212869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:59:06.269869 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:59:08.695735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999962585.mount: Deactivated successfully. Apr 20 19:59:12.413038 kubelet[2334]: E0420 19:59:12.403155 2334 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:59:12.437104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:59:12.437660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:59:12.448301 systemd[1]: kubelet.service: Consumed 5.443s CPU time, 111.3M memory peak. 
Apr 20 19:59:17.162554 containerd[1640]: time="2026-04-20T19:59:17.157961019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:59:17.173473 containerd[1640]: time="2026-04-20T19:59:17.166407154Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25696203" Apr 20 19:59:17.524778 containerd[1640]: time="2026-04-20T19:59:17.518890822Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:59:18.656508 containerd[1640]: time="2026-04-20T19:59:18.652854769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:59:19.199963 containerd[1640]: time="2026-04-20T19:59:19.199251194Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1m16.097516653s" Apr 20 19:59:19.200887 containerd[1640]: time="2026-04-20T19:59:19.200179159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 20 19:59:19.268209 containerd[1640]: time="2026-04-20T19:59:19.264844104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 20 19:59:22.580076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 20 19:59:22.695263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 19:59:24.749219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:59:24.770231 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:59:28.949642 kubelet[2356]: E0420 19:59:28.940064 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:59:28.965077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:59:28.965733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:59:28.972933 systemd[1]: kubelet.service: Consumed 3.856s CPU time, 110.8M memory peak. Apr 20 19:59:32.827013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254605249.mount: Deactivated successfully. Apr 20 19:59:39.017063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 20 19:59:39.087631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:59:41.067961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:59:41.132865 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:59:44.702074 kubelet[2384]: E0420 19:59:44.694922 2384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:59:44.742806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:59:44.743032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:59:44.770247 systemd[1]: kubelet.service: Consumed 3.603s CPU time, 111M memory peak. Apr 20 19:59:55.026652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 20 19:59:55.052904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:59:57.144181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:59:57.194153 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:00:03.482786 kubelet[2402]: E0420 20:00:03.477828 2402 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:00:03.554932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:00:03.560473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:00:03.567162 systemd[1]: kubelet.service: Consumed 5.563s CPU time, 109.5M memory peak. 
Apr 20 20:00:13.799163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Apr 20 20:00:13.891013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:00:16.088681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:00:16.138181 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:00:20.754864 kubelet[2419]: E0420 20:00:20.752177 2419 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:00:20.768546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:00:20.769474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:00:20.773210 systemd[1]: kubelet.service: Consumed 4.247s CPU time, 110.5M memory peak. Apr 20 20:00:31.087880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 20 20:00:31.137571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:00:33.649910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 20:00:33.716115 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:00:44.334852 kubelet[2468]: E0420 20:00:44.323899 2468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:00:44.350439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:00:44.351833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:00:44.360110 systemd[1]: kubelet.service: Consumed 9.298s CPU time, 108.5M memory peak. Apr 20 20:00:46.488156 containerd[1640]: time="2026-04-20T20:00:46.486017401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:00:46.539099 containerd[1640]: time="2026-04-20T20:00:46.537944415Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23551130" Apr 20 20:00:46.749564 containerd[1640]: time="2026-04-20T20:00:46.748593542Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:00:47.586212 containerd[1640]: time="2026-04-20T20:00:47.583498814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:00:47.898321 containerd[1640]: time="2026-04-20T20:00:47.897920004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id 
\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1m28.628753335s" Apr 20 20:00:47.898321 containerd[1640]: time="2026-04-20T20:00:47.898266732Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 20 20:00:47.910445 containerd[1640]: time="2026-04-20T20:00:47.910137875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 20 20:00:51.688205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097109504.mount: Deactivated successfully. Apr 20 20:00:51.955109 containerd[1640]: time="2026-04-20T20:00:51.953255473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 20:00:51.966539 containerd[1640]: time="2026-04-20T20:00:51.963592206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=316649" Apr 20 20:00:52.279415 containerd[1640]: time="2026-04-20T20:00:52.277753869Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 20:00:53.474633 containerd[1640]: time="2026-04-20T20:00:53.469200346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 20:00:53.718720 containerd[1640]: time="2026-04-20T20:00:53.717161764Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 5.806661496s" Apr 20 20:00:53.726032 containerd[1640]: time="2026-04-20T20:00:53.720150244Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 20 20:00:53.827203 containerd[1640]: time="2026-04-20T20:00:53.826479223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 20 20:00:54.517215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 20 20:00:54.564566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:00:56.363214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:00:56.382973 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:00:59.653357 kubelet[2501]: E0420 20:00:59.650248 2501 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:00:59.670405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:00:59.670964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:00:59.682139 systemd[1]: kubelet.service: Consumed 3.657s CPU time, 110.4M memory peak. Apr 20 20:01:01.722142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641620679.mount: Deactivated successfully. 
Apr 20 20:01:09.758029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Apr 20 20:01:09.823204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:01:11.372870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:01:11.391659 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:01:13.589735 kubelet[2538]: E0420 20:01:13.588225 2538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:01:13.638047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:01:13.639511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:01:13.646115 systemd[1]: kubelet.service: Consumed 2.748s CPU time, 108.9M memory peak. Apr 20 20:01:23.235921 systemd[1721]: Created slice background.slice - User Background Tasks Slice. Apr 20 20:01:23.238504 systemd[1721]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 20 20:01:23.332033 systemd[1721]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Apr 20 20:01:23.764911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Apr 20 20:01:23.852198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:01:25.694632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 20:01:25.706709 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:01:29.373966 kubelet[2557]: E0420 20:01:29.368036 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:01:29.407977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:01:29.414881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:01:29.434218 systemd[1]: kubelet.service: Consumed 4.091s CPU time, 110.8M memory peak. Apr 20 20:01:39.503918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Apr 20 20:01:39.528437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:01:41.078177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 20:01:41.094268 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:01:41.802610 containerd[1640]: time="2026-04-20T20:01:41.797733881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:01:41.810695 containerd[1640]: time="2026-04-20T20:01:41.810283546Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23638508" Apr 20 20:01:41.991240 containerd[1640]: time="2026-04-20T20:01:41.990748276Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:01:42.241587 kubelet[2617]: E0420 20:01:42.241226 2617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:01:42.249935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:01:42.251125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 20:01:42.254969 systemd[1]: kubelet.service: Consumed 1.976s CPU time, 109M memory peak. 
Apr 20 20:01:43.190937 containerd[1640]: time="2026-04-20T20:01:43.180964785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:01:43.582665 containerd[1640]: time="2026-04-20T20:01:43.581189707Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 49.751902705s" Apr 20 20:01:43.582665 containerd[1640]: time="2026-04-20T20:01:43.581963804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 20 20:01:52.528199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Apr 20 20:01:52.583956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:01:54.090217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:01:54.163181 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 20:01:56.850163 kubelet[2663]: E0420 20:01:56.847250 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 20:01:56.877218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 20:01:56.878266 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 20:01:56.881200 systemd[1]: kubelet.service: Consumed 3.135s CPU time, 110.5M memory peak. Apr 20 20:01:59.654186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:01:59.654379 systemd[1]: kubelet.service: Consumed 3.135s CPU time, 110.5M memory peak. Apr 20 20:01:59.672149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:01:59.921483 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-8.scope)... Apr 20 20:01:59.921550 systemd[1]: Reloading... Apr 20 20:02:01.783474 zram_generator::config[2733]: No configuration found. Apr 20 20:02:01.805579 systemd-ssh-generator[2727]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 20:02:01.807295 (sd-exec-strv)[2711]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 20:02:04.234321 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 20:02:05.948998 systemd[1]: Reloading finished in 6021 ms. Apr 20 20:02:06.830984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:02:06.866040 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 20:02:06.995004 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:02:07.002706 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 20:02:07.004779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:02:07.005206 systemd[1]: kubelet.service: Consumed 967ms CPU time, 100M memory peak. Apr 20 20:02:07.036189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:02:08.589204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 20:02:08.611766 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 20 20:02:09.655469 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 20 20:02:11.239695 kubelet[2789]: I0420 20:02:11.238757 2789 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 20 20:02:11.239695 kubelet[2789]: I0420 20:02:11.239280 2789 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 20 20:02:11.244332 kubelet[2789]: I0420 20:02:11.241376 2789 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 20 20:02:11.244332 kubelet[2789]: I0420 20:02:11.243263 2789 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 20 20:02:11.262879 kubelet[2789]: I0420 20:02:11.258288 2789 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 20 20:02:11.462424 kubelet[2789]: E0420 20:02:11.461921 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 20:02:11.487258 kubelet[2789]: I0420 20:02:11.486739 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 20 20:02:11.635555 kubelet[2789]: I0420 20:02:11.634278 2789 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 20 20:02:12.038846 kubelet[2789]: I0420 20:02:12.037719 2789 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 20 20:02:12.062468 kubelet[2789]: I0420 20:02:12.059419 2789 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 20 20:02:12.088256 kubelet[2789]: I0420 20:02:12.071304 2789 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 20 20:02:12.089866 kubelet[2789]: I0420 20:02:12.088845 2789 topology_manager.go:143] "Creating topology manager with none policy"
Apr 20 20:02:12.089866 kubelet[2789]: I0420 20:02:12.089110 2789 container_manager_linux.go:308] "Creating device plugin manager"
Apr 20 20:02:12.089866 kubelet[2789]: I0420 20:02:12.089840 2789 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 20 20:02:12.135196 kubelet[2789]: I0420 20:02:12.134516 2789 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 20 20:02:12.148381 kubelet[2789]: I0420 20:02:12.147893 2789 kubelet.go:482] "Attempting to sync node with API server"
Apr 20 20:02:12.148381 kubelet[2789]: I0420 20:02:12.148256 2789 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 20 20:02:12.148381 kubelet[2789]: I0420 20:02:12.148551 2789 kubelet.go:394] "Adding apiserver pod source"
Apr 20 20:02:12.158067 kubelet[2789]: I0420 20:02:12.148620 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 20 20:02:12.185387 kubelet[2789]: I0420 20:02:12.185165 2789 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1"
Apr 20 20:02:12.197478 kubelet[2789]: I0420 20:02:12.193100 2789 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 20 20:02:12.198976 kubelet[2789]: I0420 20:02:12.198703 2789 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 20 20:02:12.270981 kubelet[2789]: W0420 20:02:12.268851 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 20 20:02:12.316747 kubelet[2789]: I0420 20:02:12.314229 2789 server.go:1257] "Started kubelet"
Apr 20 20:02:12.326729 kubelet[2789]: I0420 20:02:12.323580 2789 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 20 20:02:12.333657 kubelet[2789]: I0420 20:02:12.330857 2789 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 20 20:02:12.351291 kubelet[2789]: I0420 20:02:12.350111 2789 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 20 20:02:12.353964 kubelet[2789]: I0420 20:02:12.353410 2789 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 20 20:02:12.354522 kubelet[2789]: I0420 20:02:12.354483 2789 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 20 20:02:12.358465 kubelet[2789]: E0420 20:02:12.353944 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8292e698594f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 20:02:12.313257204 +0000 UTC m=+3.686524018,LastTimestamp:2026-04-20 20:02:12.313257204 +0000 UTC m=+3.686524018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 20:02:12.379812 kubelet[2789]: I0420 20:02:12.378542 2789 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 20 20:02:12.388040 kubelet[2789]: I0420 20:02:12.380163 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 20 20:02:12.393439 kubelet[2789]: E0420 20:02:12.385478 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:12.396577 kubelet[2789]: I0420 20:02:12.396409 2789 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 20 20:02:12.397836 kubelet[2789]: I0420 20:02:12.396842 2789 reconciler.go:29] "Reconciler: start to sync state"
Apr 20 20:02:12.473884 kubelet[2789]: E0420 20:02:12.472059 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms"
Apr 20 20:02:12.495227 kubelet[2789]: E0420 20:02:12.494614 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:12.511920 kubelet[2789]: I0420 20:02:12.511581 2789 server.go:317] "Adding debug handlers to kubelet server"
Apr 20 20:02:12.557129 kubelet[2789]: I0420 20:02:12.556932 2789 factory.go:223] Registration of the containerd container factory successfully
Apr 20 20:02:12.557129 kubelet[2789]: I0420 20:02:12.556989 2789 factory.go:223] Registration of the systemd container factory successfully
Apr 20 20:02:12.562246 kubelet[2789]: I0420 20:02:12.560035 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 20 20:02:12.688404 kubelet[2789]: E0420 20:02:12.681953 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:12.694214 kubelet[2789]: E0420 20:02:12.693740 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms"
Apr 20 20:02:12.770293 kubelet[2789]: E0420 20:02:12.768909 2789 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 20 20:02:12.795145 kubelet[2789]: I0420 20:02:12.781263 2789 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 20 20:02:12.825279 kubelet[2789]: E0420 20:02:12.792489 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:12.889911 kubelet[2789]: I0420 20:02:12.888220 2789 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 20 20:02:12.898819 kubelet[2789]: I0420 20:02:12.896406 2789 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 20 20:02:12.912066 kubelet[2789]: E0420 20:02:12.905605 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:12.921062 kubelet[2789]: I0420 20:02:12.916262 2789 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 20 20:02:12.926761 kubelet[2789]: E0420 20:02:12.924398 2789 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 20:02:13.027666 kubelet[2789]: E0420 20:02:13.026934 2789 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 20:02:13.041674 kubelet[2789]: E0420 20:02:13.037070 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.147787 kubelet[2789]: E0420 20:02:13.147546 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.148983 kubelet[2789]: E0420 20:02:13.148486 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms"
Apr 20 20:02:13.258972 kubelet[2789]: E0420 20:02:13.257938 2789 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 20:02:13.273074 kubelet[2789]: E0420 20:02:13.263233 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.384710 kubelet[2789]: E0420 20:02:13.382895 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.480777 kubelet[2789]: I0420 20:02:13.480207 2789 cpu_manager.go:225] "Starting" policy="none"
Apr 20 20:02:13.486511 kubelet[2789]: E0420 20:02:13.484081 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.486511 kubelet[2789]: I0420 20:02:13.484278 2789 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 20 20:02:13.486511 kubelet[2789]: I0420 20:02:13.486765 2789 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 20 20:02:13.502249 kubelet[2789]: I0420 20:02:13.499140 2789 policy_none.go:50] "Start"
Apr 20 20:02:13.509290 kubelet[2789]: I0420 20:02:13.502969 2789 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 20 20:02:13.514042 kubelet[2789]: I0420 20:02:13.510712 2789 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 20 20:02:13.535090 kubelet[2789]: I0420 20:02:13.533037 2789 policy_none.go:44] "Start"
Apr 20 20:02:13.595402 kubelet[2789]: E0420 20:02:13.593222 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.663479 kubelet[2789]: E0420 20:02:13.661771 2789 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 20:02:13.675134 kubelet[2789]: E0420 20:02:13.672674 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 20:02:13.735370 kubelet[2789]: E0420 20:02:13.734920 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.842706 kubelet[2789]: E0420 20:02:13.841199 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.880835 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 20 20:02:13.956730 kubelet[2789]: E0420 20:02:13.947231 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:13.986318 kubelet[2789]: E0420 20:02:13.984948 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s"
Apr 20 20:02:14.050543 kubelet[2789]: E0420 20:02:14.050239 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:14.160730 kubelet[2789]: E0420 20:02:14.159921 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:14.189210 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 20 20:02:14.273975 kubelet[2789]: E0420 20:02:14.269273 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:14.383448 kubelet[2789]: E0420 20:02:14.382013 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:14.386608 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 20 20:02:14.482622 kubelet[2789]: E0420 20:02:14.477231 2789 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 20:02:14.487948 kubelet[2789]: E0420 20:02:14.485987 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:14.498982 kubelet[2789]: E0420 20:02:14.497427 2789 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 20 20:02:14.572073 kubelet[2789]: I0420 20:02:14.568307 2789 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 20 20:02:14.580728 kubelet[2789]: I0420 20:02:14.579180 2789 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 20 20:02:14.607806 kubelet[2789]: E0420 20:02:14.602735 2789 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 20:02:14.607806 kubelet[2789]: I0420 20:02:14.603118 2789 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 20 20:02:14.635792 kubelet[2789]: E0420 20:02:14.635385 2789 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 20 20:02:14.638963 kubelet[2789]: E0420 20:02:14.638611 2789 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 20:02:14.819163 kubelet[2789]: I0420 20:02:14.818768 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 20:02:14.830648 kubelet[2789]: E0420 20:02:14.827380 2789 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 20 20:02:15.096712 kubelet[2789]: I0420 20:02:15.095722 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 20:02:15.157485 kubelet[2789]: E0420 20:02:15.154639 2789 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 20 20:02:15.635729 kubelet[2789]: E0420 20:02:15.633767 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="3.2s"
Apr 20 20:02:15.635729 kubelet[2789]: I0420 20:02:15.635674 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 20:02:15.662028 kubelet[2789]: E0420 20:02:15.658967 2789 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 20 20:02:16.160663 kubelet[2789]: I0420 20:02:16.159917 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 20:02:16.160663 kubelet[2789]: I0420 20:02:16.160126 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 20:02:16.167187 kubelet[2789]: I0420 20:02:16.164979 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 20:02:16.381576 kubelet[2789]: I0420 20:02:16.380163 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 20:02:16.392074 kubelet[2789]: I0420 20:02:16.384567 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 20:02:16.392074 kubelet[2789]: I0420 20:02:16.385215 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 20:02:16.392074 kubelet[2789]: I0420 20:02:16.385239 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 20:02:16.392074 kubelet[2789]: I0420 20:02:16.385257 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 20:02:16.616883 kubelet[2789]: I0420 20:02:16.616508 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 20:02:16.626749 kubelet[2789]: E0420 20:02:16.626106 2789 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 20 20:02:16.676411 kubelet[2789]: I0420 20:02:16.676249 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 20 20:02:16.824010 systemd[1]: Created slice kubepods-burstable-podba15b63dde517d3f49c1db0a4abcdbe1.slice - libcontainer container kubepods-burstable-podba15b63dde517d3f49c1db0a4abcdbe1.slice.
Apr 20 20:02:16.984567 kubelet[2789]: E0420 20:02:16.983985 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 20:02:17.011232 kubelet[2789]: E0420 20:02:17.010816 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:17.012294 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 20 20:02:17.041256 kubelet[2789]: E0420 20:02:17.040578 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 20:02:17.056485 kubelet[2789]: E0420 20:02:17.055185 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:17.061283 containerd[1640]: time="2026-04-20T20:02:17.061008749Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"ba15b63dde517d3f49c1db0a4abcdbe1\" namespace:\"kube-system\""
Apr 20 20:02:17.065211 containerd[1640]: time="2026-04-20T20:02:17.063943038Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"14bc29ec35edba17af38052ec24275f2\" namespace:\"kube-system\""
Apr 20 20:02:17.074307 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 20 20:02:17.230302 kubelet[2789]: E0420 20:02:17.228670 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 20:02:17.295200 kubelet[2789]: E0420 20:02:17.292365 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:17.395129 containerd[1640]: time="2026-04-20T20:02:17.394684266Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f7c88b30fc803a3ec6b6c138191bdaca\" namespace:\"kube-system\""
Apr 20 20:02:17.916637 kubelet[2789]: E0420 20:02:17.903289 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8292e698594f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 20:02:12.313257204 +0000 UTC m=+3.686524018,LastTimestamp:2026-04-20 20:02:12.313257204 +0000 UTC m=+3.686524018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 20:02:18.161549 kubelet[2789]: E0420 20:02:18.152155 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 20:02:18.167561 containerd[1640]: time="2026-04-20T20:02:18.165027840Z" level=info msg="connecting to shim 64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1" address="unix:///run/containerd/s/e1f0007e2c6f1f748a9dc06ca555a4405d786137015c68dabb60c59e595b314f" namespace=k8s.io protocol=ttrpc version=3
Apr 20 20:02:18.176835 containerd[1640]: time="2026-04-20T20:02:18.176285946Z" level=info msg="connecting to shim 67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a" address="unix:///run/containerd/s/7b725d514cd95fc759e57fc10892b9b027f173518823e97140c3af4445658aea" namespace=k8s.io protocol=ttrpc version=3
Apr 20 20:02:18.419784 kubelet[2789]: I0420 20:02:18.418596 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 20:02:18.452442 kubelet[2789]: E0420 20:02:18.450431 2789 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 20 20:02:18.623931 containerd[1640]: time="2026-04-20T20:02:18.620274889Z" level=info msg="connecting to shim 2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b" address="unix:///run/containerd/s/3f69fcb977b4c2b3ed2669a33314ccaae5bf4bf12700f043b9bbf852eee3e02f" namespace=k8s.io protocol=ttrpc version=3
Apr 20 20:02:18.890537 kubelet[2789]: E0420 20:02:18.888043 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="6.4s"
Apr 20 20:02:19.143017 systemd[1]: Started cri-containerd-2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b.scope - libcontainer container 2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b.
Apr 20 20:02:19.224848 systemd[1]: Started cri-containerd-64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1.scope - libcontainer container 64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1.
Apr 20 20:02:19.259377 systemd[1]: Started cri-containerd-67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a.scope - libcontainer container 67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a.
Apr 20 20:02:20.265032 containerd[1640]: time="2026-04-20T20:02:20.259133017Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f7c88b30fc803a3ec6b6c138191bdaca\" namespace:\"kube-system\" returns sandbox id \"2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b\""
Apr 20 20:02:20.299089 containerd[1640]: time="2026-04-20T20:02:20.297259921Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"14bc29ec35edba17af38052ec24275f2\" namespace:\"kube-system\" returns sandbox id \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\""
Apr 20 20:02:20.378323 containerd[1640]: time="2026-04-20T20:02:20.377759134Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"ba15b63dde517d3f49c1db0a4abcdbe1\" namespace:\"kube-system\" returns sandbox id \"67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a\""
Apr 20 20:02:20.392992 kubelet[2789]: E0420 20:02:20.390985 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:20.392992 kubelet[2789]: E0420 20:02:20.391002 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:20.458520 kubelet[2789]: E0420 20:02:20.455106 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:20.750690 containerd[1640]: time="2026-04-20T20:02:20.749739347Z" level=info msg="CreateContainer within sandbox \"67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a\" for container name:\"kube-apiserver\""
Apr 20 20:02:20.754738 containerd[1640]: time="2026-04-20T20:02:20.749745653Z" level=info msg="CreateContainer within sandbox \"2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b\" for container name:\"kube-scheduler\""
Apr 20 20:02:20.755776 containerd[1640]: time="2026-04-20T20:02:20.749753074Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for container name:\"kube-controller-manager\""
Apr 20 20:02:21.080613 containerd[1640]: time="2026-04-20T20:02:21.075653605Z" level=info msg="Container 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:02:21.135969 containerd[1640]: time="2026-04-20T20:02:21.132264182Z" level=info msg="Container 764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:02:21.175094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1808089858.mount: Deactivated successfully.
Apr 20 20:02:21.187130 containerd[1640]: time="2026-04-20T20:02:21.186708432Z" level=info msg="Container 92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:02:21.491924 containerd[1640]: time="2026-04-20T20:02:21.491083585Z" level=info msg="CreateContainer within sandbox \"67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a\" for name:\"kube-apiserver\" returns container id \"764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259\""
Apr 20 20:02:21.620323 containerd[1640]: time="2026-04-20T20:02:21.620052186Z" level=info msg="StartContainer for \"764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259\""
Apr 20 20:02:21.622769 containerd[1640]: time="2026-04-20T20:02:21.620162938Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for name:\"kube-controller-manager\" returns container id \"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\""
Apr 20 20:02:21.631907 containerd[1640]: time="2026-04-20T20:02:21.631263537Z" level=info msg="CreateContainer within sandbox \"2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b\" for name:\"kube-scheduler\" returns container id \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:02:21.635520 containerd[1640]: time="2026-04-20T20:02:21.634984343Z" level=info msg="StartContainer for \"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\""
Apr 20 20:02:21.692774 containerd[1640]: time="2026-04-20T20:02:21.692424804Z" level=info msg="StartContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:02:21.693534 containerd[1640]: time="2026-04-20T20:02:21.693307942Z" level=info msg="connecting to shim 764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259" address="unix:///run/containerd/s/7b725d514cd95fc759e57fc10892b9b027f173518823e97140c3af4445658aea" protocol=ttrpc version=3
Apr 20 20:02:21.694605 containerd[1640]: time="2026-04-20T20:02:21.693630126Z" level=info msg="connecting to shim 92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f" address="unix:///run/containerd/s/e1f0007e2c6f1f748a9dc06ca555a4405d786137015c68dabb60c59e595b314f" protocol=ttrpc version=3
Apr 20 20:02:21.763482 containerd[1640]: time="2026-04-20T20:02:21.759028741Z" level=info msg="connecting to shim 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" address="unix:///run/containerd/s/3f69fcb977b4c2b3ed2669a33314ccaae5bf4bf12700f043b9bbf852eee3e02f" protocol=ttrpc version=3
Apr 20 20:02:21.951651 kubelet[2789]: I0420 20:02:21.949651 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 20:02:21.981191 kubelet[2789]: E0420 20:02:21.976560 2789 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 20 20:02:22.279983 systemd[1]: Started cri-containerd-37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238.scope - libcontainer container 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238.
Apr 20 20:02:22.352006 systemd[1]: Started cri-containerd-764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259.scope - libcontainer container 764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259.
Apr 20 20:02:22.398454 systemd[1]: Started cri-containerd-92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f.scope - libcontainer container 92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f.
Apr 20 20:02:23.194643 containerd[1640]: time="2026-04-20T20:02:23.193411565Z" level=info msg="StartContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" returns successfully" Apr 20 20:02:23.246313 containerd[1640]: time="2026-04-20T20:02:23.246035806Z" level=info msg="StartContainer for \"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" returns successfully" Apr 20 20:02:23.274887 containerd[1640]: time="2026-04-20T20:02:23.274654221Z" level=info msg="StartContainer for \"764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259\" returns successfully" Apr 20 20:02:24.163547 kubelet[2789]: E0420 20:02:24.163298 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:24.174484 kubelet[2789]: E0420 20:02:24.172302 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:24.445156 kubelet[2789]: E0420 20:02:24.444144 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:24.445156 kubelet[2789]: E0420 20:02:24.444941 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:24.654984 kubelet[2789]: E0420 20:02:24.654777 2789 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 20:02:24.897233 kubelet[2789]: E0420 20:02:24.897030 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:24.897233 kubelet[2789]: E0420 20:02:24.897535 2789 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:26.491371 kubelet[2789]: E0420 20:02:26.490997 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:26.526771 kubelet[2789]: E0420 20:02:26.526001 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:26.532133 kubelet[2789]: E0420 20:02:26.531820 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:26.559626 kubelet[2789]: E0420 20:02:26.559308 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:26.580649 kubelet[2789]: E0420 20:02:26.579858 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:26.595206 kubelet[2789]: E0420 20:02:26.594632 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:27.428422 kubelet[2789]: E0420 20:02:27.428208 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:27.430225 kubelet[2789]: E0420 20:02:27.428173 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:27.430225 kubelet[2789]: 
E0420 20:02:27.428950 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:27.430225 kubelet[2789]: E0420 20:02:27.428984 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:28.415969 kubelet[2789]: I0420 20:02:28.415808 2789 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:02:28.992610 kubelet[2789]: E0420 20:02:28.989970 2789 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 20:02:29.004650 kubelet[2789]: E0420 20:02:28.999179 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:29.371450 kubelet[2789]: I0420 20:02:29.367925 2789 apiserver.go:52] "Watching apiserver" Apr 20 20:02:29.861659 kubelet[2789]: I0420 20:02:29.856983 2789 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 20:02:30.180676 kubelet[2789]: I0420 20:02:29.953246 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:02:30.278303 kubelet[2789]: I0420 20:02:30.252258 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:02:30.301218 kubelet[2789]: E0420 20:02:30.057995 2789 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8292e698594f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 20:02:12.313257204 +0000 UTC m=+3.686524018,LastTimestamp:2026-04-20 20:02:12.313257204 +0000 UTC m=+3.686524018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:02:30.393835 kubelet[2789]: I0420 20:02:30.336585 2789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 20:02:30.762027 kubelet[2789]: E0420 20:02:30.760394 2789 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="7s" Apr 20 20:02:30.947802 kubelet[2789]: E0420 20:02:30.888848 2789 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:02:30.963613 kubelet[2789]: I0420 20:02:30.959801 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 20:02:31.099596 kubelet[2789]: E0420 20:02:31.092927 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:31.555555 kubelet[2789]: I0420 20:02:31.550991 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 20:02:32.057311 kubelet[2789]: E0420 20:02:32.056078 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:32.076933 kubelet[2789]: E0420 20:02:32.067466 2789 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:33.093707 kubelet[2789]: E0420 20:02:33.092589 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:33.637387 kubelet[2789]: I0420 20:02:33.636970 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.636902311 podStartE2EDuration="2.636902311s" podCreationTimestamp="2026-04-20 20:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:02:33.372809913 +0000 UTC m=+24.746076726" watchObservedRunningTime="2026-04-20 20:02:33.636902311 +0000 UTC m=+25.010169122" Apr 20 20:02:33.886682 kubelet[2789]: I0420 20:02:33.868614 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.864163074 podStartE2EDuration="3.864163074s" podCreationTimestamp="2026-04-20 20:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:02:33.637325197 +0000 UTC m=+25.010592000" watchObservedRunningTime="2026-04-20 20:02:33.864163074 +0000 UTC m=+25.237429912" Apr 20 20:02:33.893835 kubelet[2789]: I0420 20:02:33.888540 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.8873194509999998 podStartE2EDuration="2.887319451s" podCreationTimestamp="2026-04-20 20:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:02:33.887186638 +0000 UTC m=+25.260453447" 
watchObservedRunningTime="2026-04-20 20:02:33.887319451 +0000 UTC m=+25.260586260" Apr 20 20:02:38.562876 kubelet[2789]: E0420 20:02:38.561481 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:39.789658 kubelet[2789]: E0420 20:02:39.785901 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:53.592535 kubelet[2789]: E0420 20:02:53.587981 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:53.961848 systemd[1]: Reload requested from client PID 3078 ('systemctl') (unit session-8.scope)... Apr 20 20:02:53.978980 systemd[1]: Reloading... Apr 20 20:02:55.763767 zram_generator::config[3135]: No configuration found. Apr 20 20:02:55.776787 systemd-ssh-generator[3128]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 20:02:55.780245 (sd-exec-strv)[3109]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 20:02:56.495060 kubelet[2789]: E0420 20:02:56.493815 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:57.649481 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 20:02:57.858440 kubelet[2789]: E0420 20:02:57.857978 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:59.309973 systemd[1]: Reloading finished in 5324 ms.
Apr 20 20:02:59.945370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:03:00.010155 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 20:03:00.011474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:03:00.011849 systemd[1]: kubelet.service: Consumed 36.480s CPU time, 133.5M memory peak. Apr 20 20:03:00.083718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 20:03:01.692787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 20:03:01.724251 (kubelet)[3176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 20:03:03.633374 kubelet[3176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 20:03:04.402125 kubelet[3176]: I0420 20:03:04.396945 3176 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 20 20:03:04.405332 kubelet[3176]: I0420 20:03:04.404146 3176 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 20:03:04.405332 kubelet[3176]: I0420 20:03:04.404914 3176 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 20:03:04.408443 kubelet[3176]: I0420 20:03:04.404963 3176 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 20 20:03:04.451787 kubelet[3176]: I0420 20:03:04.450454 3176 server.go:951] "Client rotation is on, will bootstrap in background" Apr 20 20:03:04.624185 kubelet[3176]: I0420 20:03:04.622854 3176 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 20:03:05.069462 kubelet[3176]: I0420 20:03:05.068297 3176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 20:03:09.528069 kubelet[3176]: I0420 20:03:09.520082 3176 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 20:03:10.659210 kubelet[3176]: I0420 20:03:10.657718 3176 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 20 20:03:10.750499 kubelet[3176]: I0420 20:03:10.734008 3176 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 20:03:10.882432 kubelet[3176]: I0420 20:03:10.758155 3176 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 20:03:10.896581 kubelet[3176]: I0420 20:03:10.883919 3176 topology_manager.go:143] "Creating topology manager with none policy" Apr 20 20:03:10.896581 kubelet[3176]: I0420 20:03:10.895425 3176 container_manager_linux.go:308] "Creating device plugin manager" Apr 20 20:03:10.908591 kubelet[3176]: I0420 20:03:10.905226 3176 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 20:03:10.980594 kubelet[3176]: I0420 20:03:10.979512 3176 state_mem.go:41] 
"Initialized" logger="CPUManager state memory" Apr 20 20:03:11.069795 kubelet[3176]: I0420 20:03:11.057323 3176 kubelet.go:482] "Attempting to sync node with API server" Apr 20 20:03:11.069795 kubelet[3176]: I0420 20:03:11.060875 3176 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 20:03:11.069795 kubelet[3176]: I0420 20:03:11.065730 3176 kubelet.go:394] "Adding apiserver pod source" Apr 20 20:03:11.089319 kubelet[3176]: I0420 20:03:11.087954 3176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 20:03:11.973813 kubelet[3176]: I0420 20:03:11.972015 3176 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 20:03:12.291434 kubelet[3176]: I0420 20:03:12.288316 3176 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 20:03:12.293283 kubelet[3176]: I0420 20:03:12.291641 3176 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 20:03:12.680754 kubelet[3176]: I0420 20:03:12.679290 3176 server.go:1257] "Started kubelet" Apr 20 20:03:12.700762 kubelet[3176]: I0420 20:03:12.696169 3176 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 20:03:12.748619 kubelet[3176]: I0420 20:03:12.723952 3176 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 20:03:12.915518 kubelet[3176]: I0420 20:03:12.865316 3176 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 20:03:12.923822 kubelet[3176]: I0420 20:03:12.923619 3176 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 20:03:12.937982 kubelet[3176]: I0420 20:03:12.934694 3176 server.go:317] "Adding debug handlers to 
kubelet server" Apr 20 20:03:12.985552 kubelet[3176]: I0420 20:03:12.983869 3176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 20:03:13.113118 kubelet[3176]: I0420 20:03:13.111384 3176 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 20 20:03:13.144740 kubelet[3176]: I0420 20:03:13.143164 3176 apiserver.go:52] "Watching apiserver" Apr 20 20:03:13.159526 kubelet[3176]: I0420 20:03:13.145240 3176 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 20 20:03:13.182435 kubelet[3176]: I0420 20:03:13.181836 3176 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 20:03:13.188577 kubelet[3176]: I0420 20:03:13.188435 3176 reconciler.go:29] "Reconciler: start to sync state" Apr 20 20:03:13.513503 kubelet[3176]: I0420 20:03:13.513229 3176 factory.go:223] Registration of the containerd container factory successfully Apr 20 20:03:13.517283 kubelet[3176]: I0420 20:03:13.516191 3176 factory.go:223] Registration of the systemd container factory successfully Apr 20 20:03:13.521542 kubelet[3176]: I0420 20:03:13.516131 3176 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 20:03:13.522543 kubelet[3176]: I0420 20:03:13.521281 3176 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 20:03:13.597212 kubelet[3176]: I0420 20:03:13.596878 3176 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 20 20:03:13.643483 kubelet[3176]: I0420 20:03:13.638530 3176 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 20 20:03:13.643483 kubelet[3176]: I0420 20:03:13.642035 3176 kubelet.go:2501] "Starting kubelet main sync loop" Apr 20 20:03:13.649243 kubelet[3176]: E0420 20:03:13.648703 3176 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 20:03:13.769239 kubelet[3176]: E0420 20:03:13.765123 3176 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:03:14.001887 kubelet[3176]: E0420 20:03:13.971315 3176 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:03:14.428743 kubelet[3176]: E0420 20:03:14.425647 3176 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:03:15.228803 kubelet[3176]: E0420 20:03:15.228179 3176 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:03:16.385501 kubelet[3176]: I0420 20:03:16.385132 3176 cpu_manager.go:225] "Starting" policy="none" Apr 20 20:03:16.385501 kubelet[3176]: I0420 20:03:16.385495 3176 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 20 20:03:16.399325 kubelet[3176]: I0420 20:03:16.395710 3176 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 20 20:03:16.413705 kubelet[3176]: I0420 20:03:16.411279 3176 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state memory" cpuSet="" Apr 20 20:03:16.414943 kubelet[3176]: I0420 20:03:16.413426 3176 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state memory" assignments={} Apr 20 20:03:16.414943 kubelet[3176]:
I0420 20:03:16.414005 3176 policy_none.go:50] "Start" Apr 20 20:03:16.414943 kubelet[3176]: I0420 20:03:16.414162 3176 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 20 20:03:16.414943 kubelet[3176]: I0420 20:03:16.414193 3176 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 20 20:03:16.414943 kubelet[3176]: I0420 20:03:16.414805 3176 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 20 20:03:16.414943 kubelet[3176]: I0420 20:03:16.414831 3176 policy_none.go:44] "Start" Apr 20 20:03:16.802549 kubelet[3176]: E0420 20:03:16.779141 3176 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 20:03:16.820915 kubelet[3176]: I0420 20:03:16.820112 3176 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 20 20:03:16.836619 kubelet[3176]: I0420 20:03:16.824917 3176 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 20:03:16.836619 kubelet[3176]: E0420 20:03:16.834890 3176 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 20:03:16.953158 kubelet[3176]: I0420 20:03:16.952548 3176 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 20 20:03:17.185046 kubelet[3176]: E0420 20:03:17.184012 3176 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 20 20:03:17.247997 kubelet[3176]: I0420 20:03:17.247807 3176 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 20:03:17.331581 containerd[1640]: time="2026-04-20T20:03:17.329601949Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 20 20:03:17.360766 kubelet[3176]: I0420 20:03:17.360428 3176 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 20:03:18.057843 kubelet[3176]: I0420 20:03:18.047676 3176 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 20:03:18.388980 kubelet[3176]: I0420 20:03:18.386758 3176 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 20 20:03:18.395006 kubelet[3176]: I0420 20:03:18.393092 3176 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 20:03:18.770230 sudo[1830]: pam_unix(sudo:session): session closed for user root Apr 20 20:03:18.851608 sshd[1829]: Connection closed by 10.0.0.1 port 52624 Apr 20 20:03:18.856727 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Apr 20 20:03:18.899079 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:52624.service: Deactivated successfully. Apr 20 20:03:18.946257 systemd[1]: session-8.scope: Deactivated successfully. Apr 20 20:03:18.947504 systemd[1]: session-8.scope: Consumed 32.868s CPU time, 219.6M memory peak. Apr 20 20:03:19.002181 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Apr 20 20:03:19.098073 systemd-logind[1609]: Removed session 8.
Apr 20 20:03:20.123283 kubelet[3176]: I0420 20:03:20.118317 3176 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 20:03:20.128780 kubelet[3176]: I0420 20:03:20.118401 3176 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.320267 kubelet[3176]: I0420 20:03:20.262958 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:03:20.360697 kubelet[3176]: I0420 20:03:20.358444 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:03:20.433193 kubelet[3176]: E0420 20:03:20.426171 3176 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 20 20:03:20.433193 kubelet[3176]: I0420 20:03:20.428316 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba15b63dde517d3f49c1db0a4abcdbe1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba15b63dde517d3f49c1db0a4abcdbe1\") " pod="kube-system/kube-apiserver-localhost" Apr 20 20:03:20.439000 kubelet[3176]: I0420 20:03:20.435803 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.439121 kubelet[3176]: I0420 20:03:20.439042 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.439121 kubelet[3176]: E0420 20:03:20.439095 3176 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.439121 kubelet[3176]: I0420 20:03:20.439109 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.439330 kubelet[3176]: I0420 20:03:20.439134 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.439330 kubelet[3176]: I0420 20:03:20.439194 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 20 20:03:20.550436 kubelet[3176]: I0420 20:03:20.547962 3176 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 20:03:20.620661 kubelet[3176]: I0420 20:03:20.620083 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192-cni-plugin\") pod \"kube-flannel-ds-dkx62\" (UID: \"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\") " pod="kube-flannel/kube-flannel-ds-dkx62" Apr 20 20:03:20.623332 kubelet[3176]: I0420 20:03:20.621244 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192-flannel-cfg\") pod \"kube-flannel-ds-dkx62\" (UID: \"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\") " pod="kube-flannel/kube-flannel-ds-dkx62" Apr 20 20:03:20.736481 kubelet[3176]: I0420 20:03:20.735309 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 20 20:03:20.739313 kubelet[3176]: I0420 20:03:20.739023 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20f69d17-4abd-4386-b244-8eb614dae827-kube-proxy\") pod \"kube-proxy-nklbf\" (UID: \"20f69d17-4abd-4386-b244-8eb614dae827\") " pod="kube-system/kube-proxy-nklbf" Apr 20 20:03:20.743279 kubelet[3176]: I0420 20:03:20.739535 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/20f69d17-4abd-4386-b244-8eb614dae827-xtables-lock\") pod \"kube-proxy-nklbf\" (UID: \"20f69d17-4abd-4386-b244-8eb614dae827\") " pod="kube-system/kube-proxy-nklbf" Apr 20 20:03:20.743279 kubelet[3176]: I0420 20:03:20.739763 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lkx\" (UniqueName: \"kubernetes.io/projected/20f69d17-4abd-4386-b244-8eb614dae827-kube-api-access-f8lkx\") pod \"kube-proxy-nklbf\" (UID: \"20f69d17-4abd-4386-b244-8eb614dae827\") " pod="kube-system/kube-proxy-nklbf" Apr 20 20:03:20.743279 kubelet[3176]: I0420 20:03:20.739800 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192-run\") pod \"kube-flannel-ds-dkx62\" (UID: \"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\") " pod="kube-flannel/kube-flannel-ds-dkx62" Apr 20 20:03:20.743279 kubelet[3176]: I0420 20:03:20.739817 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192-xtables-lock\") pod \"kube-flannel-ds-dkx62\" (UID: \"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\") " pod="kube-flannel/kube-flannel-ds-dkx62" Apr 20 20:03:20.743279 kubelet[3176]: I0420 20:03:20.739863 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192-cni\") pod \"kube-flannel-ds-dkx62\" (UID: \"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\") " pod="kube-flannel/kube-flannel-ds-dkx62" Apr 20 20:03:20.746267 kubelet[3176]: I0420 20:03:20.739883 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4zm\" (UniqueName: 
\"kubernetes.io/projected/4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192-kube-api-access-jv4zm\") pod \"kube-flannel-ds-dkx62\" (UID: \"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\") " pod="kube-flannel/kube-flannel-ds-dkx62" Apr 20 20:03:20.746267 kubelet[3176]: I0420 20:03:20.739901 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f69d17-4abd-4386-b244-8eb614dae827-lib-modules\") pod \"kube-proxy-nklbf\" (UID: \"20f69d17-4abd-4386-b244-8eb614dae827\") " pod="kube-system/kube-proxy-nklbf" Apr 20 20:03:21.099196 systemd[1]: Created slice kubepods-besteffort-pod20f69d17_4abd_4386_b244_8eb614dae827.slice - libcontainer container kubepods-besteffort-pod20f69d17_4abd_4386_b244_8eb614dae827.slice. Apr 20 20:03:21.463199 kubelet[3176]: E0420 20:03:21.392873 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:21.479013 kubelet[3176]: E0420 20:03:21.349197 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:21.504754 kubelet[3176]: E0420 20:03:21.501302 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:21.796984 kubelet[3176]: E0420 20:03:21.795112 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.257s" Apr 20 20:03:22.179773 systemd[1]: Created slice kubepods-burstable-pod4bd74a2f_3cb2_4a1c_947d_d9f09ed6a192.slice - libcontainer container kubepods-burstable-pod4bd74a2f_3cb2_4a1c_947d_d9f09ed6a192.slice. 
Apr 20 20:03:22.496487 kubelet[3176]: E0420 20:03:22.495449 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:22.667835 kubelet[3176]: E0420 20:03:22.666630 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:22.847812 kubelet[3176]: E0420 20:03:22.844836 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.042s" Apr 20 20:03:22.903190 containerd[1640]: time="2026-04-20T20:03:22.898794947Z" level=info msg="RunPodSandbox for name:\"kube-proxy-nklbf\" uid:\"20f69d17-4abd-4386-b244-8eb614dae827\" namespace:\"kube-system\"" Apr 20 20:03:22.919641 containerd[1640]: time="2026-04-20T20:03:22.898996622Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-dkx62\" uid:\"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\" namespace:\"kube-flannel\"" Apr 20 20:03:22.943822 kubelet[3176]: E0420 20:03:22.941497 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:24.271774 kubelet[3176]: E0420 20:03:24.269318 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:24.552322 kubelet[3176]: E0420 20:03:24.542220 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:25.276961 kubelet[3176]: E0420 20:03:25.276583 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.598s" Apr 20 
20:03:26.473870 kubelet[3176]: E0420 20:03:26.468738 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:26.668631 containerd[1640]: time="2026-04-20T20:03:26.661304899Z" level=info msg="connecting to shim 2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3" address="unix:///run/containerd/s/395f96b1ac62520841319aa741d4c44932f1af8257ca464339194e4655dad80c" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:03:26.744005 containerd[1640]: time="2026-04-20T20:03:26.660070194Z" level=info msg="connecting to shim a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac" address="unix:///run/containerd/s/223d4d4ca7ec5e7f9a7f27730af4d6f7db5258356f7aa93966d9d398689c0adf" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:03:27.029585 kubelet[3176]: E0420 20:03:27.026640 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.364s" Apr 20 20:03:27.649931 systemd[1]: Started cri-containerd-a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac.scope - libcontainer container a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac. Apr 20 20:03:28.179074 systemd[1]: Started cri-containerd-2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3.scope - libcontainer container 2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3. 
Apr 20 20:03:29.186798 containerd[1640]: time="2026-04-20T20:03:29.186102605Z" level=info msg="RunPodSandbox for name:\"kube-proxy-nklbf\" uid:\"20f69d17-4abd-4386-b244-8eb614dae827\" namespace:\"kube-system\" returns sandbox id \"a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac\"" Apr 20 20:03:29.459289 kubelet[3176]: E0420 20:03:29.451656 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:29.844550 containerd[1640]: time="2026-04-20T20:03:29.814908381Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-dkx62\" uid:\"4bd74a2f-3cb2-4a1c-947d-d9f09ed6a192\" namespace:\"kube-flannel\" returns sandbox id \"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\"" Apr 20 20:03:29.983981 kubelet[3176]: E0420 20:03:29.983587 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:30.452919 containerd[1640]: time="2026-04-20T20:03:30.419770972Z" level=info msg="CreateContainer within sandbox \"a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac\" for container name:\"kube-proxy\"" Apr 20 20:03:30.548566 containerd[1640]: time="2026-04-20T20:03:30.547766281Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 20 20:03:32.024317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1404402686.mount: Deactivated successfully. Apr 20 20:03:32.050265 containerd[1640]: time="2026-04-20T20:03:32.049925496Z" level=info msg="Container 2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:03:32.079357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296051591.mount: Deactivated successfully. 
Apr 20 20:03:33.044002 containerd[1640]: time="2026-04-20T20:03:33.043101854Z" level=info msg="CreateContainer within sandbox \"a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac\" for name:\"kube-proxy\" returns container id \"2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec\"" Apr 20 20:03:33.444903 containerd[1640]: time="2026-04-20T20:03:33.415057522Z" level=info msg="StartContainer for \"2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec\"" Apr 20 20:03:33.840559 containerd[1640]: time="2026-04-20T20:03:33.836897303Z" level=info msg="connecting to shim 2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec" address="unix:///run/containerd/s/223d4d4ca7ec5e7f9a7f27730af4d6f7db5258356f7aa93966d9d398689c0adf" protocol=ttrpc version=3 Apr 20 20:03:34.935803 systemd[1]: Started cri-containerd-2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec.scope - libcontainer container 2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec. 
Apr 20 20:03:37.456023 containerd[1640]: time="2026-04-20T20:03:37.447265382Z" level=info msg="StartContainer for \"2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec\" returns successfully" Apr 20 20:03:38.571805 kubelet[3176]: E0420 20:03:38.570113 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:39.644419 kubelet[3176]: I0420 20:03:39.643978 3176 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-nklbf" podStartSLOduration=29.643917104 podStartE2EDuration="29.643917104s" podCreationTimestamp="2026-04-20 20:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:03:39.631268434 +0000 UTC m=+37.891163248" watchObservedRunningTime="2026-04-20 20:03:39.643917104 +0000 UTC m=+37.903811910" Apr 20 20:03:39.847544 kubelet[3176]: E0420 20:03:39.847443 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:44.378958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033685055.mount: Deactivated successfully. 
Apr 20 20:03:46.991596 containerd[1640]: time="2026-04-20T20:03:46.990955703Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=1, bytes read=2753185" Apr 20 20:03:47.016131 containerd[1640]: time="2026-04-20T20:03:47.015742553Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:03:47.219432 containerd[1640]: time="2026-04-20T20:03:47.219113538Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:03:48.248815 containerd[1640]: time="2026-04-20T20:03:48.248555239Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:03:48.533553 containerd[1640]: time="2026-04-20T20:03:48.530668090Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 17.98219065s" Apr 20 20:03:48.535195 containerd[1640]: time="2026-04-20T20:03:48.535104351Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 20 20:03:49.061049 containerd[1640]: time="2026-04-20T20:03:49.060607184Z" level=info msg="CreateContainer within sandbox \"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\" for container name:\"install-cni-plugin\"" Apr 20 
20:03:50.099394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037137262.mount: Deactivated successfully. Apr 20 20:03:50.230041 containerd[1640]: time="2026-04-20T20:03:50.229315617Z" level=info msg="Container 6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:03:50.986627 containerd[1640]: time="2026-04-20T20:03:50.985786922Z" level=info msg="CreateContainer within sandbox \"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\" for name:\"install-cni-plugin\" returns container id \"6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2\"" Apr 20 20:03:51.147513 containerd[1640]: time="2026-04-20T20:03:51.123455977Z" level=info msg="StartContainer for \"6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2\"" Apr 20 20:03:51.707665 containerd[1640]: time="2026-04-20T20:03:51.703179552Z" level=info msg="connecting to shim 6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2" address="unix:///run/containerd/s/395f96b1ac62520841319aa741d4c44932f1af8257ca464339194e4655dad80c" protocol=ttrpc version=3 Apr 20 20:03:53.579228 systemd[1]: Started cri-containerd-6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2.scope - libcontainer container 6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2. Apr 20 20:03:55.401572 systemd[1]: cri-containerd-6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2.scope: Deactivated successfully. 
Apr 20 20:03:55.589516 containerd[1640]: time="2026-04-20T20:03:55.588481009Z" level=info msg="received container exit event container_id:\"6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2\" id:\"6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2\" pid:3534 exited_at:{seconds:1776715435 nanos:455745228}" Apr 20 20:03:55.793666 containerd[1640]: time="2026-04-20T20:03:55.793436862Z" level=info msg="StartContainer for \"6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2\" returns successfully" Apr 20 20:03:56.836251 kubelet[3176]: E0420 20:03:56.834014 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:57.528161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2-rootfs.mount: Deactivated successfully. Apr 20 20:03:58.593571 kubelet[3176]: E0420 20:03:58.589899 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:03:59.129939 containerd[1640]: time="2026-04-20T20:03:59.129321748Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 20 20:04:32.980451 kubelet[3176]: E0420 20:04:32.952834 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:39.797566 kubelet[3176]: E0420 20:04:39.796586 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:40.833771 kubelet[3176]: E0420 20:04:40.831852 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:41.717052 kubelet[3176]: E0420 20:04:41.716762 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:51.586383 containerd[1640]: time="2026-04-20T20:04:51.577912289Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:04:51.634851 containerd[1640]: time="2026-04-20T20:04:51.626198823Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29344049" Apr 20 20:04:52.243778 containerd[1640]: time="2026-04-20T20:04:52.237968469Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:04:53.976328 containerd[1640]: time="2026-04-20T20:04:53.956211855Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 20:04:54.650904 containerd[1640]: time="2026-04-20T20:04:54.650530149Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 55.507623805s" Apr 20 20:04:54.650904 containerd[1640]: time="2026-04-20T20:04:54.650692562Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 20 20:04:55.475717 containerd[1640]: 
time="2026-04-20T20:04:55.473573323Z" level=info msg="CreateContainer within sandbox \"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\" for container name:\"install-cni\"" Apr 20 20:04:57.248634 containerd[1640]: time="2026-04-20T20:04:57.247857094Z" level=info msg="Container f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:04:57.411485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740724795.mount: Deactivated successfully. Apr 20 20:04:58.845633 containerd[1640]: time="2026-04-20T20:04:58.844764279Z" level=info msg="CreateContainer within sandbox \"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\" for name:\"install-cni\" returns container id \"f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c\"" Apr 20 20:04:58.967636 containerd[1640]: time="2026-04-20T20:04:58.966406954Z" level=info msg="StartContainer for \"f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c\"" Apr 20 20:04:59.572020 containerd[1640]: time="2026-04-20T20:04:59.566188816Z" level=info msg="connecting to shim f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c" address="unix:///run/containerd/s/395f96b1ac62520841319aa741d4c44932f1af8257ca464339194e4655dad80c" protocol=ttrpc version=3 Apr 20 20:05:00.250242 systemd[1]: Started cri-containerd-f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c.scope - libcontainer container f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c. Apr 20 20:05:00.864240 kubelet[3176]: E0420 20:05:00.860719 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s" Apr 20 20:05:02.262058 systemd[1]: cri-containerd-f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c.scope: Deactivated successfully. 
Apr 20 20:05:02.663168 containerd[1640]: time="2026-04-20T20:05:02.653444657Z" level=info msg="received container exit event container_id:\"f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c\" id:\"f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c\" pid:3619 exited_at:{seconds:1776715502 nanos:323194442}" Apr 20 20:05:03.239612 kubelet[3176]: I0420 20:05:03.238473 3176 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 20 20:05:03.395221 containerd[1640]: time="2026-04-20T20:05:03.372258202Z" level=info msg="StartContainer for \"f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c\" returns successfully" Apr 20 20:05:04.151590 kubelet[3176]: E0420 20:05:04.149875 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.321s" Apr 20 20:05:04.732453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c-rootfs.mount: Deactivated successfully. 
Apr 20 20:05:05.635282 kubelet[3176]: I0420 20:05:05.632713 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vfm7\" (UniqueName: \"kubernetes.io/projected/09febe2e-e5a3-4b12-8d73-bae93c61f3a7-kube-api-access-8vfm7\") pod \"coredns-7d764666f9-762rx\" (UID: \"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\") " pod="kube-system/coredns-7d764666f9-762rx" Apr 20 20:05:05.759615 kubelet[3176]: I0420 20:05:05.751795 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4d771d9-b307-485b-8003-d899e5c2927e-config-volume\") pod \"coredns-7d764666f9-kdhmz\" (UID: \"e4d771d9-b307-485b-8003-d899e5c2927e\") " pod="kube-system/coredns-7d764666f9-kdhmz" Apr 20 20:05:05.796582 kubelet[3176]: I0420 20:05:05.794687 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtls\" (UniqueName: \"kubernetes.io/projected/e4d771d9-b307-485b-8003-d899e5c2927e-kube-api-access-4vtls\") pod \"coredns-7d764666f9-kdhmz\" (UID: \"e4d771d9-b307-485b-8003-d899e5c2927e\") " pod="kube-system/coredns-7d764666f9-kdhmz" Apr 20 20:05:05.823456 kubelet[3176]: I0420 20:05:05.820865 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09febe2e-e5a3-4b12-8d73-bae93c61f3a7-config-volume\") pod \"coredns-7d764666f9-762rx\" (UID: \"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\") " pod="kube-system/coredns-7d764666f9-762rx" Apr 20 20:05:07.917889 kubelet[3176]: E0420 20:05:07.914111 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.193s" Apr 20 20:05:08.071063 systemd[1]: Created slice kubepods-burstable-pod09febe2e_e5a3_4b12_8d73_bae93c61f3a7.slice - libcontainer container 
kubepods-burstable-pod09febe2e_e5a3_4b12_8d73_bae93c61f3a7.slice. Apr 20 20:05:09.421766 systemd[1]: Created slice kubepods-burstable-pode4d771d9_b307_485b_8003_d899e5c2927e.slice - libcontainer container kubepods-burstable-pode4d771d9_b307_485b_8003_d899e5c2927e.slice. Apr 20 20:05:09.476097 kubelet[3176]: E0420 20:05:09.372036 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:09.584904 kubelet[3176]: E0420 20:05:09.390795 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:10.185159 kubelet[3176]: E0420 20:05:10.184757 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:10.432927 containerd[1640]: time="2026-04-20T20:05:10.432499615Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-762rx\" uid:\"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\" namespace:\"kube-system\"" Apr 20 20:05:10.552860 containerd[1640]: time="2026-04-20T20:05:10.550882638Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-kdhmz\" uid:\"e4d771d9-b307-485b-8003-d899e5c2927e\" namespace:\"kube-system\"" Apr 20 20:05:13.381585 containerd[1640]: time="2026-04-20T20:05:13.378743602Z" level=info msg="CreateContainer within sandbox \"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\" for container name:\"kube-flannel\"" Apr 20 20:05:13.954089 kubelet[3176]: E0420 20:05:13.953139 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.974s" Apr 20 20:05:14.451634 systemd[1]: run-netns-cni\x2d96ecd0e9\x2dc4d6\x2d08fe\x2da874\x2d962010e1bd36.mount: Deactivated successfully. 
Apr 20 20:05:15.161807 systemd[1]: run-netns-cni\x2d4863b487\x2d65f8\x2da2a2\x2d6be0\x2d35dda0354ab2.mount: Deactivated successfully. Apr 20 20:05:15.414099 kubelet[3176]: E0420 20:05:15.406734 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.422s" Apr 20 20:05:15.579755 containerd[1640]: time="2026-04-20T20:05:15.570271156Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-762rx\" uid:\"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"352196d56653030b8bc9f77b72b776cb3cd1d96f2c16c665cd83e68c6f3d36bd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:15.747027 containerd[1640]: time="2026-04-20T20:05:15.689289227Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-kdhmz\" uid:\"e4d771d9-b307-485b-8003-d899e5c2927e\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2440971c5ade8bfce36f2cdbf40919c4c586296a891a5ca0a54393034ef6ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:15.765946 kubelet[3176]: E0420 20:05:15.763585 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"352196d56653030b8bc9f77b72b776cb3cd1d96f2c16c665cd83e68c6f3d36bd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:15.773172 kubelet[3176]: E0420 20:05:15.767098 3176 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"352196d56653030b8bc9f77b72b776cb3cd1d96f2c16c665cd83e68c6f3d36bd\": plugin 
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-762rx" Apr 20 20:05:15.773172 kubelet[3176]: E0420 20:05:15.767329 3176 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"352196d56653030b8bc9f77b72b776cb3cd1d96f2c16c665cd83e68c6f3d36bd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-762rx" Apr 20 20:05:15.794005 kubelet[3176]: E0420 20:05:15.792684 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-762rx_kube-system(09febe2e-e5a3-4b12-8d73-bae93c61f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-762rx_kube-system(09febe2e-e5a3-4b12-8d73-bae93c61f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"352196d56653030b8bc9f77b72b776cb3cd1d96f2c16c665cd83e68c6f3d36bd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-762rx" podUID="09febe2e-e5a3-4b12-8d73-bae93c61f3a7" Apr 20 20:05:15.988307 kubelet[3176]: E0420 20:05:15.927139 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2440971c5ade8bfce36f2cdbf40919c4c586296a891a5ca0a54393034ef6ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:16.079123 kubelet[3176]: E0420 20:05:16.066062 3176 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0f2440971c5ade8bfce36f2cdbf40919c4c586296a891a5ca0a54393034ef6ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kdhmz" Apr 20 20:05:16.146932 kubelet[3176]: E0420 20:05:16.140269 3176 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2440971c5ade8bfce36f2cdbf40919c4c586296a891a5ca0a54393034ef6ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kdhmz" Apr 20 20:05:16.176788 kubelet[3176]: E0420 20:05:16.173582 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-kdhmz_kube-system(e4d771d9-b307-485b-8003-d899e5c2927e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-kdhmz_kube-system(e4d771d9-b307-485b-8003-d899e5c2927e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f2440971c5ade8bfce36f2cdbf40919c4c586296a891a5ca0a54393034ef6ad\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-kdhmz" podUID="e4d771d9-b307-485b-8003-d899e5c2927e" Apr 20 20:05:17.230928 kubelet[3176]: E0420 20:05:17.219134 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.566s" Apr 20 20:05:17.368110 containerd[1640]: time="2026-04-20T20:05:17.347531299Z" level=info msg="Container e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:05:18.772627 containerd[1640]: time="2026-04-20T20:05:18.770501081Z" level=info msg="CreateContainer within sandbox 
\"2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3\" for name:\"kube-flannel\" returns container id \"e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080\"" Apr 20 20:05:18.834768 containerd[1640]: time="2026-04-20T20:05:18.834326383Z" level=info msg="StartContainer for \"e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080\"" Apr 20 20:05:19.066138 containerd[1640]: time="2026-04-20T20:05:19.063751767Z" level=info msg="connecting to shim e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080" address="unix:///run/containerd/s/395f96b1ac62520841319aa741d4c44932f1af8257ca464339194e4655dad80c" protocol=ttrpc version=3 Apr 20 20:05:20.592918 systemd[1]: Started cri-containerd-e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080.scope - libcontainer container e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080. Apr 20 20:05:20.956215 kubelet[3176]: E0420 20:05:20.951233 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.253s" Apr 20 20:05:25.134782 kubelet[3176]: E0420 20:05:25.126324 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.454s" Apr 20 20:05:25.273503 containerd[1640]: time="2026-04-20T20:05:25.273102442Z" level=info msg="StartContainer for \"e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080\" returns successfully" Apr 20 20:05:28.078770 kubelet[3176]: E0420 20:05:28.061631 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.371s" Apr 20 20:05:28.340242 kubelet[3176]: E0420 20:05:28.336781 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:29.678690 kubelet[3176]: E0420 20:05:29.671934 3176 kubelet.go:2691] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="1.579s" Apr 20 20:05:31.596011 kubelet[3176]: E0420 20:05:31.594474 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.81s" Apr 20 20:05:31.958875 kubelet[3176]: E0420 20:05:31.892392 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:31.968102 kubelet[3176]: E0420 20:05:31.960862 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:32.035729 kubelet[3176]: E0420 20:05:32.035067 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:32.239929 systemd-networkd[1423]: flannel.1: Link UP Apr 20 20:05:32.240677 systemd-networkd[1423]: flannel.1: Gained carrier Apr 20 20:05:32.267860 containerd[1640]: time="2026-04-20T20:05:32.264071745Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-kdhmz\" uid:\"e4d771d9-b307-485b-8003-d899e5c2927e\" namespace:\"kube-system\"" Apr 20 20:05:32.396458 containerd[1640]: time="2026-04-20T20:05:32.394033089Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-762rx\" uid:\"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\" namespace:\"kube-system\"" Apr 20 20:05:34.078265 systemd-networkd[1423]: flannel.1: Gained IPv6LL Apr 20 20:05:34.948245 systemd[1]: run-netns-cni\x2de19b5e74\x2d5837\x2d4920\x2d4c74\x2dbe08e63285a2.mount: Deactivated successfully. 
Apr 20 20:05:35.221099 containerd[1640]: time="2026-04-20T20:05:35.209262762Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-762rx\" uid:\"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c1036907b56c0f52c221245924973f368801f490b8eb4e059c07d7b5d8c9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:35.234444 containerd[1640]: time="2026-04-20T20:05:35.227968453Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-kdhmz\" uid:\"e4d771d9-b307-485b-8003-d899e5c2927e\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f198b20b6d00a4f43928f10e90ae1ecb38e58f8327b669a3413ab1eec97013\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:35.236382 kubelet[3176]: E0420 20:05:35.233234 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c1036907b56c0f52c221245924973f368801f490b8eb4e059c07d7b5d8c9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:35.238400 systemd[1]: run-netns-cni\x2d6a018cc1\x2de0fc\x2d04d8\x2dec84\x2dc0f8a9f8922c.mount: Deactivated successfully. 
Apr 20 20:05:35.258150 kubelet[3176]: E0420 20:05:35.238950 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f198b20b6d00a4f43928f10e90ae1ecb38e58f8327b669a3413ab1eec97013\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 20:05:35.258150 kubelet[3176]: E0420 20:05:35.257182 3176 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c1036907b56c0f52c221245924973f368801f490b8eb4e059c07d7b5d8c9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-762rx" Apr 20 20:05:35.258150 kubelet[3176]: E0420 20:05:35.257709 3176 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f198b20b6d00a4f43928f10e90ae1ecb38e58f8327b669a3413ab1eec97013\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kdhmz" Apr 20 20:05:35.258150 kubelet[3176]: E0420 20:05:35.258164 3176 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c1036907b56c0f52c221245924973f368801f490b8eb4e059c07d7b5d8c9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-762rx" Apr 20 20:05:35.259048 kubelet[3176]: E0420 20:05:35.258495 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-762rx_kube-system(09febe2e-e5a3-4b12-8d73-bae93c61f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"coredns-7d764666f9-762rx_kube-system(09febe2e-e5a3-4b12-8d73-bae93c61f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73c1036907b56c0f52c221245924973f368801f490b8eb4e059c07d7b5d8c9ed\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-762rx" podUID="09febe2e-e5a3-4b12-8d73-bae93c61f3a7" Apr 20 20:05:35.262009 kubelet[3176]: E0420 20:05:35.258429 3176 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f198b20b6d00a4f43928f10e90ae1ecb38e58f8327b669a3413ab1eec97013\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-kdhmz" Apr 20 20:05:35.262009 kubelet[3176]: E0420 20:05:35.259659 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-kdhmz_kube-system(e4d771d9-b307-485b-8003-d899e5c2927e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-kdhmz_kube-system(e4d771d9-b307-485b-8003-d899e5c2927e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32f198b20b6d00a4f43928f10e90ae1ecb38e58f8327b669a3413ab1eec97013\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-kdhmz" podUID="e4d771d9-b307-485b-8003-d899e5c2927e" Apr 20 20:05:35.699474 kubelet[3176]: E0420 20:05:35.698075 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:46.786079 kubelet[3176]: E0420 20:05:46.785581 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:46.831543 containerd[1640]: time="2026-04-20T20:05:46.823251401Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-762rx\" uid:\"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\" namespace:\"kube-system\"" Apr 20 20:05:47.191428 systemd-networkd[1423]: cni0: Link UP Apr 20 20:05:47.191484 systemd-networkd[1423]: cni0: Gained carrier Apr 20 20:05:47.248960 kernel: cni0: port 1(veth78ac5c76) entered blocking state Apr 20 20:05:47.260478 kernel: cni0: port 1(veth78ac5c76) entered disabled state Apr 20 20:05:47.260927 kernel: veth78ac5c76: entered allmulticast mode Apr 20 20:05:47.260956 kernel: veth78ac5c76: entered promiscuous mode Apr 20 20:05:47.274433 systemd-networkd[1423]: veth78ac5c76: Link UP Apr 20 20:05:47.404781 systemd-networkd[1423]: cni0: Lost carrier Apr 20 20:05:47.857826 kernel: cni0: port 1(veth78ac5c76) entered blocking state Apr 20 20:05:47.861492 kernel: cni0: port 1(veth78ac5c76) entered forwarding state Apr 20 20:05:47.862810 systemd-networkd[1423]: veth78ac5c76: Gained carrier Apr 20 20:05:47.877022 systemd-networkd[1423]: cni0: Gained carrier Apr 20 20:05:47.950071 containerd[1640]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a2950), "name":"cbr0", "type":"bridge"} Apr 20 20:05:47.950071 containerd[1640]: delegateAdd: netconf sent to delegate plugin: Apr 20 20:05:48.732969 systemd-networkd[1423]: cni0: Gained IPv6LL Apr 20 20:05:48.737293 kubelet[3176]: 
E0420 20:05:48.736538 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:48.967456 containerd[1640]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-20T20:05:48.874721079Z" level=info msg="connecting to shim 8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2" address="unix:///run/containerd/s/a0a2d11b85a1441792ab7acf584321417f5bab5a30bf2acfaee94bc1e258571e" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:05:48.980940 containerd[1640]: time="2026-04-20T20:05:48.980849293Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-kdhmz\" uid:\"e4d771d9-b307-485b-8003-d899e5c2927e\" namespace:\"kube-system\"" Apr 20 20:05:49.153947 systemd-networkd[1423]: veth78ac5c76: Gained IPv6LL Apr 20 20:05:49.629152 systemd-networkd[1423]: vethc642ae6a: Link UP Apr 20 20:05:49.655300 kernel: cni0: port 2(vethc642ae6a) entered blocking state Apr 20 20:05:49.656874 kernel: cni0: port 2(vethc642ae6a) entered disabled state Apr 20 20:05:49.667246 kernel: vethc642ae6a: entered allmulticast mode Apr 20 20:05:49.697889 kernel: vethc642ae6a: entered promiscuous mode Apr 20 20:05:50.088533 kernel: cni0: port 2(vethc642ae6a) entered blocking state Apr 20 20:05:50.088299 systemd-networkd[1423]: vethc642ae6a: Gained carrier Apr 20 20:05:50.105938 kernel: cni0: port 2(vethc642ae6a) entered forwarding state Apr 20 20:05:50.173604 containerd[1640]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, 
"routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Apr 20 20:05:50.173604 containerd[1640]: delegateAdd: netconf sent to delegate plugin: Apr 20 20:05:50.374583 systemd[1]: Started cri-containerd-8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2.scope - libcontainer container 8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2. Apr 20 20:05:51.058775 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 20:05:51.802019 systemd-networkd[1423]: vethc642ae6a: Gained IPv6LL Apr 20 20:05:52.299431 containerd[1640]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-20T20:05:52.298863812Z" level=info msg="connecting to shim 5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59" address="unix:///run/containerd/s/d44e80e404b7f69f2819a53a202ed9500167cd4ac31df696c4c7d7691b2b41ff" namespace=k8s.io protocol=ttrpc version=3 Apr 20 20:05:52.724664 kubelet[3176]: E0420 20:05:52.720700 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:53.092730 containerd[1640]: time="2026-04-20T20:05:53.089361060Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-762rx\" uid:\"09febe2e-e5a3-4b12-8d73-bae93c61f3a7\" namespace:\"kube-system\" returns sandbox id \"8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2\"" Apr 20 20:05:53.239853 
kubelet[3176]: E0420 20:05:53.239273 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:53.785588 containerd[1640]: time="2026-04-20T20:05:53.781201520Z" level=info msg="CreateContainer within sandbox \"8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2\" for container name:\"coredns\"" Apr 20 20:05:53.839170 systemd[1]: Started cri-containerd-5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59.scope - libcontainer container 5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59. Apr 20 20:05:54.644294 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 20:05:54.647562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262538868.mount: Deactivated successfully. Apr 20 20:05:54.759281 containerd[1640]: time="2026-04-20T20:05:54.754028242Z" level=info msg="Container 6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:05:54.852279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249052464.mount: Deactivated successfully. 
Apr 20 20:05:55.769018 containerd[1640]: time="2026-04-20T20:05:55.761596635Z" level=info msg="CreateContainer within sandbox \"8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2\" for name:\"coredns\" returns container id \"6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8\"" Apr 20 20:05:55.898884 containerd[1640]: time="2026-04-20T20:05:55.896818087Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-kdhmz\" uid:\"e4d771d9-b307-485b-8003-d899e5c2927e\" namespace:\"kube-system\" returns sandbox id \"5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59\"" Apr 20 20:05:55.947895 containerd[1640]: time="2026-04-20T20:05:55.898861714Z" level=info msg="StartContainer for \"6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8\"" Apr 20 20:05:56.141174 kubelet[3176]: E0420 20:05:56.138741 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:05:56.216821 containerd[1640]: time="2026-04-20T20:05:56.212885454Z" level=info msg="connecting to shim 6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8" address="unix:///run/containerd/s/a0a2d11b85a1441792ab7acf584321417f5bab5a30bf2acfaee94bc1e258571e" protocol=ttrpc version=3 Apr 20 20:05:56.576839 containerd[1640]: time="2026-04-20T20:05:56.570048335Z" level=info msg="CreateContainer within sandbox \"5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59\" for container name:\"coredns\"" Apr 20 20:05:57.617322 systemd[1]: Started cri-containerd-6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8.scope - libcontainer container 6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8. Apr 20 20:05:58.361964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921767945.mount: Deactivated successfully. 
Apr 20 20:05:58.560491 containerd[1640]: time="2026-04-20T20:05:58.547579796Z" level=info msg="Container db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:05:58.561872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234179320.mount: Deactivated successfully. Apr 20 20:05:59.344304 containerd[1640]: time="2026-04-20T20:05:59.343993101Z" level=info msg="CreateContainer within sandbox \"5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59\" for name:\"coredns\" returns container id \"db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a\"" Apr 20 20:05:59.425485 containerd[1640]: time="2026-04-20T20:05:59.424186529Z" level=info msg="StartContainer for \"db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a\"" Apr 20 20:05:59.727267 containerd[1640]: time="2026-04-20T20:05:59.726922592Z" level=info msg="connecting to shim db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a" address="unix:///run/containerd/s/d44e80e404b7f69f2819a53a202ed9500167cd4ac31df696c4c7d7691b2b41ff" protocol=ttrpc version=3 Apr 20 20:06:00.315157 containerd[1640]: time="2026-04-20T20:06:00.314946503Z" level=info msg="StartContainer for \"6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8\" returns successfully" Apr 20 20:06:00.363586 systemd[1]: Started cri-containerd-db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a.scope - libcontainer container db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a. 
Apr 20 20:06:03.011735 kubelet[3176]: E0420 20:06:03.010278 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:03.790663 kubelet[3176]: I0420 20:06:03.787536 3176 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dkx62" podStartSLOduration=67.720655669 podStartE2EDuration="2m47.785934881s" podCreationTimestamp="2026-04-20 20:03:16 +0000 UTC" firstStartedPulling="2026-04-20 20:03:30.252528708 +0000 UTC m=+28.512423524" lastFinishedPulling="2026-04-20 20:05:10.31780794 +0000 UTC m=+128.577702736" observedRunningTime="2026-04-20 20:05:30.077065769 +0000 UTC m=+148.336960582" watchObservedRunningTime="2026-04-20 20:06:03.785934881 +0000 UTC m=+182.045829692" Apr 20 20:06:03.857992 kubelet[3176]: I0420 20:06:03.848021 3176 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-762rx" podStartSLOduration=172.846931278 podStartE2EDuration="2m52.846931278s" podCreationTimestamp="2026-04-20 20:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:06:03.468602409 +0000 UTC m=+181.728497218" watchObservedRunningTime="2026-04-20 20:06:03.846931278 +0000 UTC m=+182.106826079" Apr 20 20:06:05.424204 containerd[1640]: time="2026-04-20T20:06:05.423605824Z" level=info msg="StartContainer for \"db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a\" returns successfully" Apr 20 20:06:05.442576 kubelet[3176]: E0420 20:06:05.424519 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.683s" Apr 20 20:06:06.677577 kubelet[3176]: E0420 20:06:06.675029 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:06.745735 kubelet[3176]: E0420 20:06:06.745005 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:06.989250 kubelet[3176]: E0420 20:06:06.988979 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:07.764995 kubelet[3176]: E0420 20:06:07.751944 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:08.312045 kubelet[3176]: I0420 20:06:08.307741 3176 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kdhmz" podStartSLOduration=177.307286481 podStartE2EDuration="2m57.307286481s" podCreationTimestamp="2026-04-20 20:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 20:06:07.581277434 +0000 UTC m=+185.841172243" watchObservedRunningTime="2026-04-20 20:06:08.307286481 +0000 UTC m=+186.567181281" Apr 20 20:06:08.642017 kubelet[3176]: E0420 20:06:08.637111 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:09.724184 kubelet[3176]: E0420 20:06:09.723513 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:50.712117 kubelet[3176]: E0420 20:06:50.710892 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:55.770222 kubelet[3176]: E0420 20:06:55.767416 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:06.031679 kubelet[3176]: E0420 20:07:06.031295 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:12.842027 kubelet[3176]: E0420 20:07:12.839446 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.068s" Apr 20 20:07:19.107534 systemd[1]: Started sshd@7-4098-10.0.0.6:22-10.0.0.1:47018.service - OpenSSH per-connection server daemon (10.0.0.1:47018). Apr 20 20:07:19.591691 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 47018 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:07:19.603144 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:07:19.683661 kubelet[3176]: E0420 20:07:19.682834 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:19.749312 systemd-logind[1609]: New session '9' of user 'core' with class 'user' and type 'tty'. Apr 20 20:07:19.835891 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 20 20:07:20.273802 containerd[1640]: time="2026-04-20T20:07:20.260320829Z" level=info msg="container event discarded" container=2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b type=CONTAINER_CREATED_EVENT Apr 20 20:07:20.299282 containerd[1640]: time="2026-04-20T20:07:20.295978215Z" level=info msg="container event discarded" container=2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b type=CONTAINER_STARTED_EVENT Apr 20 20:07:20.316263 containerd[1640]: time="2026-04-20T20:07:20.314621988Z" level=info msg="container event discarded" container=64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1 type=CONTAINER_CREATED_EVENT Apr 20 20:07:20.316263 containerd[1640]: time="2026-04-20T20:07:20.316194705Z" level=info msg="container event discarded" container=64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1 type=CONTAINER_STARTED_EVENT Apr 20 20:07:20.393776 containerd[1640]: time="2026-04-20T20:07:20.386892140Z" level=info msg="container event discarded" container=67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a type=CONTAINER_CREATED_EVENT Apr 20 20:07:20.401120 containerd[1640]: time="2026-04-20T20:07:20.397584604Z" level=info msg="container event discarded" container=67d023cd84b433eae4beb7d64a8aba55517d82d1ec198026b6901dd9c728d04a type=CONTAINER_STARTED_EVENT Apr 20 20:07:20.635263 sshd[4524]: Connection closed by 10.0.0.1 port 47018 Apr 20 20:07:20.635304 sshd-session[4510]: pam_unix(sshd:session): session closed for user core Apr 20 20:07:20.658002 systemd[1]: sshd@7-4098-10.0.0.6:22-10.0.0.1:47018.service: Deactivated successfully. Apr 20 20:07:20.686624 systemd[1]: session-9.scope: Deactivated successfully. Apr 20 20:07:20.709199 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit. Apr 20 20:07:20.727557 systemd-logind[1609]: Removed session 9. 
Apr 20 20:07:21.480172 containerd[1640]: time="2026-04-20T20:07:21.477503956Z" level=info msg="container event discarded" container=764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259 type=CONTAINER_CREATED_EVENT Apr 20 20:07:21.556702 containerd[1640]: time="2026-04-20T20:07:21.554578372Z" level=info msg="container event discarded" container=92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f type=CONTAINER_CREATED_EVENT Apr 20 20:07:21.568686 containerd[1640]: time="2026-04-20T20:07:21.564790498Z" level=info msg="container event discarded" container=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238 type=CONTAINER_CREATED_EVENT Apr 20 20:07:23.170541 containerd[1640]: time="2026-04-20T20:07:23.166069569Z" level=info msg="container event discarded" container=764cf012163dbdccf61655f37fbd0201763cea2f5d053b212104eb3cf93cf259 type=CONTAINER_STARTED_EVENT Apr 20 20:07:23.192753 containerd[1640]: time="2026-04-20T20:07:23.171440623Z" level=info msg="container event discarded" container=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238 type=CONTAINER_STARTED_EVENT Apr 20 20:07:23.194157 containerd[1640]: time="2026-04-20T20:07:23.192330170Z" level=info msg="container event discarded" container=92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f type=CONTAINER_STARTED_EVENT Apr 20 20:07:26.052209 systemd[1]: Started sshd@8-12291-10.0.0.6:22-10.0.0.1:51542.service - OpenSSH per-connection server daemon (10.0.0.1:51542). Apr 20 20:07:26.485624 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 51542 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:07:26.498052 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:07:27.127810 systemd-logind[1609]: New session '10' of user 'core' with class 'user' and type 'tty'. Apr 20 20:07:27.213419 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 20 20:07:28.873023 kubelet[3176]: E0420 20:07:28.870042 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:29.890949 sshd[4568]: Connection closed by 10.0.0.1 port 51542 Apr 20 20:07:29.932522 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Apr 20 20:07:30.106443 systemd[1]: sshd@8-12291-10.0.0.6:22-10.0.0.1:51542.service: Deactivated successfully. Apr 20 20:07:30.155331 systemd[1]: session-10.scope: Deactivated successfully. Apr 20 20:07:30.162817 systemd[1]: session-10.scope: Consumed 2.169s CPU time, 14.1M memory peak. Apr 20 20:07:30.186467 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit. Apr 20 20:07:30.251965 systemd-logind[1609]: Removed session 10. Apr 20 20:07:32.697215 kubelet[3176]: E0420 20:07:32.695875 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:33.747456 kubelet[3176]: E0420 20:07:33.746877 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:35.594456 systemd[1]: Started sshd@9-12292-10.0.0.6:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544). Apr 20 20:07:36.249189 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:07:36.291327 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:07:36.495112 systemd-logind[1609]: New session '11' of user 'core' with class 'user' and type 'tty'. Apr 20 20:07:36.532172 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 20 20:07:38.661726 sshd[4621]: Connection closed by 10.0.0.1 port 51544 Apr 20 20:07:38.663130 sshd-session[4611]: pam_unix(sshd:session): session closed for user core Apr 20 20:07:38.689189 systemd[1]: sshd@9-12292-10.0.0.6:22-10.0.0.1:51544.service: Deactivated successfully. Apr 20 20:07:38.707987 systemd[1]: session-11.scope: Deactivated successfully. Apr 20 20:07:38.711866 systemd[1]: session-11.scope: Consumed 1.899s CPU time, 14.2M memory peak. Apr 20 20:07:38.722283 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit. Apr 20 20:07:38.875729 systemd-logind[1609]: Removed session 11. Apr 20 20:07:44.382415 systemd[1]: Started sshd@10-4099-10.0.0.6:22-10.0.0.1:48042.service - OpenSSH per-connection server daemon (10.0.0.1:48042). Apr 20 20:07:45.679164 sshd[4664]: Accepted publickey for core from 10.0.0.1 port 48042 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:07:45.816805 sshd-session[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:07:46.873240 systemd-logind[1609]: New session '12' of user 'core' with class 'user' and type 'tty'. Apr 20 20:07:47.088819 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 20 20:07:47.577018 kubelet[3176]: E0420 20:07:47.576686 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.926s" Apr 20 20:07:53.498164 kubelet[3176]: E0420 20:07:53.496826 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.837s" Apr 20 20:07:54.588955 sshd[4677]: Connection closed by 10.0.0.1 port 48042 Apr 20 20:07:54.679397 sshd-session[4664]: pam_unix(sshd:session): session closed for user core Apr 20 20:07:55.703673 systemd[1]: sshd@10-4099-10.0.0.6:22-10.0.0.1:48042.service: Deactivated successfully. Apr 20 20:07:56.011746 systemd[1]: session-12.scope: Deactivated successfully. 
Apr 20 20:07:56.060306 systemd[1]: session-12.scope: Consumed 4.264s CPU time, 14.4M memory peak.
Apr 20 20:07:56.381021 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit.
Apr 20 20:07:56.577138 systemd[1]: Started sshd@11-2-10.0.0.6:22-10.0.0.1:40438.service - OpenSSH per-connection server daemon (10.0.0.1:40438).
Apr 20 20:07:57.260683 systemd-logind[1609]: Removed session 12.
Apr 20 20:07:58.328591 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 40438 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:07:58.376631 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:07:58.975125 systemd-logind[1609]: New session '13' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:07:59.365255 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 20 20:08:02.593925 kubelet[3176]: E0420 20:08:02.593213 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.079s"
Apr 20 20:08:03.260052 kubelet[3176]: E0420 20:08:03.256164 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:08:04.516959 kubelet[3176]: E0420 20:08:04.515664 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.756s"
Apr 20 20:08:07.046856 kubelet[3176]: E0420 20:08:07.040067 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.24s"
Apr 20 20:08:10.536701 kubelet[3176]: E0420 20:08:10.354727 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.228s"
Apr 20 20:08:15.866067 sshd[4735]: Connection closed by 10.0.0.1 port 40438
Apr 20 20:08:15.860224 sshd-session[4716]: pam_unix(sshd:session): session closed for user core
Apr 20 20:08:16.793243 systemd[1]: sshd@11-2-10.0.0.6:22-10.0.0.1:40438.service: Deactivated successfully.
Apr 20 20:08:16.949094 systemd[1]: session-13.scope: Deactivated successfully.
Apr 20 20:08:16.958325 systemd[1]: session-13.scope: Consumed 11.098s CPU time, 19.7M memory peak.
Apr 20 20:08:17.642286 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit.
Apr 20 20:08:17.914297 systemd[1]: Started sshd@12-8196-10.0.0.6:22-10.0.0.1:60828.service - OpenSSH per-connection server daemon (10.0.0.1:60828).
Apr 20 20:08:18.644168 systemd-logind[1609]: Removed session 13.
Apr 20 20:08:23.360822 sshd[4782]: Accepted publickey for core from 10.0.0.1 port 60828 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:08:23.676031 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:08:24.844114 systemd-logind[1609]: New session '14' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:08:25.453775 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 20 20:08:28.022651 kubelet[3176]: E0420 20:08:28.022028 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.489s"
Apr 20 20:08:29.529758 containerd[1640]: time="2026-04-20T20:08:29.434977802Z" level=info msg="container event discarded" container=a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac type=CONTAINER_CREATED_EVENT
Apr 20 20:08:29.805155 containerd[1640]: time="2026-04-20T20:08:29.657218723Z" level=info msg="container event discarded" container=a98f4ca1ec64d833b150414af6eb362bded137a5d500cf793c3032fb52e956ac type=CONTAINER_STARTED_EVENT
Apr 20 20:08:29.967058 containerd[1640]: time="2026-04-20T20:08:29.965870857Z" level=info msg="container event discarded" container=2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3 type=CONTAINER_CREATED_EVENT
Apr 20 20:08:30.081068 containerd[1640]: time="2026-04-20T20:08:30.000270160Z" level=info msg="container event discarded" container=2eac08b56e67d9ac9a22f638155f8721ef4efe3fdee1c03b0bcc56c372a6daf3 type=CONTAINER_STARTED_EVENT
Apr 20 20:08:30.395314 kubelet[3176]: E0420 20:08:30.379998 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:08:31.190744 kubelet[3176]: E0420 20:08:31.173018 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:08:32.110312 kubelet[3176]: E0420 20:08:31.869760 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:08:32.991002 containerd[1640]: time="2026-04-20T20:08:32.942315735Z" level=info msg="container event discarded" container=2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec type=CONTAINER_CREATED_EVENT
Apr 20 20:08:33.646723 sshd[4803]: Connection closed by 10.0.0.1 port 60828
Apr 20 20:08:33.658752 sshd-session[4782]: pam_unix(sshd:session): session closed for user core
Apr 20 20:08:34.156216 systemd[1]: sshd@12-8196-10.0.0.6:22-10.0.0.1:60828.service: Deactivated successfully.
Apr 20 20:08:34.255328 systemd[1]: sshd@12-8196-10.0.0.6:22-10.0.0.1:60828.service: Consumed 1.495s CPU time, 4.1M memory peak.
Apr 20 20:08:34.460427 systemd[1]: session-14.scope: Deactivated successfully.
Apr 20 20:08:34.585246 systemd[1]: session-14.scope: Consumed 4.544s CPU time, 15.3M memory peak.
Apr 20 20:08:34.996281 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit.
Apr 20 20:08:35.453977 systemd-logind[1609]: Removed session 14.
Apr 20 20:08:37.081559 containerd[1640]: time="2026-04-20T20:08:37.079072025Z" level=info msg="container event discarded" container=2364329ba0f91f4ba879de803665b1a8365ed986c63db80ab17b8460a8fbffec type=CONTAINER_STARTED_EVENT
Apr 20 20:08:39.348713 systemd[1]: Started sshd@13-3-10.0.0.6:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410).
Apr 20 20:08:41.944888 kubelet[3176]: E0420 20:08:41.930040 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.817s"
Apr 20 20:08:43.522992 sshd[4843]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:08:43.726953 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:08:44.834272 systemd-logind[1609]: New session '15' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:08:45.100598 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 20 20:08:51.117015 containerd[1640]: time="2026-04-20T20:08:51.112850037Z" level=info msg="container event discarded" container=6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2 type=CONTAINER_CREATED_EVENT
Apr 20 20:08:55.882800 containerd[1640]: time="2026-04-20T20:08:55.764282637Z" level=info msg="container event discarded" container=6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2 type=CONTAINER_STARTED_EVENT
Apr 20 20:08:57.948263 containerd[1640]: time="2026-04-20T20:08:57.937837890Z" level=info msg="container event discarded" container=6bae3f47a841ab12c56aae9a6d8492628a5f064046727466cd4947ba6ba2efd2 type=CONTAINER_STOPPED_EVENT
Apr 20 20:08:59.724846 sshd[4866]: Connection closed by 10.0.0.1 port 40410
Apr 20 20:08:59.731834 sshd-session[4843]: pam_unix(sshd:session): session closed for user core
Apr 20 20:09:00.129702 systemd[1]: sshd@13-3-10.0.0.6:22-10.0.0.1:40410.service: Deactivated successfully.
Apr 20 20:09:00.166782 systemd[1]: sshd@13-3-10.0.0.6:22-10.0.0.1:40410.service: Consumed 1.291s CPU time, 4.1M memory peak.
Apr 20 20:09:00.547366 systemd[1]: session-15.scope: Deactivated successfully.
Apr 20 20:09:00.587817 systemd[1]: session-15.scope: Consumed 8.158s CPU time, 14.2M memory peak.
Apr 20 20:09:00.902867 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit.
Apr 20 20:09:01.300035 systemd-logind[1609]: Removed session 15.
Apr 20 20:09:01.694598 kubelet[3176]: E0420 20:09:01.686159 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.418s"
Apr 20 20:09:05.455565 systemd[1]: Started sshd@14-8197-10.0.0.6:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478).
Apr 20 20:09:06.895744 kubelet[3176]: E0420 20:09:06.684093 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.776s"
Apr 20 20:09:08.476136 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:09:08.642811 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:09:10.571783 systemd-logind[1609]: New session '16' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:09:10.978057 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 20 20:09:11.164809 kubelet[3176]: E0420 20:09:11.146300 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:12.140080 kubelet[3176]: E0420 20:09:12.130273 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:15.133954 kubelet[3176]: E0420 20:09:15.115830 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:19.244889 sshd[4924]: Connection closed by 10.0.0.1 port 58478
Apr 20 20:09:19.260692 sshd-session[4909]: pam_unix(sshd:session): session closed for user core
Apr 20 20:09:19.594420 systemd[1]: sshd@14-8197-10.0.0.6:22-10.0.0.1:58478.service: Deactivated successfully.
Apr 20 20:09:19.692028 systemd[1]: sshd@14-8197-10.0.0.6:22-10.0.0.1:58478.service: Consumed 1.061s CPU time, 4.4M memory peak.
Apr 20 20:09:19.969400 systemd[1]: session-16.scope: Deactivated successfully.
Apr 20 20:09:19.984461 systemd[1]: session-16.scope: Consumed 4.882s CPU time, 13.7M memory peak.
Apr 20 20:09:20.194095 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit.
Apr 20 20:09:20.564078 systemd-logind[1609]: Removed session 16.
Apr 20 20:09:20.696757 kubelet[3176]: E0420 20:09:20.696218 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.114s"
Apr 20 20:09:23.963241 kubelet[3176]: E0420 20:09:23.960876 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.205s"
Apr 20 20:09:25.006779 systemd[1]: Started sshd@15-8198-10.0.0.6:22-10.0.0.1:49890.service - OpenSSH per-connection server daemon (10.0.0.1:49890).
Apr 20 20:09:26.266169 kubelet[3176]: E0420 20:09:26.248895 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.279s"
Apr 20 20:09:27.154577 kubelet[3176]: E0420 20:09:27.153605 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:27.489560 kubelet[3176]: E0420 20:09:27.370715 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:28.595278 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 49890 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:09:28.864779 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:09:30.459240 systemd-logind[1609]: New session '17' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:09:30.814852 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 20 20:09:42.200861 kubelet[3176]: E0420 20:09:42.197735 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.478s"
Apr 20 20:09:42.680255 kubelet[3176]: E0420 20:09:42.677952 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:44.071692 sshd[4983]: Connection closed by 10.0.0.1 port 49890
Apr 20 20:09:44.090130 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Apr 20 20:09:44.394779 systemd[1]: sshd@15-8198-10.0.0.6:22-10.0.0.1:49890.service: Deactivated successfully.
Apr 20 20:09:44.422495 systemd[1]: sshd@15-8198-10.0.0.6:22-10.0.0.1:49890.service: Consumed 1.077s CPU time, 4.4M memory peak.
Apr 20 20:09:44.729774 systemd[1]: session-17.scope: Deactivated successfully.
Apr 20 20:09:44.803255 systemd[1]: session-17.scope: Consumed 7.348s CPU time, 14.3M memory peak.
Apr 20 20:09:44.945954 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit.
Apr 20 20:09:45.441029 systemd-logind[1609]: Removed session 17.
Apr 20 20:09:50.128736 systemd[1]: Started sshd@16-8199-10.0.0.6:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662).
Apr 20 20:09:51.181868 kubelet[3176]: E0420 20:09:51.178066 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.573s"
Apr 20 20:09:53.009197 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:09:53.159923 kubelet[3176]: E0420 20:09:52.648980 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:53.289904 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:09:53.895305 kubelet[3176]: E0420 20:09:53.850277 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:09:54.396095 systemd-logind[1609]: New session '18' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:09:54.696254 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 20 20:09:58.568289 containerd[1640]: time="2026-04-20T20:09:58.420330249Z" level=info msg="container event discarded" container=f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c type=CONTAINER_CREATED_EVENT
Apr 20 20:10:02.407089 kubelet[3176]: E0420 20:10:02.402080 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.162s"
Apr 20 20:10:02.995971 containerd[1640]: time="2026-04-20T20:10:02.817137541Z" level=info msg="container event discarded" container=f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c type=CONTAINER_STARTED_EVENT
Apr 20 20:10:05.569331 containerd[1640]: time="2026-04-20T20:10:05.564317167Z" level=info msg="container event discarded" container=f8dd3fbde30e578cb32ff2078ae0d18d8f16f7c4d8175e0aba5af017c4637e7c type=CONTAINER_STOPPED_EVENT
Apr 20 20:10:06.160504 sshd[5063]: Connection closed by 10.0.0.1 port 34662
Apr 20 20:10:06.172635 sshd-session[5043]: pam_unix(sshd:session): session closed for user core
Apr 20 20:10:06.344608 systemd[1]: sshd@16-8199-10.0.0.6:22-10.0.0.1:34662.service: Deactivated successfully.
Apr 20 20:10:06.360974 systemd[1]: sshd@16-8199-10.0.0.6:22-10.0.0.1:34662.service: Consumed 1.081s CPU time, 4.5M memory peak.
Apr 20 20:10:06.636019 systemd[1]: session-18.scope: Deactivated successfully.
Apr 20 20:10:06.664063 systemd[1]: session-18.scope: Consumed 6.839s CPU time, 15.1M memory peak.
Apr 20 20:10:06.843890 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit.
Apr 20 20:10:07.166817 systemd-logind[1609]: Removed session 18.
Apr 20 20:10:08.721649 kubelet[3176]: E0420 20:10:08.719521 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.312s"
Apr 20 20:10:10.131197 kubelet[3176]: E0420 20:10:10.128678 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.402s"
Apr 20 20:10:12.289946 systemd[1]: Started sshd@17-8200-10.0.0.6:22-10.0.0.1:55824.service - OpenSSH per-connection server daemon (10.0.0.1:55824).
Apr 20 20:10:13.597422 kubelet[3176]: E0420 20:10:13.596563 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.432s"
Apr 20 20:10:15.279018 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 55824 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:10:15.399248 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:10:16.495265 systemd-logind[1609]: New session '19' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:10:16.789868 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 20 20:10:18.883072 containerd[1640]: time="2026-04-20T20:10:18.865968881Z" level=info msg="container event discarded" container=e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080 type=CONTAINER_CREATED_EVENT
Apr 20 20:10:19.651484 kubelet[3176]: E0420 20:10:19.643709 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.891s"
Apr 20 20:10:23.174881 sshd[5136]: Connection closed by 10.0.0.1 port 55824
Apr 20 20:10:23.217882 sshd-session[5109]: pam_unix(sshd:session): session closed for user core
Apr 20 20:10:23.460273 systemd[1]: sshd@17-8200-10.0.0.6:22-10.0.0.1:55824.service: Deactivated successfully.
Apr 20 20:10:23.779582 systemd[1]: session-19.scope: Deactivated successfully.
Apr 20 20:10:23.811889 systemd[1]: session-19.scope: Consumed 3.290s CPU time, 16.3M memory peak.
Apr 20 20:10:24.058547 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit.
Apr 20 20:10:24.252098 systemd-logind[1609]: Removed session 19.
Apr 20 20:10:24.647717 containerd[1640]: time="2026-04-20T20:10:24.563205508Z" level=info msg="container event discarded" container=e10d4d4fe2f8f21e087d2c704be373d906950239e8f5ed8fbf64adaa664d5080 type=CONTAINER_STARTED_EVENT
Apr 20 20:10:25.337996 kubelet[3176]: E0420 20:10:25.333843 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.683s"
Apr 20 20:10:25.727986 kubelet[3176]: E0420 20:10:25.689618 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:10:25.727986 kubelet[3176]: E0420 20:10:25.720683 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:10:27.288017 kubelet[3176]: E0420 20:10:27.287315 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.914s"
Apr 20 20:10:29.249862 systemd[1]: Started sshd@18-4100-10.0.0.6:22-10.0.0.1:40004.service - OpenSSH per-connection server daemon (10.0.0.1:40004).
Apr 20 20:10:29.523737 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 20 20:10:32.165261 systemd-tmpfiles[5177]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 20:10:32.177020 systemd-tmpfiles[5177]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 20:10:32.265530 systemd-tmpfiles[5177]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 20:10:32.967831 systemd-tmpfiles[5177]: ACLs are not supported, ignoring.
Apr 20 20:10:33.028671 systemd-tmpfiles[5177]: ACLs are not supported, ignoring.
Apr 20 20:10:33.443832 systemd-tmpfiles[5177]: Detected autofs mount point /boot during canonicalization of /boot.
Apr 20 20:10:33.462903 systemd-tmpfiles[5177]: Skipping /boot
Apr 20 20:10:33.837901 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 20 20:10:33.856032 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 20 20:10:35.307267 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 40004 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:10:35.368844 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:10:35.573588 kubelet[3176]: E0420 20:10:35.561279 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.565s"
Apr 20 20:10:36.094002 systemd-logind[1609]: New session '20' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:10:36.222000 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 20 20:10:37.467794 kubelet[3176]: E0420 20:10:37.459058 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.889s"
Apr 20 20:10:40.771539 kubelet[3176]: E0420 20:10:40.769960 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.275s"
Apr 20 20:10:51.498456 sshd[5203]: Connection closed by 10.0.0.1 port 40004
Apr 20 20:10:51.589289 sshd-session[5176]: pam_unix(sshd:session): session closed for user core
Apr 20 20:10:52.866433 systemd[1]: sshd@18-4100-10.0.0.6:22-10.0.0.1:40004.service: Deactivated successfully.
Apr 20 20:10:52.881641 systemd[1]: sshd@18-4100-10.0.0.6:22-10.0.0.1:40004.service: Consumed 1.346s CPU time, 4.1M memory peak.
Apr 20 20:10:53.025052 systemd[1]: session-20.scope: Deactivated successfully.
Apr 20 20:10:53.057840 systemd[1]: session-20.scope: Consumed 9.674s CPU time, 14.3M memory peak.
Apr 20 20:10:53.237380 containerd[1640]: time="2026-04-20T20:10:53.235859419Z" level=info msg="container event discarded" container=8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2 type=CONTAINER_CREATED_EVENT
Apr 20 20:10:53.380778 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit.
Apr 20 20:10:53.785490 systemd[1]: Started sshd@19-4-10.0.0.6:22-10.0.0.1:36612.service - OpenSSH per-connection server daemon (10.0.0.1:36612).
Apr 20 20:10:54.405859 containerd[1640]: time="2026-04-20T20:10:53.772817883Z" level=info msg="container event discarded" container=8656484539d2f4bacc5fa8343f0227e2fcd4b8c28e00141f5762478a7670bdc2 type=CONTAINER_STARTED_EVENT
Apr 20 20:10:54.958316 systemd-logind[1609]: Removed session 20.
Apr 20 20:10:55.640872 containerd[1640]: time="2026-04-20T20:10:55.621866719Z" level=info msg="container event discarded" container=6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8 type=CONTAINER_CREATED_EVENT
Apr 20 20:10:56.295691 containerd[1640]: time="2026-04-20T20:10:56.042260804Z" level=info msg="container event discarded" container=5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59 type=CONTAINER_CREATED_EVENT
Apr 20 20:10:56.381086 containerd[1640]: time="2026-04-20T20:10:56.320852173Z" level=info msg="container event discarded" container=5f07f9a79dfb29849098138d719c6648584757ccc6a02d8d255117bcd8cd1c59 type=CONTAINER_STARTED_EVENT
Apr 20 20:10:59.393657 kubelet[3176]: E0420 20:10:59.393313 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.416s"
Apr 20 20:10:59.584034 containerd[1640]: time="2026-04-20T20:10:59.389045251Z" level=info msg="container event discarded" container=db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a type=CONTAINER_CREATED_EVENT
Apr 20 20:11:00.260802 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 36612 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:11:00.467451 containerd[1640]: time="2026-04-20T20:11:00.159094574Z" level=info msg="container event discarded" container=6bd5cdcb2003c3d87edef649d4ff3488f498b5f2f085db51080f6ef0fe16c3a8 type=CONTAINER_STARTED_EVENT
Apr 20 20:11:00.693054 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:11:02.086482 systemd-logind[1609]: New session '21' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:11:02.718230 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 20 20:11:04.585220 containerd[1640]: time="2026-04-20T20:11:04.447200860Z" level=info msg="container event discarded" container=db8d7074640aaaaab39cff221f9850b72998207f59db727a37fa655df82bfa5a type=CONTAINER_STARTED_EVENT
Apr 20 20:11:15.743762 systemd[1]: cri-containerd-92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f.scope: Deactivated successfully.
Apr 20 20:11:15.772124 systemd[1]: cri-containerd-92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f.scope: Consumed 1min 20.050s CPU time, 55.5M memory peak.
Apr 20 20:11:16.822827 containerd[1640]: time="2026-04-20T20:11:16.806219903Z" level=info msg="received container exit event container_id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" pid:3019 exit_status:1 exited_at:{seconds:1776715876 nanos:244507281}"
Apr 20 20:11:20.581760 systemd[1]: cri-containerd-37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238.scope: Deactivated successfully.
Apr 20 20:11:20.660039 systemd[1]: cri-containerd-37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238.scope: Consumed 31.166s CPU time, 25.1M memory peak.
Apr 20 20:11:21.416002 containerd[1640]: time="2026-04-20T20:11:21.415607650Z" level=info msg="received container exit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}"
Apr 20 20:11:21.449593 kubelet[3176]: E0420 20:11:20.576102 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.649s"
Apr 20 20:11:28.692641 containerd[1640]: time="2026-04-20T20:11:27.630119082Z" level=error msg="ttrpc: received message on inactive stream" stream=89
Apr 20 20:11:28.947838 containerd[1640]: time="2026-04-20T20:11:28.685145627Z" level=error msg="failed to handle container TaskExit event container_id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" pid:3019 exit_status:1 exited_at:{seconds:1776715876 nanos:244507281}" error="failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: "
Apr 20 20:11:29.844790 sshd[5280]: Connection closed by 10.0.0.1 port 36612
Apr 20 20:11:29.837083 sshd-session[5255]: pam_unix(sshd:session): session closed for user core
Apr 20 20:11:30.346099 systemd[1]: sshd@19-4-10.0.0.6:22-10.0.0.1:36612.service: Deactivated successfully.
Apr 20 20:11:30.367141 systemd[1]: sshd@19-4-10.0.0.6:22-10.0.0.1:36612.service: Consumed 2.264s CPU time, 4.1M memory peak.
Apr 20 20:11:30.586429 systemd[1]: session-21.scope: Deactivated successfully.
Apr 20 20:11:30.588961 systemd[1]: session-21.scope: Consumed 19.257s CPU time, 25.1M memory peak.
Apr 20 20:11:30.681687 containerd[1640]: time="2026-04-20T20:11:30.680121286Z" level=info msg="TaskExit event container_id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" pid:3019 exit_status:1 exited_at:{seconds:1776715876 nanos:244507281}"
Apr 20 20:11:30.837162 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit.
Apr 20 20:11:31.051832 systemd[1]: Started sshd@20-8201-10.0.0.6:22-10.0.0.1:39216.service - OpenSSH per-connection server daemon (10.0.0.1:39216).
Apr 20 20:11:31.683836 systemd-logind[1609]: Removed session 21.
Apr 20 20:11:32.198815 containerd[1640]: time="2026-04-20T20:11:32.027757947Z" level=error msg="ttrpc: received message on inactive stream" stream=89
Apr 20 20:11:32.375617 containerd[1640]: time="2026-04-20T20:11:32.182911825Z" level=error msg="failed to handle container TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}" error="failed to stop container: context deadline exceeded"
Apr 20 20:11:32.721080 containerd[1640]: time="2026-04-20T20:11:32.445096988Z" level=error msg="ttrpc: received message on inactive stream" stream=85
Apr 20 20:11:36.277024 kubelet[3176]: E0420 20:11:36.268613 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.576s"
Apr 20 20:11:36.992581 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 39216 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:11:37.251983 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:11:38.324101 systemd-logind[1609]: New session '22' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:11:38.403911 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 20 20:11:38.425794 kubelet[3176]: E0420 20:11:38.422109 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.148s"
Apr 20 20:11:40.575011 containerd[1640]: time="2026-04-20T20:11:40.559684619Z" level=error msg="failed to delete task" error="context deadline exceeded" id=92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f
Apr 20 20:11:40.671195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f-rootfs.mount: Deactivated successfully.
Apr 20 20:11:40.981299 containerd[1640]: time="2026-04-20T20:11:40.699165810Z" level=error msg="ttrpc: received message on inactive stream" stream=103
Apr 20 20:11:41.344844 containerd[1640]: time="2026-04-20T20:11:41.337107051Z" level=error msg="Failed to handle backOff event container_id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" pid:3019 exit_status:1 exited_at:{seconds:1776715876 nanos:244507281} for 92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:11:41.396765 containerd[1640]: time="2026-04-20T20:11:41.350483194Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}"
Apr 20 20:11:48.999581 kubelet[3176]: E0420 20:11:48.994119 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.569s"
Apr 20 20:11:51.581868 containerd[1640]: time="2026-04-20T20:11:51.576188897Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:11:51.972072 containerd[1640]: time="2026-04-20T20:11:51.867069504Z" level=error msg="ttrpc: received message on inactive stream" stream=101
Apr 20 20:11:52.149593 containerd[1640]: time="2026-04-20T20:11:52.011439718Z" level=info msg="TaskExit event container_id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" id:\"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" pid:3019 exit_status:1 exited_at:{seconds:1776715876 nanos:244507281}"
Apr 20 20:11:53.621819 kubelet[3176]: E0420 20:11:53.620857 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.327s"
Apr 20 20:11:56.666155 kubelet[3176]: E0420 20:11:56.663965 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.041s"
Apr 20 20:11:59.285793 kubelet[3176]: E0420 20:11:59.270979 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.063s"
Apr 20 20:12:00.357718 containerd[1640]: time="2026-04-20T20:12:00.350656213Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}"
Apr 20 20:12:00.752929 kubelet[3176]: E0420 20:12:00.752531 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:00.916744 kubelet[3176]: E0420 20:12:00.915674 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:00.971085 kubelet[3176]: E0420 20:12:00.921978 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:02.185623 kubelet[3176]: E0420 20:12:02.178027 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:02.485264 kubelet[3176]: E0420 20:12:02.125238 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:03.046847 kubelet[3176]: E0420 20:12:02.938968 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:03.627003 kubelet[3176]: E0420 20:12:03.625051 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:12:10.561719 containerd[1640]: time="2026-04-20T20:12:10.560855519Z" level=error msg="failed to delete task" error="context deadline exceeded" id=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238
Apr 20 20:12:10.616790 containerd[1640]: time="2026-04-20T20:12:10.615400596Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1
exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:12:10.996992 containerd[1640]: time="2026-04-20T20:12:10.951786394Z" level=error msg="ttrpc: received message on inactive stream" stream=115 Apr 20 20:12:11.033671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238-rootfs.mount: Deactivated successfully. Apr 20 20:12:15.559081 kubelet[3176]: E0420 20:12:15.552227 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.628s" Apr 20 20:12:20.222154 containerd[1640]: time="2026-04-20T20:12:20.100606264Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}" Apr 20 20:12:27.532679 kubelet[3176]: I0420 20:12:27.531624 3176 scope.go:122] "RemoveContainer" containerID="92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f" Apr 20 20:12:27.891324 kubelet[3176]: E0420 20:12:27.854293 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:12:28.274707 kubelet[3176]: E0420 20:12:28.259072 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:12:28.327078 kubelet[3176]: E0420 20:12:28.326608 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 
20 20:12:28.402697 kubelet[3176]: E0420 20:12:28.359171 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.558s" Apr 20 20:12:29.739552 kubelet[3176]: E0420 20:12:29.739206 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s" Apr 20 20:12:30.294110 containerd[1640]: time="2026-04-20T20:12:30.098797229Z" level=error msg="ttrpc: received message on inactive stream" stream=131 Apr 20 20:12:30.675812 containerd[1640]: time="2026-04-20T20:12:30.660182000Z" level=error msg="failed to delete task" error="context deadline exceeded" id=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238 Apr 20 20:12:30.736518 containerd[1640]: time="2026-04-20T20:12:30.662182733Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:12:30.736518 containerd[1640]: time="2026-04-20T20:12:30.671393979Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for container name:\"kube-controller-manager\" attempt:1" Apr 20 20:12:31.195897 kubelet[3176]: E0420 20:12:31.178908 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.424s" Apr 20 20:12:32.482753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542455527.mount: Deactivated successfully. 
Apr 20 20:12:32.601551 containerd[1640]: time="2026-04-20T20:12:32.595969493Z" level=info msg="Container b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:12:34.088929 containerd[1640]: time="2026-04-20T20:12:34.080083875Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for name:\"kube-controller-manager\" attempt:1 returns container id \"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\"" Apr 20 20:12:34.176716 containerd[1640]: time="2026-04-20T20:12:34.173867596Z" level=info msg="StartContainer for \"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\"" Apr 20 20:12:34.412804 containerd[1640]: time="2026-04-20T20:12:34.401033493Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with timeout 30 (s)" Apr 20 20:12:34.443912 containerd[1640]: time="2026-04-20T20:12:34.440244250Z" level=info msg="Stop container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with signal terminated" Apr 20 20:12:34.746243 containerd[1640]: time="2026-04-20T20:12:34.688957870Z" level=info msg="connecting to shim b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee" address="unix:///run/containerd/s/e1f0007e2c6f1f748a9dc06ca555a4405d786137015c68dabb60c59e595b314f" protocol=ttrpc version=3 Apr 20 20:12:35.620914 systemd[1]: Started cri-containerd-b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee.scope - libcontainer container b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee. 
Apr 20 20:12:38.447757 containerd[1640]: time="2026-04-20T20:12:38.384025983Z" level=info msg="StartContainer for \"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" returns successfully" Apr 20 20:12:39.155946 containerd[1640]: time="2026-04-20T20:12:39.147100494Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}" Apr 20 20:12:39.284515 kubelet[3176]: E0420 20:12:39.274742 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.618s" Apr 20 20:12:43.291604 kubelet[3176]: E0420 20:12:43.289912 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.634s" Apr 20 20:12:43.454972 sshd[5377]: Connection closed by 10.0.0.1 port 39216 Apr 20 20:12:43.494804 kubelet[3176]: E0420 20:12:43.449738 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:12:43.474841 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Apr 20 20:12:43.705614 systemd[1]: Started sshd@21-8202-10.0.0.6:22-10.0.0.1:47988.service - OpenSSH per-connection server daemon (10.0.0.1:47988). Apr 20 20:12:43.765993 systemd[1]: sshd@20-8201-10.0.0.6:22-10.0.0.1:39216.service: Deactivated successfully. Apr 20 20:12:43.820188 systemd[1]: sshd@20-8201-10.0.0.6:22-10.0.0.1:39216.service: Consumed 2.250s CPU time, 4.3M memory peak. Apr 20 20:12:44.009399 systemd[1]: session-22.scope: Deactivated successfully. Apr 20 20:12:44.025152 systemd[1]: session-22.scope: Consumed 43.697s CPU time, 44.6M memory peak. Apr 20 20:12:44.221362 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit. 
Apr 20 20:12:44.778064 systemd-logind[1609]: Removed session 22. Apr 20 20:12:47.337860 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 47988 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:12:47.393987 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:12:48.708990 systemd-logind[1609]: New session '23' of user 'core' with class 'user' and type 'tty'. Apr 20 20:12:48.838966 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 20 20:12:49.188641 containerd[1640]: time="2026-04-20T20:12:49.187387776Z" level=error msg="ttrpc: received message on inactive stream" stream=151 Apr 20 20:12:49.252656 containerd[1640]: time="2026-04-20T20:12:49.187766378Z" level=error msg="failed to delete task" error="context deadline exceeded" id=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238 Apr 20 20:12:49.252656 containerd[1640]: time="2026-04-20T20:12:49.197175194Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:12:50.024635 kubelet[3176]: E0420 20:12:50.023732 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.733s" Apr 20 20:12:52.229821 kubelet[3176]: E0420 20:12:52.228842 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:12:53.832212 kubelet[3176]: E0420 20:12:53.831092 3176 kubelet.go:2691] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="3.807s" Apr 20 20:12:55.664810 kubelet[3176]: E0420 20:12:55.653856 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:12:56.660001 kubelet[3176]: E0420 20:12:56.658987 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.721s" Apr 20 20:12:57.815360 kubelet[3176]: E0420 20:12:57.807862 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.148s" Apr 20 20:13:00.934014 kubelet[3176]: E0420 20:13:00.858505 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.025s" Apr 20 20:13:05.671694 containerd[1640]: time="2026-04-20T20:13:05.660173017Z" level=info msg="Kill container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\"" Apr 20 20:13:05.828155 kubelet[3176]: E0420 20:13:05.672835 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:13:06.329036 containerd[1640]: time="2026-04-20T20:13:06.311989422Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}" Apr 20 20:13:08.991720 containerd[1640]: time="2026-04-20T20:13:08.773038281Z" level=error msg="get state for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="context deadline exceeded" Apr 20 20:13:09.136072 containerd[1640]: time="2026-04-20T20:13:09.130041532Z" level=warning msg="unknown status" status=0 Apr 20 20:13:10.330454 kubelet[3176]: E0420 
20:13:10.286802 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.97s" Apr 20 20:13:11.557903 containerd[1640]: time="2026-04-20T20:13:11.498072808Z" level=error msg="ttrpc: received message on inactive stream" stream=155 Apr 20 20:13:16.827146 containerd[1640]: time="2026-04-20T20:13:16.432184517Z" level=error msg="failed to delete task" error="context deadline exceeded" id=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238 Apr 20 20:13:17.512115 containerd[1640]: time="2026-04-20T20:13:17.465124126Z" level=error msg="ttrpc: received message on inactive stream" stream=163 Apr 20 20:13:17.939972 containerd[1640]: time="2026-04-20T20:13:17.933722729Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:13:37.332800 sshd[5666]: Connection closed by 10.0.0.1 port 47988 Apr 20 20:13:37.458036 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Apr 20 20:13:38.391304 systemd[1]: sshd@21-8202-10.0.0.6:22-10.0.0.1:47988.service: Deactivated successfully. Apr 20 20:13:38.491223 systemd[1]: sshd@21-8202-10.0.0.6:22-10.0.0.1:47988.service: Consumed 1.239s CPU time, 4.4M memory peak. Apr 20 20:13:38.694588 systemd[1]: session-23.scope: Deactivated successfully. Apr 20 20:13:38.730874 systemd[1]: session-23.scope: Consumed 31.725s CPU time, 29.3M memory peak. Apr 20 20:13:38.983510 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit. 
Apr 20 20:13:39.478773 systemd[1]: Started sshd@22-5-10.0.0.6:22-10.0.0.1:44082.service - OpenSSH per-connection server daemon (10.0.0.1:44082). Apr 20 20:13:40.079949 systemd-logind[1609]: Removed session 23. Apr 20 20:13:40.612265 kubelet[3176]: E0420 20:13:40.610077 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.259s" Apr 20 20:13:45.497177 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 44082 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:13:45.743913 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:13:47.254226 systemd-logind[1609]: New session '24' of user 'core' with class 'user' and type 'tty'. Apr 20 20:13:47.927100 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 20 20:13:50.336165 kubelet[3176]: E0420 20:13:50.331293 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:13:50.766986 containerd[1640]: time="2026-04-20T20:13:50.295251032Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}" Apr 20 20:13:59.469050 sshd[5812]: Connection closed by 10.0.0.1 port 44082 Apr 20 20:13:59.466292 sshd-session[5789]: pam_unix(sshd:session): session closed for user core Apr 20 20:13:59.834070 systemd[1]: sshd@22-5-10.0.0.6:22-10.0.0.1:44082.service: Deactivated successfully. Apr 20 20:13:59.868172 systemd[1]: sshd@22-5-10.0.0.6:22-10.0.0.1:44082.service: Consumed 1.880s CPU time, 4.4M memory peak. Apr 20 20:14:00.223273 systemd[1]: session-24.scope: Deactivated successfully. 
Apr 20 20:14:00.251316 systemd[1]: session-24.scope: Consumed 6.996s CPU time, 16.2M memory peak. Apr 20 20:14:00.508227 systemd-logind[1609]: Session 24 logged out. Waiting for processes to exit. Apr 20 20:14:00.944199 containerd[1640]: time="2026-04-20T20:14:00.562107492Z" level=error msg="ttrpc: received message on inactive stream" stream=173 Apr 20 20:14:00.985792 systemd-logind[1609]: Removed session 24. Apr 20 20:14:01.097169 containerd[1640]: time="2026-04-20T20:14:01.062074552Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: " Apr 20 20:14:05.850694 systemd[1]: Started sshd@23-8203-10.0.0.6:22-10.0.0.1:59852.service - OpenSSH per-connection server daemon (10.0.0.1:59852). Apr 20 20:14:11.434134 sshd[5861]: Accepted publickey for core from 10.0.0.1 port 59852 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:14:11.799967 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:14:13.748763 systemd-logind[1609]: New session '25' of user 'core' with class 'user' and type 'tty'. Apr 20 20:14:14.384332 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 20 20:14:15.829266 kubelet[3176]: E0420 20:14:15.826825 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.387s" Apr 20 20:14:20.261316 kubelet[3176]: E0420 20:14:20.246986 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:14:28.419508 kubelet[3176]: E0420 20:14:28.417045 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:14:29.450594 systemd[1]: cri-containerd-b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee.scope: Deactivated successfully. Apr 20 20:14:29.488304 systemd[1]: cri-containerd-b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee.scope: Consumed 39.250s CPU time, 34M memory peak, 4K read from disk. Apr 20 20:14:29.817128 sshd[5880]: Connection closed by 10.0.0.1 port 59852 Apr 20 20:14:29.763842 sshd-session[5861]: pam_unix(sshd:session): session closed for user core Apr 20 20:14:30.188220 containerd[1640]: time="2026-04-20T20:14:29.820269051Z" level=info msg="received container exit event container_id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" pid:5599 exit_status:1 exited_at:{seconds:1776716069 nanos:457699597}" Apr 20 20:14:30.170989 systemd[1]: sshd@23-8203-10.0.0.6:22-10.0.0.1:59852.service: Deactivated successfully. Apr 20 20:14:30.238163 systemd[1]: sshd@23-8203-10.0.0.6:22-10.0.0.1:59852.service: Consumed 1.627s CPU time, 4.1M memory peak. Apr 20 20:14:30.577283 systemd[1]: session-25.scope: Deactivated successfully. Apr 20 20:14:30.631703 systemd[1]: session-25.scope: Consumed 8.851s CPU time, 18.1M memory peak. Apr 20 20:14:31.112546 systemd-logind[1609]: Session 25 logged out. 
Waiting for processes to exit. Apr 20 20:14:31.149882 kubelet[3176]: E0420 20:14:31.113106 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:14:31.306088 systemd-logind[1609]: Removed session 25. Apr 20 20:14:34.132228 kubelet[3176]: E0420 20:14:34.099267 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:14:36.347225 systemd[1]: Started sshd@24-8204-10.0.0.6:22-10.0.0.1:53446.service - OpenSSH per-connection server daemon (10.0.0.1:53446). Apr 20 20:14:40.219001 containerd[1640]: time="2026-04-20T20:14:40.152126011Z" level=error msg="failed to delete task" error="context deadline exceeded" id=b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee Apr 20 20:14:40.540957 containerd[1640]: time="2026-04-20T20:14:40.527728754Z" level=error msg="failed to handle container TaskExit event container_id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" pid:5599 exit_status:1 exited_at:{seconds:1776716069 nanos:457699597}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:14:40.859159 containerd[1640]: time="2026-04-20T20:14:40.830688979Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 20 20:14:41.055637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee-rootfs.mount: Deactivated successfully. 
Apr 20 20:14:41.770895 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 53446 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:14:42.227901 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:14:42.357883 containerd[1640]: time="2026-04-20T20:14:42.343969895Z" level=info msg="TaskExit event container_id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" pid:5599 exit_status:1 exited_at:{seconds:1776716069 nanos:457699597}" Apr 20 20:14:44.165291 systemd-logind[1609]: New session '26' of user 'core' with class 'user' and type 'tty'. Apr 20 20:14:44.540019 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 20 20:14:48.278920 kubelet[3176]: E0420 20:14:48.278215 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.511s" Apr 20 20:14:52.513163 containerd[1640]: time="2026-04-20T20:14:52.481241847Z" level=error msg="Failed to handle backOff event container_id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" pid:5599 exit_status:1 exited_at:{seconds:1776716069 nanos:457699597} for b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee" error="failed to handle container TaskExit event: failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: " Apr 20 20:14:55.219124 containerd[1640]: time="2026-04-20T20:14:55.217989022Z" level=info msg="TaskExit event container_id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" id:\"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" pid:5599 exit_status:1 exited_at:{seconds:1776716069 nanos:457699597}" Apr 20 20:14:55.802990 sshd[5972]: Connection closed by 10.0.0.1 port 53446 Apr 20 
20:14:55.809805 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Apr 20 20:14:56.093177 systemd[1]: sshd@24-8204-10.0.0.6:22-10.0.0.1:53446.service: Deactivated successfully. Apr 20 20:14:56.131245 systemd[1]: sshd@24-8204-10.0.0.6:22-10.0.0.1:53446.service: Consumed 2.027s CPU time, 4.4M memory peak. Apr 20 20:14:56.353038 systemd[1]: session-26.scope: Deactivated successfully. Apr 20 20:14:56.378076 systemd[1]: session-26.scope: Consumed 7.541s CPU time, 16.5M memory peak. Apr 20 20:14:56.590048 systemd-logind[1609]: Session 26 logged out. Waiting for processes to exit. Apr 20 20:14:56.732148 kubelet[3176]: E0420 20:14:56.685283 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.079s" Apr 20 20:14:56.955147 systemd-logind[1609]: Removed session 26. Apr 20 20:14:57.981844 kubelet[3176]: E0420 20:14:57.852996 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:14:58.553794 kubelet[3176]: E0420 20:14:58.551682 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:14:58.674107 kubelet[3176]: E0420 20:14:58.671977 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:15:00.850414 kubelet[3176]: E0420 20:15:00.848295 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.895s" Apr 20 20:15:01.898209 systemd[1]: Started sshd@25-12293-10.0.0.6:22-10.0.0.1:54146.service - OpenSSH per-connection server daemon (10.0.0.1:54146). 
Apr 20 20:15:04.248906 kubelet[3176]: E0420 20:15:04.247612 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.161s" Apr 20 20:15:04.380259 containerd[1640]: time="2026-04-20T20:15:04.372955874Z" level=error msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" to be killed: wait container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\": context canceled" Apr 20 20:15:04.429973 kubelet[3176]: E0420 20:15:04.427230 3176 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" Apr 20 20:15:04.447992 kubelet[3176]: E0420 20:15:04.429939 3176 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" containerName="kube-scheduler" containerID="containerd://37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" gracePeriod=30 Apr 20 20:15:04.455326 sshd[6047]: Accepted publickey for core from 10.0.0.1 port 54146 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M Apr 20 20:15:04.558844 kubelet[3176]: E0420 20:15:04.430215 3176 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"} pod="kube-system/kube-scheduler-localhost" Apr 20 20:15:04.630873 kubelet[3176]: E0420 20:15:04.475202 3176 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 20:15:04.637391 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:15:05.306188 kubelet[3176]: E0420 20:15:05.303103 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Apr 20 20:15:05.653436 systemd-logind[1609]: New session '27' of user 'core' with class 'user' and type 'tty'. Apr 20 20:15:06.102175 containerd[1640]: time="2026-04-20T20:15:06.096174439Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}" Apr 20 20:15:06.125685 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 20 20:15:06.260021 kubelet[3176]: E0420 20:15:06.258147 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:15:06.273232 kubelet[3176]: E0420 20:15:06.266532 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:15:06.311856 kubelet[3176]: I0420 20:15:06.307643 3176 scope.go:122] "RemoveContainer" containerID="92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f" Apr 20 20:15:06.320154 kubelet[3176]: I0420 20:15:06.318886 3176 scope.go:122] "RemoveContainer" containerID="b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee" Apr 20 20:15:06.320154 kubelet[3176]: E0420 20:15:06.319053 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:15:06.975765 containerd[1640]: time="2026-04-20T20:15:06.959290678Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for container name:\"kube-controller-manager\" attempt:2" Apr 20 20:15:07.590151 containerd[1640]: time="2026-04-20T20:15:07.579277507Z" level=info msg="RemoveContainer for \"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\"" Apr 20 20:15:10.970161 kubelet[3176]: E0420 20:15:10.967655 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:15:14.284111 containerd[1640]: time="2026-04-20T20:15:14.165269508Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with timeout 30 (s)" Apr 20 20:15:14.950156 containerd[1640]: 
time="2026-04-20T20:15:14.941908822Z" level=info msg="Skipping the sending of signal terminated to container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" because a prior stop with timeout>0 request already sent the signal" Apr 20 20:15:17.399280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98459673.mount: Deactivated successfully. Apr 20 20:15:17.763278 containerd[1640]: time="2026-04-20T20:15:17.749306537Z" level=error msg="failed to delete task" error="context deadline exceeded" id=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238 Apr 20 20:15:18.225402 containerd[1640]: time="2026-04-20T20:15:18.224642886Z" level=info msg="RemoveContainer for \"92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f\" returns successfully" Apr 20 20:15:18.225402 containerd[1640]: time="2026-04-20T20:15:18.224996978Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:15:18.368231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017207087.mount: Deactivated successfully. Apr 20 20:15:18.584780 containerd[1640]: time="2026-04-20T20:15:18.549583867Z" level=info msg="Container 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:15:18.758277 sshd[6058]: Connection closed by 10.0.0.1 port 54146 Apr 20 20:15:18.757844 sshd-session[6047]: pam_unix(sshd:session): session closed for user core Apr 20 20:15:19.187764 systemd[1]: sshd@25-12293-10.0.0.6:22-10.0.0.1:54146.service: Deactivated successfully. 
Apr 20 20:15:19.406313 systemd[1]: session-27.scope: Deactivated successfully.
Apr 20 20:15:19.423682 systemd[1]: session-27.scope: Consumed 8.372s CPU time, 15.9M memory peak.
Apr 20 20:15:19.680156 systemd-logind[1609]: Session 27 logged out. Waiting for processes to exit.
Apr 20 20:15:20.050781 systemd-logind[1609]: Removed session 27.
Apr 20 20:15:21.053773 kubelet[3176]: E0420 20:15:21.051722 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.218s"
Apr 20 20:15:22.667895 containerd[1640]: time="2026-04-20T20:15:22.663962218Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for name:\"kube-controller-manager\" attempt:2 returns container id \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\""
Apr 20 20:15:22.945271 kubelet[3176]: E0420 20:15:22.932945 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.88s"
Apr 20 20:15:23.471874 containerd[1640]: time="2026-04-20T20:15:23.439952306Z" level=info msg="StartContainer for \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\""
Apr 20 20:15:25.398703 systemd[1]: Started sshd@26-12294-10.0.0.6:22-10.0.0.1:42754.service - OpenSSH per-connection server daemon (10.0.0.1:42754).
Apr 20 20:15:25.967100 containerd[1640]: time="2026-04-20T20:15:25.965785293Z" level=info msg="connecting to shim 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543" address="unix:///run/containerd/s/e1f0007e2c6f1f748a9dc06ca555a4405d786137015c68dabb60c59e595b314f" protocol=ttrpc version=3
Apr 20 20:15:26.180315 kubelet[3176]: E0420 20:15:26.179560 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.228s"
Apr 20 20:15:27.913768 sshd[6133]: Accepted publickey for core from 10.0.0.1 port 42754 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:15:28.088238 sshd-session[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:15:28.346771 systemd[1]: Started cri-containerd-49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543.scope - libcontainer container 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543.
Apr 20 20:15:28.407565 systemd-logind[1609]: New session '28' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:15:28.422926 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 20 20:15:31.181864 containerd[1640]: time="2026-04-20T20:15:31.034882122Z" level=error msg="get state for 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543" error="context deadline exceeded"
Apr 20 20:15:31.350027 containerd[1640]: time="2026-04-20T20:15:31.341258654Z" level=warning msg="unknown status" status=0
Apr 20 20:15:33.598006 kubelet[3176]: E0420 20:15:33.378133 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.729s"
Apr 20 20:15:34.822508 kubelet[3176]: I0420 20:15:34.821100 3176 scope.go:122] "RemoveContainer" containerID="b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee"
Apr 20 20:15:34.970543 kubelet[3176]: E0420 20:15:34.968606 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.277s"
Apr 20 20:15:35.145926 containerd[1640]: time="2026-04-20T20:15:35.085172345Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 20:15:35.192184 sshd[6174]: Connection closed by 10.0.0.1 port 42754
Apr 20 20:15:35.231896 sshd-session[6133]: pam_unix(sshd:session): session closed for user core
Apr 20 20:15:35.438590 systemd[1]: sshd@26-12294-10.0.0.6:22-10.0.0.1:42754.service: Deactivated successfully.
Apr 20 20:15:35.446583 systemd[1]: sshd@26-12294-10.0.0.6:22-10.0.0.1:42754.service: Consumed 1.181s CPU time, 4.3M memory peak.
Apr 20 20:15:35.516038 systemd[1]: session-28.scope: Deactivated successfully.
Apr 20 20:15:35.518800 systemd[1]: session-28.scope: Consumed 5.090s CPU time, 16.2M memory peak.
Apr 20 20:15:35.558181 systemd-logind[1609]: Session 28 logged out. Waiting for processes to exit.
Apr 20 20:15:35.751007 systemd-logind[1609]: Removed session 28.
Apr 20 20:15:35.860243 containerd[1640]: time="2026-04-20T20:15:35.858208566Z" level=info msg="RemoveContainer for \"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\""
Apr 20 20:15:36.354288 containerd[1640]: time="2026-04-20T20:15:36.353048124Z" level=info msg="RemoveContainer for \"b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee\" returns successfully"
Apr 20 20:15:37.144144 containerd[1640]: time="2026-04-20T20:15:37.143873594Z" level=info msg="StartContainer for \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" returns successfully"
Apr 20 20:15:37.689610 kubelet[3176]: E0420 20:15:37.689034 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:38.840987 kubelet[3176]: E0420 20:15:38.838991 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:40.218791 kubelet[3176]: E0420 20:15:40.218705 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:40.664869 systemd[1]: Started sshd@27-6-10.0.0.6:22-10.0.0.1:44558.service - OpenSSH per-connection server daemon (10.0.0.1:44558).
Apr 20 20:15:40.987285 kubelet[3176]: E0420 20:15:40.960777 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:43.353391 sshd[6230]: Accepted publickey for core from 10.0.0.1 port 44558 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:15:43.480135 sshd-session[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:15:44.828951 systemd-logind[1609]: New session '29' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:15:44.987918 containerd[1640]: time="2026-04-20T20:15:44.987278605Z" level=info msg="Kill container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:15:45.082986 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 20 20:15:45.755998 kubelet[3176]: E0420 20:15:45.741308 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:48.720474 kubelet[3176]: E0420 20:15:48.719765 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s"
Apr 20 20:15:51.099963 kubelet[3176]: E0420 20:15:51.096673 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.439s"
Apr 20 20:15:51.294017 kubelet[3176]: E0420 20:15:51.290768 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:51.313059 kubelet[3176]: E0420 20:15:51.308524 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:15:51.477892 sshd[6254]: Connection closed by 10.0.0.1 port 44558
Apr 20 20:15:51.461176 sshd-session[6230]: pam_unix(sshd:session): session closed for user core
Apr 20 20:15:51.821497 systemd[1]: sshd@27-6-10.0.0.6:22-10.0.0.1:44558.service: Deactivated successfully.
Apr 20 20:15:51.841185 systemd[1]: sshd@27-6-10.0.0.6:22-10.0.0.1:44558.service: Consumed 1.383s CPU time, 4.1M memory peak.
Apr 20 20:15:52.026969 systemd[1]: session-29.scope: Deactivated successfully.
Apr 20 20:15:52.059392 systemd[1]: session-29.scope: Consumed 4.354s CPU time, 16.2M memory peak.
Apr 20 20:15:52.210057 systemd-logind[1609]: Session 29 logged out. Waiting for processes to exit.
Apr 20 20:15:52.600188 systemd-logind[1609]: Removed session 29.
Apr 20 20:15:52.838666 kubelet[3176]: E0420 20:15:52.838044 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.061s"
Apr 20 20:15:57.734199 systemd[1]: Started sshd@28-7-10.0.0.6:22-10.0.0.1:47488.service - OpenSSH per-connection server daemon (10.0.0.1:47488).
Apr 20 20:15:59.238431 kubelet[3176]: E0420 20:15:59.237993 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.513s"
Apr 20 20:16:00.742584 sshd[6314]: Accepted publickey for core from 10.0.0.1 port 47488 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:16:00.838319 sshd-session[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:16:01.653143 systemd-logind[1609]: New session '30' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:16:01.831280 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 20 20:16:04.420211 kubelet[3176]: E0420 20:16:04.414720 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.564s"
Apr 20 20:16:08.959648 sshd[6326]: Connection closed by 10.0.0.1 port 47488
Apr 20 20:16:08.978963 sshd-session[6314]: pam_unix(sshd:session): session closed for user core
Apr 20 20:16:09.336640 systemd[1]: sshd@28-7-10.0.0.6:22-10.0.0.1:47488.service: Deactivated successfully.
Apr 20 20:16:09.344246 systemd[1]: sshd@28-7-10.0.0.6:22-10.0.0.1:47488.service: Consumed 1.252s CPU time, 4.4M memory peak.
Apr 20 20:16:09.656176 systemd[1]: session-30.scope: Deactivated successfully.
Apr 20 20:16:09.705237 systemd[1]: session-30.scope: Consumed 4.024s CPU time, 14.4M memory peak.
Apr 20 20:16:10.049477 systemd-logind[1609]: Session 30 logged out. Waiting for processes to exit.
Apr 20 20:16:10.553850 systemd-logind[1609]: Removed session 30.
Apr 20 20:16:15.662030 systemd[1]: Started sshd@29-8205-10.0.0.6:22-10.0.0.1:46584.service - OpenSSH per-connection server daemon (10.0.0.1:46584).
Apr 20 20:16:19.165995 kubelet[3176]: E0420 20:16:18.829104 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.406s"
Apr 20 20:16:20.365069 kubelet[3176]: E0420 20:16:20.361812 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:16:21.632159 sshd[6367]: Accepted publickey for core from 10.0.0.1 port 46584 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:16:21.693413 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:16:23.916743 systemd-logind[1609]: New session '31' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:16:23.942103 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 20 20:16:28.812850 kubelet[3176]: E0420 20:16:28.803259 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.812s"
Apr 20 20:16:32.032669 sshd[6395]: Connection closed by 10.0.0.1 port 46584
Apr 20 20:16:32.032124 sshd-session[6367]: pam_unix(sshd:session): session closed for user core
Apr 20 20:16:32.350585 systemd[1]: sshd@29-8205-10.0.0.6:22-10.0.0.1:46584.service: Deactivated successfully.
Apr 20 20:16:32.433477 systemd[1]: sshd@29-8205-10.0.0.6:22-10.0.0.1:46584.service: Consumed 1.987s CPU time, 4.4M memory peak.
Apr 20 20:16:32.847448 systemd[1]: session-31.scope: Deactivated successfully.
Apr 20 20:16:32.854973 systemd[1]: session-31.scope: Consumed 4.793s CPU time, 15.8M memory peak.
Apr 20 20:16:33.023046 systemd-logind[1609]: Session 31 logged out. Waiting for processes to exit.
Apr 20 20:16:33.090426 systemd-logind[1609]: Removed session 31.
Apr 20 20:16:34.537856 kubelet[3176]: E0420 20:16:34.531661 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.266s"
Apr 20 20:16:35.670258 kubelet[3176]: E0420 20:16:35.656010 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:16:38.428553 systemd[1]: Started sshd@30-8206-10.0.0.6:22-10.0.0.1:35886.service - OpenSSH per-connection server daemon (10.0.0.1:35886).
Apr 20 20:16:45.170975 sshd[6434]: Accepted publickey for core from 10.0.0.1 port 35886 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:16:45.618736 sshd-session[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:16:47.449243 systemd-logind[1609]: New session '32' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:16:47.567645 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 20 20:16:49.952814 systemd[1]: cri-containerd-49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543.scope: Deactivated successfully.
Apr 20 20:16:49.970164 systemd[1]: cri-containerd-49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543.scope: Consumed 29.371s CPU time, 41.7M memory peak.
Apr 20 20:16:51.026273 containerd[1640]: time="2026-04-20T20:16:51.015207225Z" level=info msg="received container exit event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458}"
Apr 20 20:16:59.060455 kubelet[3176]: E0420 20:16:59.058044 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.144s"
Apr 20 20:16:59.668711 kubelet[3176]: E0420 20:16:59.665908 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:00.759043 containerd[1640]: time="2026-04-20T20:17:00.597020439Z" level=info msg="container event discarded" container=92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f type=CONTAINER_STOPPED_EVENT
Apr 20 20:17:01.645918 containerd[1640]: time="2026-04-20T20:17:01.398071590Z" level=error msg="ttrpc: received message on inactive stream" stream=33
Apr 20 20:17:02.149977 containerd[1640]: time="2026-04-20T20:17:01.744087287Z" level=error msg="failed to handle container TaskExit event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458}" error="failed to stop container: context deadline exceeded"
Apr 20 20:17:02.345124 containerd[1640]: time="2026-04-20T20:17:01.848274904Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 20 20:17:02.862220 sshd[6462]: Connection closed by 10.0.0.1 port 35886
Apr 20 20:17:02.887042 sshd-session[6434]: pam_unix(sshd:session): session closed for user core
Apr 20 20:17:03.176330 systemd[1]: sshd@30-8206-10.0.0.6:22-10.0.0.1:35886.service: Deactivated successfully.
Apr 20 20:17:03.240246 systemd[1]: sshd@30-8206-10.0.0.6:22-10.0.0.1:35886.service: Consumed 2.062s CPU time, 4.1M memory peak.
Apr 20 20:17:03.433268 systemd[1]: session-32.scope: Deactivated successfully.
Apr 20 20:17:03.435234 systemd[1]: session-32.scope: Consumed 10.096s CPU time, 16.4M memory peak.
Apr 20 20:17:03.757043 systemd-logind[1609]: Session 32 logged out. Waiting for processes to exit.
Apr 20 20:17:04.139227 systemd-logind[1609]: Removed session 32.
Apr 20 20:17:04.221119 containerd[1640]: time="2026-04-20T20:17:04.216569601Z" level=info msg="TaskExit event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458}"
Apr 20 20:17:06.190739 kubelet[3176]: E0420 20:17:06.187028 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.015s"
Apr 20 20:17:07.398164 kubelet[3176]: E0420 20:17:07.384325 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:07.763008 kubelet[3176]: E0420 20:17:07.762005 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:08.072239 kubelet[3176]: E0420 20:17:07.984247 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:09.067920 systemd[1]: Started sshd@31-8-10.0.0.6:22-10.0.0.1:57292.service - OpenSSH per-connection server daemon (10.0.0.1:57292).
Apr 20 20:17:13.854044 kubelet[3176]: E0420 20:17:13.847297 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:14.091117 containerd[1640]: time="2026-04-20T20:17:14.071777500Z" level=error msg="failed to delete task" error="context deadline exceeded" id=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543
Apr 20 20:17:14.260128 sshd[6535]: Accepted publickey for core from 10.0.0.1 port 57292 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:17:14.465820 containerd[1640]: time="2026-04-20T20:17:14.457270729Z" level=error msg="Failed to handle backOff event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458} for 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:17:14.598301 sshd-session[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:17:14.715870 containerd[1640]: time="2026-04-20T20:17:14.661301629Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 20 20:17:14.843927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543-rootfs.mount: Deactivated successfully.
Apr 20 20:17:16.479661 systemd-logind[1609]: New session '33' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:17:17.088208 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 20 20:17:17.445955 containerd[1640]: time="2026-04-20T20:17:17.437250233Z" level=info msg="TaskExit event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458}"
Apr 20 20:17:17.935676 kubelet[3176]: E0420 20:17:17.934968 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.184s"
Apr 20 20:17:20.484933 kubelet[3176]: E0420 20:17:20.483728 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.543s"
Apr 20 20:17:26.289995 kubelet[3176]: E0420 20:17:26.274416 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:27.284003 containerd[1640]: time="2026-04-20T20:17:27.256165061Z" level=error msg="failed to delete task" error="context deadline exceeded" id=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543
Apr 20 20:17:27.573296 containerd[1640]: time="2026-04-20T20:17:27.459950974Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 20 20:17:27.731574 containerd[1640]: time="2026-04-20T20:17:27.723269483Z" level=error msg="Failed to handle backOff event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458} for 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:17:27.902209 containerd[1640]: time="2026-04-20T20:17:27.826102143Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}"
Apr 20 20:17:29.336171 kubelet[3176]: E0420 20:17:29.277064 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.716s"
Apr 20 20:17:33.618267 sshd[6570]: Connection closed by 10.0.0.1 port 57292
Apr 20 20:17:33.644607 sshd-session[6535]: pam_unix(sshd:session): session closed for user core
Apr 20 20:17:34.062252 containerd[1640]: time="2026-04-20T20:17:33.850238872Z" level=info msg="container event discarded" container=b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee type=CONTAINER_CREATED_EVENT
Apr 20 20:17:34.163140 systemd[1]: sshd@31-8-10.0.0.6:22-10.0.0.1:57292.service: Deactivated successfully.
Apr 20 20:17:34.254121 systemd[1]: sshd@31-8-10.0.0.6:22-10.0.0.1:57292.service: Consumed 1.828s CPU time, 4.1M memory peak.
Apr 20 20:17:34.631531 systemd[1]: session-33.scope: Deactivated successfully.
Apr 20 20:17:34.649776 systemd[1]: session-33.scope: Consumed 10.709s CPU time, 15.7M memory peak.
Apr 20 20:17:35.048578 systemd-logind[1609]: Session 33 logged out. Waiting for processes to exit.
Apr 20 20:17:35.371312 systemd-logind[1609]: Removed session 33.
Apr 20 20:17:35.939910 kubelet[3176]: E0420 20:17:35.936530 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.591s"
Apr 20 20:17:37.535176 containerd[1640]: time="2026-04-20T20:17:37.491604034Z" level=info msg="container event discarded" container=b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee type=CONTAINER_STARTED_EVENT
Apr 20 20:17:37.992131 containerd[1640]: time="2026-04-20T20:17:37.839177027Z" level=error msg="failed to delete task" error="context deadline exceeded" id=37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238
Apr 20 20:17:38.226944 containerd[1640]: time="2026-04-20T20:17:38.126704272Z" level=error msg="ttrpc: received message on inactive stream" stream=219
Apr 20 20:17:38.240141 kubelet[3176]: E0420 20:17:38.227828 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.19s"
Apr 20 20:17:38.298261 kubelet[3176]: E0420 20:17:38.240382 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:17:38.319056 containerd[1640]: time="2026-04-20T20:17:38.272266684Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:17:38.319056 containerd[1640]: time="2026-04-20T20:17:38.285906221Z" level=info msg="TaskExit event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458}"
Apr 20 20:17:40.350698 systemd[1]: Started sshd@32-12295-10.0.0.6:22-10.0.0.1:35698.service - OpenSSH per-connection server daemon (10.0.0.1:35698).
Apr 20 20:17:41.975572 kubelet[3176]: E0420 20:17:41.969856 3176 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"
Apr 20 20:17:42.325384 kubelet[3176]: E0420 20:17:41.987482 3176 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" containerName="kube-scheduler" containerID="containerd://37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" gracePeriod=30
Apr 20 20:17:42.441285 kubelet[3176]: E0420 20:17:42.396832 3176 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"} pod="kube-system/kube-scheduler-localhost"
Apr 20 20:17:42.522283 kubelet[3176]: E0420 20:17:42.442926 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 20:17:42.553885 containerd[1640]: time="2026-04-20T20:17:42.454311223Z" level=error msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" to be killed: wait container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\": context canceled"
Apr 20 20:17:44.319197 kubelet[3176]: E0420 20:17:44.318732 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.089s"
Apr 20 20:17:44.756853 sshd[6652]: Accepted publickey for core from 10.0.0.1 port 35698 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:17:44.845171 sshd-session[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:17:44.872586 containerd[1640]: time="2026-04-20T20:17:44.840200081Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with timeout 30 (s)"
Apr 20 20:17:45.663882 containerd[1640]: time="2026-04-20T20:17:45.585302682Z" level=info msg="Skipping the sending of signal terminated to container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:17:46.020563 systemd-logind[1609]: New session '34' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:17:46.299842 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 20 20:17:48.447617 containerd[1640]: time="2026-04-20T20:17:48.439256557Z" level=error msg="failed to delete task" error="context deadline exceeded" id=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543
Apr 20 20:17:48.645539 containerd[1640]: time="2026-04-20T20:17:48.637152109Z" level=error msg="ttrpc: received message on inactive stream" stream=81
Apr 20 20:17:48.754634 containerd[1640]: time="2026-04-20T20:17:48.711028923Z" level=error msg="Failed to handle backOff event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458} for 49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:17:52.335171 kubelet[3176]: E0420 20:17:52.186178 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.757s"
Apr 20 20:17:57.255998 containerd[1640]: time="2026-04-20T20:17:57.246285378Z" level=info msg="TaskExit event container_id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" id:\"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" pid:6168 exit_status:1 exited_at:{seconds:1776716210 nanos:127014458}"
Apr 20 20:17:58.139323 kubelet[3176]: E0420 20:17:57.937044 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.187s"
Apr 20 20:18:00.652117 sshd[6692]: Connection closed by 10.0.0.1 port 35698
Apr 20 20:18:00.641323 sshd-session[6652]: pam_unix(sshd:session): session closed for user core
Apr 20 20:18:01.284285 systemd[1]: sshd@32-12295-10.0.0.6:22-10.0.0.1:35698.service: Deactivated successfully.
Apr 20 20:18:01.347196 systemd[1]: sshd@32-12295-10.0.0.6:22-10.0.0.1:35698.service: Consumed 1.764s CPU time, 4.1M memory peak.
Apr 20 20:18:01.819270 systemd[1]: session-34.scope: Deactivated successfully.
Apr 20 20:18:01.839601 systemd[1]: session-34.scope: Consumed 9.796s CPU time, 14.3M memory peak.
Apr 20 20:18:02.166855 systemd-logind[1609]: Session 34 logged out. Waiting for processes to exit.
Apr 20 20:18:02.620428 systemd-logind[1609]: Removed session 34.
Apr 20 20:18:04.131760 kubelet[3176]: E0420 20:18:04.130309 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.992s"
Apr 20 20:18:04.338487 kubelet[3176]: E0420 20:18:04.338026 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:04.458542 containerd[1640]: time="2026-04-20T20:18:04.456940640Z" level=info msg="StopContainer for \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" with timeout 30 (s)"
Apr 20 20:18:05.525578 containerd[1640]: time="2026-04-20T20:18:05.521750029Z" level=info msg="Stop container \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" with signal terminated"
Apr 20 20:18:05.759734 containerd[1640]: time="2026-04-20T20:18:05.756156583Z" level=info msg="StopContainer for \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" returns successfully"
Apr 20 20:18:05.832520 kubelet[3176]: E0420 20:18:05.801756 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:06.352215 systemd[1]: Started sshd@33-8207-10.0.0.6:22-10.0.0.1:46958.service - OpenSSH per-connection server daemon (10.0.0.1:46958).
Apr 20 20:18:06.373869 containerd[1640]: time="2026-04-20T20:18:06.373202204Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for container name:\"kube-controller-manager\" attempt:3"
Apr 20 20:18:07.805432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646637178.mount: Deactivated successfully.
Apr 20 20:18:08.242103 containerd[1640]: time="2026-04-20T20:18:08.225230636Z" level=info msg="Container f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:18:08.791509 sshd[6763]: Accepted publickey for core from 10.0.0.1 port 46958 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:18:08.843426 containerd[1640]: time="2026-04-20T20:18:08.842517560Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for name:\"kube-controller-manager\" attempt:3 returns container id \"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\""
Apr 20 20:18:08.842799 sshd-session[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:18:08.987567 containerd[1640]: time="2026-04-20T20:18:08.986173581Z" level=info msg="StartContainer for \"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\""
Apr 20 20:18:09.542507 containerd[1640]: time="2026-04-20T20:18:09.542174974Z" level=info msg="connecting to shim f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51" address="unix:///run/containerd/s/e1f0007e2c6f1f748a9dc06ca555a4405d786137015c68dabb60c59e595b314f" protocol=ttrpc version=3
Apr 20 20:18:09.624698 systemd-logind[1609]: New session '35' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:18:09.732593 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 20 20:18:10.550191 systemd[1]: Started cri-containerd-f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51.scope - libcontainer container f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51.
Apr 20 20:18:11.858538 sshd[6793]: Connection closed by 10.0.0.1 port 46958
Apr 20 20:18:11.860406 sshd-session[6763]: pam_unix(sshd:session): session closed for user core
Apr 20 20:18:11.913312 systemd[1]: sshd@33-8207-10.0.0.6:22-10.0.0.1:46958.service: Deactivated successfully.
Apr 20 20:18:11.920665 systemd[1]: sshd@33-8207-10.0.0.6:22-10.0.0.1:46958.service: Consumed 1.136s CPU time, 4.1M memory peak.
Apr 20 20:18:11.928051 systemd[1]: session-35.scope: Deactivated successfully.
Apr 20 20:18:11.930214 systemd[1]: session-35.scope: Consumed 1.535s CPU time, 16M memory peak.
Apr 20 20:18:11.951606 systemd-logind[1609]: Session 35 logged out. Waiting for processes to exit.
Apr 20 20:18:11.958278 containerd[1640]: time="2026-04-20T20:18:11.958187253Z" level=info msg="StartContainer for \"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" returns successfully"
Apr 20 20:18:12.056183 systemd-logind[1609]: Removed session 35.
Apr 20 20:18:13.960093 kubelet[3176]: E0420 20:18:13.948306 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:15.647289 kubelet[3176]: E0420 20:18:15.642332 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:15.783829 containerd[1640]: time="2026-04-20T20:18:15.687639511Z" level=info msg="Kill container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:18:17.458519 systemd[1]: Started sshd@34-4101-10.0.0.6:22-10.0.0.1:49758.service - OpenSSH per-connection server daemon (10.0.0.1:49758).
Apr 20 20:18:18.253606 sshd[6864]: Accepted publickey for core from 10.0.0.1 port 49758 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:18:18.318450 sshd-session[6864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:18:18.822461 systemd-logind[1609]: New session '36' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:18:18.830467 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 20 20:18:20.275597 kubelet[3176]: E0420 20:18:20.270182 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:22.718476 kubelet[3176]: E0420 20:18:22.717562 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.065s"
Apr 20 20:18:22.902908 kubelet[3176]: E0420 20:18:22.889066 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:22.951785 kubelet[3176]: E0420 20:18:22.949828 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:23.308792 sshd[6868]: Connection closed by 10.0.0.1 port 49758
Apr 20 20:18:23.319744 sshd-session[6864]: pam_unix(sshd:session): session closed for user core
Apr 20 20:18:23.468016 systemd[1]: sshd@34-4101-10.0.0.6:22-10.0.0.1:49758.service: Deactivated successfully.
Apr 20 20:18:23.580017 systemd[1]: session-36.scope: Deactivated successfully.
Apr 20 20:18:23.602560 systemd[1]: session-36.scope: Consumed 3.886s CPU time, 15.8M memory peak.
Apr 20 20:18:23.709256 systemd-logind[1609]: Session 36 logged out. Waiting for processes to exit.
Apr 20 20:18:23.724791 systemd-logind[1609]: Removed session 36.
Apr 20 20:18:28.861288 systemd[1]: Started sshd@35-8208-10.0.0.6:22-10.0.0.1:38446.service - OpenSSH per-connection server daemon (10.0.0.1:38446).
Apr 20 20:18:30.163693 kubelet[3176]: E0420 20:18:30.163121 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:30.382528 sshd[6914]: Accepted publickey for core from 10.0.0.1 port 38446 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:18:30.420097 sshd-session[6914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:18:30.772939 kubelet[3176]: E0420 20:18:30.771496 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:18:30.933620 systemd-logind[1609]: New session '37' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:18:31.443653 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 20 20:18:35.076229 sshd[6933]: Connection closed by 10.0.0.1 port 38446
Apr 20 20:18:35.156203 sshd-session[6914]: pam_unix(sshd:session): session closed for user core
Apr 20 20:18:35.270184 systemd[1]: sshd@35-8208-10.0.0.6:22-10.0.0.1:38446.service: Deactivated successfully.
Apr 20 20:18:35.598453 systemd[1]: session-37.scope: Deactivated successfully.
Apr 20 20:18:35.627469 systemd[1]: session-37.scope: Consumed 2.866s CPU time, 14.8M memory peak.
Apr 20 20:18:36.085852 systemd-logind[1609]: Session 37 logged out. Waiting for processes to exit.
Apr 20 20:18:36.319076 systemd-logind[1609]: Removed session 37.
Apr 20 20:18:37.791267 kubelet[3176]: E0420 20:18:37.787648 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.911s"
Apr 20 20:18:40.043080 kubelet[3176]: E0420 20:18:40.039705 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.109s"
Apr 20 20:18:41.350581 systemd[1]: Started sshd@36-8209-10.0.0.6:22-10.0.0.1:54962.service - OpenSSH per-connection server daemon (10.0.0.1:54962).
Apr 20 20:18:45.444840 sshd[6967]: Accepted publickey for core from 10.0.0.1 port 54962 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:18:45.676897 sshd-session[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:18:46.519035 systemd-logind[1609]: New session '38' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:18:46.648960 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 20 20:18:52.536049 kubelet[3176]: E0420 20:18:52.100788 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.856s"
Apr 20 20:18:58.031555 sshd[6988]: Connection closed by 10.0.0.1 port 54962
Apr 20 20:18:58.052729 sshd-session[6967]: pam_unix(sshd:session): session closed for user core
Apr 20 20:18:58.714618 systemd[1]: sshd@36-8209-10.0.0.6:22-10.0.0.1:54962.service: Deactivated successfully.
Apr 20 20:18:58.846070 systemd[1]: sshd@36-8209-10.0.0.6:22-10.0.0.1:54962.service: Consumed 1.736s CPU time, 4.1M memory peak.
Apr 20 20:18:59.241222 systemd[1]: session-38.scope: Deactivated successfully.
Apr 20 20:18:59.370592 systemd[1]: session-38.scope: Consumed 7.545s CPU time, 15.8M memory peak.
Apr 20 20:18:59.833949 systemd-logind[1609]: Session 38 logged out. Waiting for processes to exit.
Apr 20 20:19:00.274003 systemd-logind[1609]: Removed session 38.
Apr 20 20:19:03.674309 systemd[1]: Started sshd@37-8210-10.0.0.6:22-10.0.0.1:55976.service - OpenSSH per-connection server daemon (10.0.0.1:55976).
Apr 20 20:19:05.200761 kubelet[3176]: E0420 20:19:05.192768 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.582s"
Apr 20 20:19:05.668618 kubelet[3176]: E0420 20:19:05.664398 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:05.716110 kubelet[3176]: E0420 20:19:05.664326 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:06.901415 sshd[7033]: Accepted publickey for core from 10.0.0.1 port 55976 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:19:07.171086 sshd-session[7033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:19:07.388052 kubelet[3176]: E0420 20:19:07.379009 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.042s"
Apr 20 20:19:08.019575 systemd-logind[1609]: New session '39' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:19:08.664130 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 20 20:19:21.051108 sshd[7050]: Connection closed by 10.0.0.1 port 55976
Apr 20 20:19:21.065069 sshd-session[7033]: pam_unix(sshd:session): session closed for user core
Apr 20 20:19:21.598202 systemd[1]: sshd@37-8210-10.0.0.6:22-10.0.0.1:55976.service: Deactivated successfully.
Apr 20 20:19:21.704108 systemd[1]: sshd@37-8210-10.0.0.6:22-10.0.0.1:55976.service: Consumed 1.498s CPU time, 4.3M memory peak.
Apr 20 20:19:22.091692 systemd[1]: session-39.scope: Deactivated successfully.
Apr 20 20:19:22.167485 systemd[1]: session-39.scope: Consumed 6.811s CPU time, 15.8M memory peak.
Apr 20 20:19:22.531937 systemd-logind[1609]: Session 39 logged out. Waiting for processes to exit.
Apr 20 20:19:23.153143 systemd-logind[1609]: Removed session 39.
Apr 20 20:19:25.820494 systemd[1]: cri-containerd-f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51.scope: Deactivated successfully.
Apr 20 20:19:26.019713 systemd[1]: cri-containerd-f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51.scope: Consumed 27.247s CPU time, 38.3M memory peak.
Apr 20 20:19:26.240029 containerd[1640]: time="2026-04-20T20:19:26.149725875Z" level=info msg="received container exit event container_id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" pid:6810 exit_status:1 exited_at:{seconds:1776716365 nanos:819122265}"
Apr 20 20:19:27.677754 systemd[1]: Started sshd@38-4102-10.0.0.6:22-10.0.0.1:53412.service - OpenSSH per-connection server daemon (10.0.0.1:53412).
Apr 20 20:19:31.940806 kubelet[3176]: E0420 20:19:31.940049 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="24.518s"
Apr 20 20:19:33.719673 kubelet[3176]: E0420 20:19:33.567712 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:33.964657 sshd[7087]: Accepted publickey for core from 10.0.0.1 port 53412 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:19:34.362647 kubelet[3176]: E0420 20:19:34.357523 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:34.381325 sshd-session[7087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:19:35.856506 systemd-logind[1609]: New session '40' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:19:36.114299 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 20 20:19:36.631100 kubelet[3176]: E0420 20:19:36.397790 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.148s"
Apr 20 20:19:36.803197 containerd[1640]: time="2026-04-20T20:19:36.711040955Z" level=error msg="failed to delete task" error="context deadline exceeded" id=f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51
Apr 20 20:19:37.027031 containerd[1640]: time="2026-04-20T20:19:36.776193037Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 20 20:19:37.027031 containerd[1640]: time="2026-04-20T20:19:36.964085173Z" level=error msg="failed to handle container TaskExit event container_id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" pid:6810 exit_status:1 exited_at:{seconds:1776716365 nanos:819122265}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:19:36.822881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51-rootfs.mount: Deactivated successfully.
Apr 20 20:19:38.447066 containerd[1640]: time="2026-04-20T20:19:38.400797510Z" level=info msg="TaskExit event container_id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" pid:6810 exit_status:1 exited_at:{seconds:1776716365 nanos:819122265}"
Apr 20 20:19:40.748712 kubelet[3176]: E0420 20:19:40.739267 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.085s"
Apr 20 20:19:43.097053 kubelet[3176]: E0420 20:19:42.179912 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:45.147608 sshd[7126]: Connection closed by 10.0.0.1 port 53412
Apr 20 20:19:45.148376 sshd-session[7087]: pam_unix(sshd:session): session closed for user core
Apr 20 20:19:45.481754 kubelet[3176]: E0420 20:19:45.470732 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:45.563242 systemd[1]: sshd@38-4102-10.0.0.6:22-10.0.0.1:53412.service: Deactivated successfully.
Apr 20 20:19:45.584751 systemd[1]: sshd@38-4102-10.0.0.6:22-10.0.0.1:53412.service: Consumed 2.445s CPU time, 4.1M memory peak.
Apr 20 20:19:45.824330 systemd[1]: session-40.scope: Deactivated successfully.
Apr 20 20:19:45.844022 systemd[1]: session-40.scope: Consumed 5.643s CPU time, 16M memory peak.
Apr 20 20:19:46.167129 systemd-logind[1609]: Session 40 logged out. Waiting for processes to exit.
Apr 20 20:19:46.520224 kubelet[3176]: E0420 20:19:46.231010 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:46.570060 systemd-logind[1609]: Removed session 40.
Apr 20 20:19:48.464868 containerd[1640]: time="2026-04-20T20:19:48.448069004Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 20 20:19:49.212524 containerd[1640]: time="2026-04-20T20:19:49.208322572Z" level=error msg="Failed to handle backOff event container_id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" pid:6810 exit_status:1 exited_at:{seconds:1776716365 nanos:819122265} for f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:19:51.008038 systemd[1]: Started sshd@39-8211-10.0.0.6:22-10.0.0.1:50674.service - OpenSSH per-connection server daemon (10.0.0.1:50674).
Apr 20 20:19:52.034032 containerd[1640]: time="2026-04-20T20:19:52.031515412Z" level=info msg="TaskExit event container_id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" id:\"f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51\" pid:6810 exit_status:1 exited_at:{seconds:1776716365 nanos:819122265}"
Apr 20 20:19:52.080661 kubelet[3176]: E0420 20:19:52.038198 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.192s"
Apr 20 20:19:52.931869 sshd[7183]: Accepted publickey for core from 10.0.0.1 port 50674 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:19:52.950703 sshd-session[7183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:19:53.625289 kubelet[3176]: E0420 20:19:53.608162 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:19:54.543544 systemd-logind[1609]: New session '41' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:19:54.833959 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 20 20:19:58.600058 kubelet[3176]: E0420 20:19:58.598105 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.848s"
Apr 20 20:20:00.861206 sshd[7195]: Connection closed by 10.0.0.1 port 50674
Apr 20 20:20:00.894287 sshd-session[7183]: pam_unix(sshd:session): session closed for user core
Apr 20 20:20:01.152665 kubelet[3176]: E0420 20:20:01.125923 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.449s"
Apr 20 20:20:01.241616 systemd[1]: sshd@39-8211-10.0.0.6:22-10.0.0.1:50674.service: Deactivated successfully.
Apr 20 20:20:01.242688 systemd[1]: sshd@39-8211-10.0.0.6:22-10.0.0.1:50674.service: Consumed 1.004s CPU time, 4.1M memory peak.
Apr 20 20:20:01.513221 systemd[1]: session-41.scope: Deactivated successfully.
Apr 20 20:20:01.561718 systemd[1]: session-41.scope: Consumed 4.181s CPU time, 16.3M memory peak.
Apr 20 20:20:01.850736 systemd-logind[1609]: Session 41 logged out. Waiting for processes to exit.
Apr 20 20:20:01.912057 systemd-logind[1609]: Removed session 41.
Apr 20 20:20:02.215858 kubelet[3176]: I0420 20:20:02.192513 3176 scope.go:122] "RemoveContainer" containerID="49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543"
Apr 20 20:20:02.236005 kubelet[3176]: I0420 20:20:02.232048 3176 scope.go:122] "RemoveContainer" containerID="f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51"
Apr 20 20:20:02.236005 kubelet[3176]: E0420 20:20:02.232308 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:20:02.927531 containerd[1640]: time="2026-04-20T20:20:02.926754752Z" level=info msg="RemoveContainer for \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\""
Apr 20 20:20:02.945137 containerd[1640]: time="2026-04-20T20:20:02.940793881Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for container name:\"kube-controller-manager\" attempt:4"
Apr 20 20:20:03.881021 containerd[1640]: time="2026-04-20T20:20:03.844889335Z" level=info msg="container event discarded" container=b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee type=CONTAINER_STOPPED_EVENT
Apr 20 20:20:04.402121 containerd[1640]: time="2026-04-20T20:20:04.382015866Z" level=info msg="RemoveContainer for \"49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543\" returns successfully"
Apr 20 20:20:05.818900 kubelet[3176]: E0420 20:20:05.818650 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.065s"
Apr 20 20:20:06.760279 containerd[1640]: time="2026-04-20T20:20:06.756802100Z" level=info msg="Container 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:20:06.819302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020118401.mount: Deactivated successfully.
Apr 20 20:20:07.364968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861406280.mount: Deactivated successfully.
Apr 20 20:20:07.579860 kubelet[3176]: E0420 20:20:07.577806 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.758s"
Apr 20 20:20:08.147826 kubelet[3176]: E0420 20:20:07.969874 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:20:08.879967 systemd[1]: Started sshd@40-12296-10.0.0.6:22-10.0.0.1:58276.service - OpenSSH per-connection server daemon (10.0.0.1:58276).
Apr 20 20:20:10.165895 kubelet[3176]: E0420 20:20:10.146286 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.431s"
Apr 20 20:20:10.332187 containerd[1640]: time="2026-04-20T20:20:10.301800345Z" level=info msg="CreateContainer within sandbox \"64a97b8435f3ddeadc09165de86e15b48f97c0eb04468620f43d058579b319d1\" for name:\"kube-controller-manager\" attempt:4 returns container id \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\""
Apr 20 20:20:10.749681 containerd[1640]: time="2026-04-20T20:20:10.749431691Z" level=info msg="StartContainer for \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\""
Apr 20 20:20:11.355633 sshd[7257]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:20:11.375442 sshd-session[7257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:20:11.414805 kubelet[3176]: E0420 20:20:11.413656 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.225s"
Apr 20 20:20:11.744906 containerd[1640]: time="2026-04-20T20:20:11.718889898Z" level=info msg="connecting to shim 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" address="unix:///run/containerd/s/e1f0007e2c6f1f748a9dc06ca555a4405d786137015c68dabb60c59e595b314f" protocol=ttrpc version=3
Apr 20 20:20:12.454617 systemd-logind[1609]: New session '42' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:20:12.778748 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 20 20:20:14.783854 containerd[1640]: time="2026-04-20T20:20:14.780233070Z" level=error msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" to be killed: wait container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\": context deadline exceeded"
Apr 20 20:20:14.943189 kubelet[3176]: E0420 20:20:14.939629 3176 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"
Apr 20 20:20:14.960146 kubelet[3176]: E0420 20:20:14.951907 3176 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" containerName="kube-scheduler" containerID="containerd://37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" gracePeriod=30
Apr 20 20:20:15.384252 kubelet[3176]: E0420 20:20:15.169763 3176 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"} pod="kube-system/kube-scheduler-localhost"
Apr 20 20:20:15.548989 kubelet[3176]: E0420 20:20:15.540856 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 20:20:18.358700 containerd[1640]: time="2026-04-20T20:20:18.241760644Z" level=info msg="container event discarded" container=92f997e10caaa969f76f6e97385a5a583d487b5ba75c74d285d90735d225065f type=CONTAINER_DELETED_EVENT
Apr 20 20:20:19.833303 kubelet[3176]: E0420 20:20:19.774702 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.044s"
Apr 20 20:20:21.182611 systemd[1]: Started cri-containerd-9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967.scope - libcontainer container 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967.
Apr 20 20:20:21.958152 containerd[1640]: time="2026-04-20T20:20:21.739884753Z" level=info msg="container event discarded" container=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543 type=CONTAINER_CREATED_EVENT
Apr 20 20:20:25.947924 sshd[7279]: Connection closed by 10.0.0.1 port 58276
Apr 20 20:20:25.981840 sshd-session[7257]: pam_unix(sshd:session): session closed for user core
Apr 20 20:20:26.616125 systemd[1]: sshd@40-12296-10.0.0.6:22-10.0.0.1:58276.service: Deactivated successfully.
Apr 20 20:20:26.650802 systemd[1]: sshd@40-12296-10.0.0.6:22-10.0.0.1:58276.service: Consumed 1.092s CPU time, 4.2M memory peak.
Apr 20 20:20:26.956426 systemd[1]: session-42.scope: Deactivated successfully.
Apr 20 20:20:27.024814 systemd[1]: session-42.scope: Consumed 8.300s CPU time, 16.8M memory peak.
Apr 20 20:20:27.339330 systemd-logind[1609]: Session 42 logged out. Waiting for processes to exit.
Apr 20 20:20:27.797884 systemd-logind[1609]: Removed session 42.
Apr 20 20:20:30.168936 kubelet[3176]: E0420 20:20:30.168081 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.037s"
Apr 20 20:20:32.672826 systemd[1]: Started sshd@41-8212-10.0.0.6:22-10.0.0.1:45750.service - OpenSSH per-connection server daemon (10.0.0.1:45750).
Apr 20 20:20:32.923827 kubelet[3176]: E0420 20:20:32.693282 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.521s"
Apr 20 20:20:36.525190 containerd[1640]: time="2026-04-20T20:20:36.520806503Z" level=info msg="container event discarded" container=b5a4f9b9730597fdf8bf6c8fcd262e07b1e0d7a67479d45d6406926d9ace4fee type=CONTAINER_DELETED_EVENT
Apr 20 20:20:37.142932 containerd[1640]: time="2026-04-20T20:20:37.099958336Z" level=info msg="container event discarded" container=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543 type=CONTAINER_STARTED_EVENT
Apr 20 20:20:38.537436 containerd[1640]: time="2026-04-20T20:20:38.532890676Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with timeout 30 (s)"
Apr 20 20:20:39.267906 containerd[1640]: time="2026-04-20T20:20:39.264271907Z" level=info msg="Skipping the sending of signal terminated to container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:20:39.779663 sshd[7347]: Accepted publickey for core from 10.0.0.1 port 45750 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:20:40.289304 sshd-session[7347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:20:41.719202 systemd-logind[1609]: New session '43' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:20:42.422546 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 20 20:20:55.954906 sshd[7383]: Connection closed by 10.0.0.1 port 45750
Apr 20 20:20:56.052579 sshd-session[7347]: pam_unix(sshd:session): session closed for user core
Apr 20 20:20:56.418030 systemd[1]: sshd@41-8212-10.0.0.6:22-10.0.0.1:45750.service: Deactivated successfully.
Apr 20 20:20:56.442619 systemd[1]: sshd@41-8212-10.0.0.6:22-10.0.0.1:45750.service: Consumed 2.247s CPU time, 4.1M memory peak.
Apr 20 20:20:56.740470 systemd[1]: session-43.scope: Deactivated successfully.
Apr 20 20:20:56.798741 systemd[1]: session-43.scope: Consumed 8.267s CPU time, 15.8M memory peak.
Apr 20 20:20:57.251999 systemd-logind[1609]: Session 43 logged out. Waiting for processes to exit.
Apr 20 20:20:57.806126 systemd-logind[1609]: Removed session 43.
Apr 20 20:20:58.536904 kubelet[3176]: E0420 20:20:58.534824 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.198s"
Apr 20 20:20:58.843413 containerd[1640]: time="2026-04-20T20:20:58.839740868Z" level=info msg="StartContainer for \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" returns successfully"
Apr 20 20:21:00.683089 kubelet[3176]: E0420 20:21:00.670953 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.064s"
Apr 20 20:21:01.868132 systemd[1]: Started sshd@42-8213-10.0.0.6:22-10.0.0.1:55364.service - OpenSSH per-connection server daemon (10.0.0.1:55364).
Apr 20 20:21:08.957638 sshd[7429]: Accepted publickey for core from 10.0.0.1 port 55364 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:21:09.318650 sshd-session[7429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:21:09.582939 containerd[1640]: time="2026-04-20T20:21:09.575019511Z" level=info msg="Kill container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:21:10.177141 systemd-logind[1609]: New session '44' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:21:11.003794 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 20 20:21:17.779825 kubelet[3176]: E0420 20:21:17.764982 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.031s"
Apr 20 20:21:21.138861 kubelet[3176]: E0420 20:21:21.138132 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:21:21.974216 kubelet[3176]: E0420 20:21:21.851518 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.018s"
Apr 20 20:21:25.659973 kubelet[3176]: E0420 20:21:25.465314 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.386s"
Apr 20 20:21:28.239093 kubelet[3176]: E0420 20:21:27.994078 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:21:28.885850 sshd[7460]: Connection closed by 10.0.0.1 port 55364
Apr 20 20:21:28.923094 sshd-session[7429]: pam_unix(sshd:session): session closed for user core
Apr 20 20:21:29.270976 systemd[1]: sshd@42-8213-10.0.0.6:22-10.0.0.1:55364.service: Deactivated successfully.
Apr 20 20:21:29.338742 systemd[1]: sshd@42-8213-10.0.0.6:22-10.0.0.1:55364.service: Consumed 2.216s CPU time, 4.1M memory peak.
Apr 20 20:21:29.703741 systemd[1]: session-44.scope: Deactivated successfully.
Apr 20 20:21:29.732436 systemd[1]: session-44.scope: Consumed 10.528s CPU time, 16.4M memory peak.
Apr 20 20:21:30.054951 systemd-logind[1609]: Session 44 logged out. Waiting for processes to exit.
Apr 20 20:21:30.335988 systemd-logind[1609]: Removed session 44.
Apr 20 20:21:35.342837 systemd[1]: Started sshd@43-4103-10.0.0.6:22-10.0.0.1:37344.service - OpenSSH per-connection server daemon (10.0.0.1:37344).
Apr 20 20:21:38.936007 kubelet[3176]: E0420 20:21:38.932199 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:21:41.477623 sshd[7525]: Accepted publickey for core from 10.0.0.1 port 37344 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:21:41.846913 sshd-session[7525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:21:43.325475 systemd-logind[1609]: New session '45' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:21:43.448131 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 20 20:21:55.774109 containerd[1640]: time="2026-04-20T20:21:55.541329724Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}"
Apr 20 20:21:59.180092 kubelet[3176]: E0420 20:21:59.175358 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.245s"
Apr 20 20:21:59.492934 sshd[7544]: Connection closed by 10.0.0.1 port 37344
Apr 20 20:21:59.471964 sshd-session[7525]: pam_unix(sshd:session): session closed for user core
Apr 20 20:21:59.737290 systemd[1]: sshd@43-4103-10.0.0.6:22-10.0.0.1:37344.service: Deactivated successfully.
Apr 20 20:21:59.757850 systemd[1]: sshd@43-4103-10.0.0.6:22-10.0.0.1:37344.service: Consumed 1.937s CPU time, 4.4M memory peak.
Apr 20 20:21:59.965142 systemd[1]: session-45.scope: Deactivated successfully.
Apr 20 20:21:59.989141 systemd[1]: session-45.scope: Consumed 9.623s CPU time, 17.8M memory peak.
Apr 20 20:22:00.258621 systemd-logind[1609]: Session 45 logged out. Waiting for processes to exit.
Apr 20 20:22:00.452153 systemd-logind[1609]: Removed session 45.
Apr 20 20:22:02.558014 kubelet[3176]: E0420 20:22:02.555582 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:22:02.729001 kubelet[3176]: E0420 20:22:02.588674 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:22:02.762920 kubelet[3176]: E0420 20:22:02.759183 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:22:02.882107 kubelet[3176]: E0420 20:22:02.797360 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:22:05.877092 systemd[1]: Started sshd@44-8214-10.0.0.6:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918).
Apr 20 20:22:06.257742 kubelet[3176]: E0420 20:22:06.188205 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:22:06.777089 kubelet[3176]: E0420 20:22:06.774763 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:22:08.219653 containerd[1640]: time="2026-04-20T20:22:08.217242809Z" level=error msg="Failed to handle backOff event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378} for 37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" error="failed to handle container TaskExit event: failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: "
Apr 20 20:22:15.678424 sshd[7591]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:22:16.144887 sshd-session[7591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:22:17.741716 systemd-logind[1609]: New session '46' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:22:17.999094 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 20 20:22:25.478677 kubelet[3176]: E0420 20:22:25.472084 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="24s"
Apr 20 20:22:34.448672 sshd[7624]: Connection closed by 10.0.0.1 port 49918
Apr 20 20:22:34.543119 sshd-session[7591]: pam_unix(sshd:session): session closed for user core
Apr 20 20:22:34.968106 systemd[1]: sshd@44-8214-10.0.0.6:22-10.0.0.1:49918.service: Deactivated successfully.
Apr 20 20:22:35.073030 systemd[1]: sshd@44-8214-10.0.0.6:22-10.0.0.1:49918.service: Consumed 2.679s CPU time, 4.3M memory peak.
Apr 20 20:22:35.178732 systemd[1]: session-46.scope: Deactivated successfully.
Apr 20 20:22:35.180762 systemd[1]: session-46.scope: Consumed 9.915s CPU time, 15.9M memory peak.
Apr 20 20:22:35.677469 systemd-logind[1609]: Session 46 logged out. Waiting for processes to exit.
Apr 20 20:22:36.408050 systemd-logind[1609]: Removed session 46.
Apr 20 20:22:40.814936 systemd[1]: Started sshd@45-12297-10.0.0.6:22-10.0.0.1:33456.service - OpenSSH per-connection server daemon (10.0.0.1:33456).
Apr 20 20:22:48.144022 sshd[7670]: Accepted publickey for core from 10.0.0.1 port 33456 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:22:48.320580 sshd-session[7670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:22:49.989433 systemd-logind[1609]: New session '47' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:22:50.264616 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 20 20:22:54.907866 systemd[1]: cri-containerd-9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967.scope: Deactivated successfully.
Apr 20 20:22:54.985408 systemd[1]: cri-containerd-9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967.scope: Consumed 50.460s CPU time, 35.8M memory peak.
Apr 20 20:22:57.431938 containerd[1640]: time="2026-04-20T20:22:57.423945079Z" level=info msg="received container exit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:23:01.037942 sshd[7690]: Connection closed by 10.0.0.1 port 33456
Apr 20 20:23:01.058036 sshd-session[7670]: pam_unix(sshd:session): session closed for user core
Apr 20 20:23:01.450288 systemd[1]: sshd@45-12297-10.0.0.6:22-10.0.0.1:33456.service: Deactivated successfully.
Apr 20 20:23:01.539944 systemd[1]: sshd@45-12297-10.0.0.6:22-10.0.0.1:33456.service: Consumed 1.981s CPU time, 4.2M memory peak.
Apr 20 20:23:01.772994 systemd[1]: session-47.scope: Deactivated successfully.
Apr 20 20:23:01.884104 systemd[1]: session-47.scope: Consumed 6.479s CPU time, 16.2M memory peak.
Apr 20 20:23:02.165684 systemd-logind[1609]: Session 47 logged out. Waiting for processes to exit.
Apr 20 20:23:02.341479 systemd-logind[1609]: Removed session 47.
Apr 20 20:23:02.600081 kubelet[3176]: E0420 20:23:02.595389 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="36.103s"
Apr 20 20:23:03.248751 kubelet[3176]: E0420 20:23:03.199184 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:23:05.877081 containerd[1640]: time="2026-04-20T20:23:05.769065842Z" level=info msg="container event discarded" container=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543 type=CONTAINER_STOPPED_EVENT
Apr 20 20:23:06.263105 kubelet[3176]: E0420 20:23:05.889877 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.148s"
Apr 20 20:23:07.544830 systemd[1]: Started sshd@46-8215-10.0.0.6:22-10.0.0.1:34330.service - OpenSSH per-connection server daemon (10.0.0.1:34330).
Apr 20 20:23:07.783011 containerd[1640]: time="2026-04-20T20:23:07.644042611Z" level=error msg="ttrpc: received message on inactive stream" stream=29
Apr 20 20:23:07.783011 containerd[1640]: time="2026-04-20T20:23:07.722139892Z" level=error msg="failed to handle container TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}" error="failed to stop container: context deadline exceeded"
Apr 20 20:23:07.966785 containerd[1640]: time="2026-04-20T20:23:07.807105191Z" level=error msg="ttrpc: received message on inactive stream" stream=31
Apr 20 20:23:08.160686 kubelet[3176]: E0420 20:23:08.153126 3176 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"
Apr 20 20:23:08.382046 containerd[1640]: time="2026-04-20T20:23:08.358130106Z" level=error msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" to be killed: wait container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\": context canceled"
Apr 20 20:23:08.649873 kubelet[3176]: E0420 20:23:08.474220 3176 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" containerName="kube-scheduler" containerID="containerd://37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" gracePeriod=30
Apr 20 20:23:08.805871 kubelet[3176]: E0420 20:23:08.801176 3176 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"} pod="kube-system/kube-scheduler-localhost"
Apr 20 20:23:08.940732 containerd[1640]: time="2026-04-20T20:23:08.874606424Z" level=info msg="container event discarded" container=f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51 type=CONTAINER_CREATED_EVENT
Apr 20 20:23:08.994054 kubelet[3176]: E0420 20:23:08.960834 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 20:23:09.319961 containerd[1640]: time="2026-04-20T20:23:09.269623087Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:23:10.546254 kubelet[3176]: E0420 20:23:10.501521 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.557s"
Apr 20 20:23:12.172649 containerd[1640]: time="2026-04-20T20:23:12.110972058Z" level=info msg="container event discarded" container=f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51 type=CONTAINER_STARTED_EVENT
Apr 20 20:23:13.549064 containerd[1640]: time="2026-04-20T20:23:13.435721401Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with timeout 30 (s)"
Apr 20 20:23:13.867100 kubelet[3176]: E0420 20:23:13.834955 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.133s"
Apr 20 20:23:14.069975 containerd[1640]: time="2026-04-20T20:23:14.060840283Z" level=info msg="Skipping the sending of signal terminated to container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:23:14.840834 sshd[7741]: Accepted publickey for core from 10.0.0.1 port 34330 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:23:15.040777 sshd-session[7741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:23:16.765295 systemd-logind[1609]: New session '48' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:23:17.788744 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 20 20:23:19.273919 containerd[1640]: time="2026-04-20T20:23:19.192873836Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 20 20:23:19.560184 containerd[1640]: time="2026-04-20T20:23:19.442673024Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 20 20:23:19.679130 containerd[1640]: time="2026-04-20T20:23:19.677745502Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:23:21.271664 kubelet[3176]: E0420 20:23:21.266856 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.333s"
Apr 20 20:23:22.131683 containerd[1640]: time="2026-04-20T20:23:22.103322286Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:23:22.334926 kubelet[3176]: E0420 20:23:22.261910 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:23:22.598872 kubelet[3176]: E0420 20:23:22.594856 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:23:22.693439 kubelet[3176]: E0420 20:23:22.690796 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:23:22.989686 kubelet[3176]: E0420 20:23:22.834023 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:23:26.394997 kubelet[3176]: E0420 20:23:26.393260 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.857s"
Apr 20 20:23:26.640938 sshd[7790]: Connection closed by 10.0.0.1 port 34330
Apr 20 20:23:26.663804 sshd-session[7741]: pam_unix(sshd:session): session closed for user core
Apr 20 20:23:27.146260 systemd[1]: sshd@46-8215-10.0.0.6:22-10.0.0.1:34330.service: Deactivated successfully.
Apr 20 20:23:27.221330 systemd[1]: sshd@46-8215-10.0.0.6:22-10.0.0.1:34330.service: Consumed 2.345s CPU time, 4.1M memory peak.
Apr 20 20:23:27.464678 systemd[1]: session-48.scope: Deactivated successfully.
Apr 20 20:23:27.520014 systemd[1]: session-48.scope: Consumed 5.099s CPU time, 18.1M memory peak.
Apr 20 20:23:27.990032 systemd-logind[1609]: Session 48 logged out. Waiting for processes to exit.
Apr 20 20:23:28.185186 systemd-logind[1609]: Removed session 48.
Apr 20 20:23:29.248290 kubelet[3176]: E0420 20:23:29.245875 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.84s"
Apr 20 20:23:30.362037 kubelet[3176]: E0420 20:23:30.359036 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.056s"
Apr 20 20:23:32.142807 containerd[1640]: time="2026-04-20T20:23:32.103146241Z" level=error msg="failed to delete task" error="context deadline exceeded" id=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967
Apr 20 20:23:32.286645 containerd[1640]: time="2026-04-20T20:23:32.281232593Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:23:32.650100 containerd[1640]: time="2026-04-20T20:23:32.630112655Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Apr 20 20:23:32.937505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967-rootfs.mount: Deactivated successfully.
Apr 20 20:23:33.260841 kubelet[3176]: E0420 20:23:33.180409 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.774s"
Apr 20 20:23:33.739217 systemd[1]: Started sshd@47-8216-10.0.0.6:22-10.0.0.1:41426.service - OpenSSH per-connection server daemon (10.0.0.1:41426).
Apr 20 20:23:37.171953 containerd[1640]: time="2026-04-20T20:23:37.170624308Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:23:37.835065 kubelet[3176]: E0420 20:23:37.829975 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.532s"
Apr 20 20:23:38.384439 kubelet[3176]: E0420 20:23:38.375520 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:23:39.499894 sshd[7839]: Accepted publickey for core from 10.0.0.1 port 41426 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:23:39.536195 sshd-session[7839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:23:40.693767 systemd-logind[1609]: New session '49' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:23:41.075624 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 20 20:23:45.948714 containerd[1640]: time="2026-04-20T20:23:45.945666628Z" level=info msg="Kill container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:23:48.132996 containerd[1640]: time="2026-04-20T20:23:47.681651353Z" level=error msg="failed to delete task" error="rpc error: code = Unknown desc = failed to delete task: runc did not terminate successfully: exit status 137: " id=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967
Apr 20 20:23:48.432780 containerd[1640]: time="2026-04-20T20:23:48.412543154Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: failed to delete task: runc did not terminate successfully: exit status 137: "
Apr 20 20:23:51.094393 kubelet[3176]: E0420 20:23:51.093105 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.381s"
Apr 20 20:23:54.445039 kubelet[3176]: E0420 20:23:54.090810 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.817s"
Apr 20 20:23:56.117011 sshd[7874]: Connection closed by 10.0.0.1 port 41426
Apr 20 20:23:56.135535 sshd-session[7839]: pam_unix(sshd:session): session closed for user core
Apr 20 20:23:56.558090 systemd[1]: sshd@47-8216-10.0.0.6:22-10.0.0.1:41426.service: Deactivated successfully.
Apr 20 20:23:56.579298 systemd[1]: sshd@47-8216-10.0.0.6:22-10.0.0.1:41426.service: Consumed 2.211s CPU time, 4.1M memory peak.
Apr 20 20:23:56.928976 systemd[1]: session-49.scope: Deactivated successfully.
Apr 20 20:23:56.960568 systemd[1]: session-49.scope: Consumed 8.901s CPU time, 15.8M memory peak.
Apr 20 20:23:57.238286 containerd[1640]: time="2026-04-20T20:23:57.185771876Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:23:57.273939 systemd-logind[1609]: Session 49 logged out. Waiting for processes to exit.
Apr 20 20:23:57.624019 systemd-logind[1609]: Removed session 49.
Apr 20 20:23:57.960469 kubelet[3176]: E0420 20:23:57.948719 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.279s"
Apr 20 20:24:00.176522 kubelet[3176]: E0420 20:24:00.173694 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.224s"
Apr 20 20:24:02.286279 systemd[1]: Started sshd@48-9-10.0.0.6:22-10.0.0.1:57676.service - OpenSSH per-connection server daemon (10.0.0.1:57676).
Apr 20 20:24:07.242098 containerd[1640]: time="2026-04-20T20:24:07.236752516Z" level=error msg="failed to delete task" error="context deadline exceeded" id=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967
Apr 20 20:24:07.519210 containerd[1640]: time="2026-04-20T20:24:07.488186309Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:24:07.571691 containerd[1640]: time="2026-04-20T20:24:07.552969663Z" level=error msg="ttrpc: received message on inactive stream" stream=87
Apr 20 20:24:07.751302 sshd[7957]: Accepted publickey for core from 10.0.0.1 port 57676 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:24:07.950316 sshd-session[7957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:24:08.460508 kubelet[3176]: E0420 20:24:08.366880 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.674s"
Apr 20 20:24:09.552568 containerd[1640]: time="2026-04-20T20:24:09.546080026Z" level=info msg="StopContainer for \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" with timeout 30 (s)"
Apr 20 20:24:09.640936 systemd-logind[1609]: New session '50' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:24:09.987253 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 20 20:24:10.663842 containerd[1640]: time="2026-04-20T20:24:10.650646829Z" level=info msg="Stop container \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" with signal terminated"
Apr 20 20:24:11.624555 kubelet[3176]: E0420 20:24:11.586650 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.093s"
Apr 20 20:24:13.311000 kubelet[3176]: E0420 20:24:13.300943 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.708s"
Apr 20 20:24:20.904461 kubelet[3176]: E0420 20:24:20.861258 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.554s"
Apr 20 20:24:22.269042 sshd[7985]: Connection closed by 10.0.0.1 port 57676
Apr 20 20:24:22.289921 sshd-session[7957]: pam_unix(sshd:session): session closed for user core
Apr 20 20:24:22.678933 systemd[1]: sshd@48-9-10.0.0.6:22-10.0.0.1:57676.service: Deactivated successfully.
Apr 20 20:24:22.835014 systemd[1]: sshd@48-9-10.0.0.6:22-10.0.0.1:57676.service: Consumed 2.036s CPU time, 4.1M memory peak.
Apr 20 20:24:23.094206 systemd[1]: session-50.scope: Deactivated successfully.
Apr 20 20:24:23.104328 systemd[1]: session-50.scope: Consumed 7.273s CPU time, 15.8M memory peak.
Apr 20 20:24:23.391082 systemd-logind[1609]: Session 50 logged out. Waiting for processes to exit.
Apr 20 20:24:23.658602 systemd-logind[1609]: Removed session 50.
Apr 20 20:24:24.020857 containerd[1640]: time="2026-04-20T20:24:24.006964154Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:24:26.826853 kubelet[3176]: E0420 20:24:26.824318 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.824s"
Apr 20 20:24:28.625144 systemd[1]: Started sshd@49-12298-10.0.0.6:22-10.0.0.1:40898.service - OpenSSH per-connection server daemon (10.0.0.1:40898).
Apr 20 20:24:29.070022 kubelet[3176]: E0420 20:24:29.065133 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.227s"
Apr 20 20:24:29.945633 kubelet[3176]: E0420 20:24:29.936210 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:24:30.350553 kubelet[3176]: E0420 20:24:30.349927 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:24:31.933818 kubelet[3176]: E0420 20:24:31.933497 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.772s"
Apr 20 20:24:33.465998 sshd[8055]: Accepted publickey for core from 10.0.0.1 port 40898 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:24:33.960834 sshd-session[8055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:24:34.541778 containerd[1640]: time="2026-04-20T20:24:34.390853382Z" level=error msg="failed to delete task" error="context deadline exceeded" id=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967
Apr 20 20:24:34.838949 containerd[1640]: time="2026-04-20T20:24:34.683777395Z" level=error msg="ttrpc: received message on inactive stream" stream=107
Apr 20 20:24:35.289979 containerd[1640]: time="2026-04-20T20:24:34.812876983Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:24:35.496284 systemd-logind[1609]: New session '51' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:24:35.824237 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 20 20:24:39.067890 kubelet[3176]: E0420 20:24:39.063610 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.105s"
Apr 20 20:24:45.370557 kubelet[3176]: E0420 20:24:45.369653 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.127s"
Apr 20 20:24:46.167116 containerd[1640]: time="2026-04-20T20:24:46.162198785Z" level=info msg="Kill container \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\""
Apr 20 20:24:47.706943 kubelet[3176]: E0420 20:24:47.693087 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.253s"
Apr 20 20:24:47.825371 kubelet[3176]: E0420 20:24:47.759943 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:24:47.825371 kubelet[3176]: E0420 20:24:47.774642 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:24:50.464716 kubelet[3176]: E0420 20:24:50.459714 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.763s"
Apr 20 20:24:51.454950 sshd[8081]: Connection closed by 10.0.0.1 port 40898
Apr 20 20:24:51.495970 sshd-session[8055]: pam_unix(sshd:session): session closed for user core
Apr 20 20:24:51.876015 systemd[1]: sshd@49-12298-10.0.0.6:22-10.0.0.1:40898.service: Deactivated successfully.
Apr 20 20:24:51.935158 systemd[1]: sshd@49-12298-10.0.0.6:22-10.0.0.1:40898.service: Consumed 1.653s CPU time, 4.2M memory peak.
Apr 20 20:24:52.334059 systemd[1]: session-51.scope: Deactivated successfully.
Apr 20 20:24:52.358239 systemd[1]: session-51.scope: Consumed 8.300s CPU time, 15.6M memory peak.
Apr 20 20:24:52.540952 systemd-logind[1609]: Session 51 logged out. Waiting for processes to exit.
Apr 20 20:24:52.982925 systemd-logind[1609]: Removed session 51.
Apr 20 20:24:53.456988 kubelet[3176]: E0420 20:24:53.444270 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.891s"
Apr 20 20:24:57.565240 kubelet[3176]: E0420 20:24:57.489257 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.925s"
Apr 20 20:24:58.029861 systemd[1]: Started sshd@50-8217-10.0.0.6:22-10.0.0.1:52022.service - OpenSSH per-connection server daemon (10.0.0.1:52022).
Apr 20 20:25:00.104276 containerd[1640]: time="2026-04-20T20:24:59.948138425Z" level=info msg="container event discarded" container=f79ebd2694a0ea078794fec9eee6729d612e0a064e4847cf3345504c8b675e51 type=CONTAINER_STOPPED_EVENT
Apr 20 20:25:02.132106 kubelet[3176]: E0420 20:25:02.130295 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.569s"
Apr 20 20:25:03.358103 sshd[8150]: Accepted publickey for core from 10.0.0.1 port 52022 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:25:03.782470 sshd-session[8150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:25:05.169215 containerd[1640]: time="2026-04-20T20:25:04.590933228Z" level=info msg="container event discarded" container=49f062dd1a04e6efc9c1a2c9a0d02a0f520f5cce96d51ce589381563fc6fc543 type=CONTAINER_DELETED_EVENT
Apr 20 20:25:05.594228 kubelet[3176]: E0420 20:25:05.592978 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.365s"
Apr 20 20:25:05.799944 systemd-logind[1609]: New session '52' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:25:06.320211 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 20 20:25:07.842320 kubelet[3176]: E0420 20:25:07.840159 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.175s"
Apr 20 20:25:08.173686 containerd[1640]: time="2026-04-20T20:25:08.153657583Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:25:09.181291 containerd[1640]: time="2026-04-20T20:25:09.174143249Z" level=info msg="container event discarded" container=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967 type=CONTAINER_CREATED_EVENT
Apr 20 20:25:09.561197 kubelet[3176]: E0420 20:25:09.557847 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.715s"
Apr 20 20:25:10.204874 kubelet[3176]: E0420 20:25:10.201859 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:25:17.428023 sshd[8177]: Connection closed by 10.0.0.1 port 52022
Apr 20 20:25:17.451017 sshd-session[8150]: pam_unix(sshd:session): session closed for user core
Apr 20 20:25:17.758726 systemd[1]: sshd@50-8217-10.0.0.6:22-10.0.0.1:52022.service: Deactivated successfully.
Apr 20 20:25:17.864573 systemd[1]: sshd@50-8217-10.0.0.6:22-10.0.0.1:52022.service: Consumed 1.913s CPU time, 4.1M memory peak.
Apr 20 20:25:17.957307 systemd[1]: session-52.scope: Deactivated successfully.
Apr 20 20:25:17.989326 systemd[1]: session-52.scope: Consumed 5.844s CPU time, 16.1M memory peak.
Apr 20 20:25:18.135059 systemd-logind[1609]: Session 52 logged out. Waiting for processes to exit.
Apr 20 20:25:18.243086 systemd-logind[1609]: Removed session 52.
Apr 20 20:25:18.510755 containerd[1640]: time="2026-04-20T20:25:18.449062938Z" level=error msg="failed to drain init process 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967 io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Apr 20 20:25:18.668599 containerd[1640]: time="2026-04-20T20:25:18.472950861Z" level=error msg="failed to delete task" error="context deadline exceeded" id=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967
Apr 20 20:25:18.787203 containerd[1640]: time="2026-04-20T20:25:18.599273836Z" level=error msg="ttrpc: received message on inactive stream" stream=127
Apr 20 20:25:19.034043 containerd[1640]: time="2026-04-20T20:25:18.970173159Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 20:25:20.670172 kubelet[3176]: E0420 20:25:20.668318 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.771s"
Apr 20 20:25:24.458129 systemd[1]: Started sshd@51-8218-10.0.0.6:22-10.0.0.1:52076.service - OpenSSH per-connection server daemon (10.0.0.1:52076).
Apr 20 20:25:29.896292 kubelet[3176]: E0420 20:25:29.801420 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.087s"
Apr 20 20:25:31.507963 kubelet[3176]: E0420 20:25:31.423168 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.361s"
Apr 20 20:25:32.059097 sshd[8232]: Accepted publickey for core from 10.0.0.1 port 52076 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:25:32.188155 sshd-session[8232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:25:33.664200 systemd-logind[1609]: New session '53' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:25:34.339264 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 20 20:25:42.610215 kubelet[3176]: E0420 20:25:42.609044 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.958s"
Apr 20 20:25:43.185897 kubelet[3176]: E0420 20:25:43.183078 3176 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"
Apr 20 20:25:43.216192 containerd[1640]: time="2026-04-20T20:25:43.181160623Z" level=error msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" to be killed: wait container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\": context deadline exceeded"
Apr 20 20:25:43.526164 kubelet[3176]: E0420 20:25:43.287065 3176 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" containerName="kube-scheduler" containerID="containerd://37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238" gracePeriod=30
Apr 20 20:25:43.596284 kubelet[3176]: E0420 20:25:43.571531 3176 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"} pod="kube-system/kube-scheduler-localhost"
Apr 20 20:25:43.687966 kubelet[3176]: E0420 20:25:43.633093 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 20:25:51.452759 containerd[1640]: time="2026-04-20T20:25:51.451796733Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" with timeout 30 (s)"
Apr 20 20:25:51.836968 containerd[1640]: time="2026-04-20T20:25:51.832543874Z" level=info msg="Skipping the sending of signal terminated to container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:25:52.276745 kubelet[3176]: E0420 20:25:52.275815 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.54s"
Apr 20 20:25:52.346790 sshd[8260]: Connection closed by 10.0.0.1 port 52076
Apr 20 20:25:52.370540 sshd-session[8232]: pam_unix(sshd:session): session closed for user core
Apr 20 20:25:52.495097 kubelet[3176]: E0420 20:25:52.433989 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:25:52.525208 systemd[1]: sshd@51-8218-10.0.0.6:22-10.0.0.1:52076.service: Deactivated successfully.
Apr 20 20:25:52.551083 systemd[1]: sshd@51-8218-10.0.0.6:22-10.0.0.1:52076.service: Consumed 2.107s CPU time, 4.2M memory peak.
Apr 20 20:25:52.788682 containerd[1640]: time="2026-04-20T20:25:52.782892365Z" level=info msg="container event discarded" container=9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967 type=CONTAINER_STARTED_EVENT
Apr 20 20:25:52.834275 systemd[1]: session-53.scope: Deactivated successfully.
Apr 20 20:25:52.990824 kubelet[3176]: E0420 20:25:52.874838 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:25:52.838232 systemd[1]: session-53.scope: Consumed 10.711s CPU time, 15.6M memory peak.
Apr 20 20:25:53.088288 systemd-logind[1609]: Session 53 logged out. Waiting for processes to exit.
Apr 20 20:25:53.193626 kubelet[3176]: E0420 20:25:53.188978 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:25:53.330076 systemd-logind[1609]: Removed session 53.
Apr 20 20:25:54.294238 kubelet[3176]: E0420 20:25:54.293689 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.944s"
Apr 20 20:25:54.509856 kubelet[3176]: E0420 20:25:54.364831 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:25:55.567718 kubelet[3176]: E0420 20:25:55.564670 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.26s"
Apr 20 20:25:58.287830 systemd[1]: Started sshd@52-10-10.0.0.6:22-10.0.0.1:48220.service - OpenSSH per-connection server daemon (10.0.0.1:48220).
Apr 20 20:25:59.140580 kubelet[3176]: E0420 20:25:59.139973 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.458s"
Apr 20 20:26:01.001425 sshd[8335]: Accepted publickey for core from 10.0.0.1 port 48220 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:26:01.230267 sshd-session[8335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:26:02.469428 systemd-logind[1609]: New session '54' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:26:02.764132 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 20 20:26:03.904021 kubelet[3176]: E0420 20:26:03.870129 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.136s"
Apr 20 20:26:12.213312 kubelet[3176]: E0420 20:26:12.210311 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.289s"
Apr 20 20:26:19.177661 sshd[8348]: Connection closed by 10.0.0.1 port 48220
Apr 20 20:26:19.267196 sshd-session[8335]: pam_unix(sshd:session): session closed for user core
Apr 20 20:26:19.947210 systemd[1]: sshd@52-10-10.0.0.6:22-10.0.0.1:48220.service: Deactivated successfully.
Apr 20 20:26:19.997134 systemd[1]: sshd@52-10-10.0.0.6:22-10.0.0.1:48220.service: Consumed 1.082s CPU time, 4.4M memory peak.
Apr 20 20:26:20.391004 systemd[1]: session-54.scope: Deactivated successfully.
Apr 20 20:26:20.486680 systemd[1]: session-54.scope: Consumed 8.706s CPU time, 14.2M memory peak.
Apr 20 20:26:20.796249 systemd-logind[1609]: Session 54 logged out. Waiting for processes to exit.
Apr 20 20:26:21.250121 systemd-logind[1609]: Removed session 54.
Apr 20 20:26:22.016820 containerd[1640]: time="2026-04-20T20:26:22.013499413Z" level=info msg="Kill container \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\""
Apr 20 20:26:23.689801 containerd[1640]: time="2026-04-20T20:26:23.667668166Z" level=info msg="TaskExit event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502}"
Apr 20 20:26:26.766290 systemd[1]: Started sshd@53-8219-10.0.0.6:22-10.0.0.1:37156.service - OpenSSH per-connection server daemon (10.0.0.1:37156).
Apr 20 20:26:28.064921 kubelet[3176]: E0420 20:26:28.063526 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.7s"
Apr 20 20:26:33.534193 containerd[1640]: time="2026-04-20T20:26:33.533269823Z" level=error msg="ttrpc: received message on inactive stream" stream=143
Apr 20 20:26:33.760115 containerd[1640]: time="2026-04-20T20:26:33.544614244Z" level=error msg="ttrpc: received message on inactive stream" stream=141
Apr 20 20:26:33.935710 containerd[1640]: time="2026-04-20T20:26:33.860026955Z" level=error msg="Failed to handle backOff event container_id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" id:\"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" pid:7320 exit_status:1 exited_at:{seconds:1776716575 nanos:120034502} for 9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 20:26:34.136835 sshd[8415]: Accepted publickey for core from 10.0.0.1 port 37156 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:26:34.341191 sshd-session[8415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:26:36.027494 systemd-logind[1609]: New session '55' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:26:36.898374 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 20 20:26:38.235020 kubelet[3176]: E0420 20:26:38.233374 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.948s"
Apr 20 20:26:39.486779 containerd[1640]: time="2026-04-20T20:26:39.398946272Z" level=error msg="StopContainer for \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" to be killed: wait container \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\": context deadline exceeded"
Apr 20 20:26:40.061906 kubelet[3176]: E0420 20:26:39.842182 3176 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" containerID="9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967"
Apr 20 20:26:40.367262 kubelet[3176]: E0420 20:26:40.138322 3176 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" containerName="kube-controller-manager" containerID="containerd://9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967" gracePeriod=30
Apr 20 20:26:40.535100 kubelet[3176]: E0420 20:26:40.476949 3176 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967"} pod="kube-system/kube-controller-manager-localhost"
Apr 20 20:26:41.379146 kubelet[3176]: E0420 20:26:41.373191 3176 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 20:26:46.087214 kubelet[3176]: E0420 20:26:46.085081 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.845s"
Apr 20 20:26:47.093906 kubelet[3176]: E0420 20:26:47.089105 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:26:47.191988 containerd[1640]: time="2026-04-20T20:26:47.190784860Z" level=info msg="StopContainer for \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" with timeout 30 (s)"
Apr 20 20:26:47.290024 containerd[1640]: time="2026-04-20T20:26:47.282247528Z" level=info msg="Skipping the sending of signal terminated to container \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:26:50.721065 sshd[8449]: Connection closed by 10.0.0.1 port 37156
Apr 20 20:26:50.773178 sshd-session[8415]: pam_unix(sshd:session): session closed for user core
Apr 20 20:26:51.388852 systemd[1]: sshd@53-8219-10.0.0.6:22-10.0.0.1:37156.service: Deactivated successfully.
Apr 20 20:26:51.462012 systemd[1]: sshd@53-8219-10.0.0.6:22-10.0.0.1:37156.service: Consumed 1.647s CPU time, 4.1M memory peak.
Apr 20 20:26:51.963425 systemd[1]: session-55.scope: Deactivated successfully.
Apr 20 20:26:52.026083 systemd[1]: session-55.scope: Consumed 7.920s CPU time, 16M memory peak.
Apr 20 20:26:52.568066 systemd-logind[1609]: Session 55 logged out. Waiting for processes to exit.
Apr 20 20:26:53.263999 systemd-logind[1609]: Removed session 55.
Apr 20 20:26:55.678103 kubelet[3176]: E0420 20:26:55.674153 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.407s"
Apr 20 20:26:58.935556 systemd[1]: Started sshd@54-11-10.0.0.6:22-10.0.0.1:34300.service - OpenSSH per-connection server daemon (10.0.0.1:34300).
Apr 20 20:27:03.392861 sshd[8511]: Accepted publickey for core from 10.0.0.1 port 34300 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:27:03.982182 sshd-session[8511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:27:04.094015 kubelet[3176]: E0420 20:27:04.093532 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.177s"
Apr 20 20:27:04.222432 systemd-logind[1609]: New session '56' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:27:04.247751 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 20 20:27:05.189833 kubelet[3176]: E0420 20:27:05.186330 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s"
Apr 20 20:27:05.432534 kubelet[3176]: E0420 20:27:05.432248 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:27:05.884904 kubelet[3176]: E0420 20:27:05.866969 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:27:07.984599 sshd[8538]: Connection closed by 10.0.0.1 port 34300
Apr 20 20:27:07.990300 sshd-session[8511]: pam_unix(sshd:session): session closed for user core
Apr 20 20:27:08.026760 systemd[1]: sshd@54-11-10.0.0.6:22-10.0.0.1:34300.service: Deactivated successfully.
Apr 20 20:27:08.036280 systemd[1]: sshd@54-11-10.0.0.6:22-10.0.0.1:34300.service: Consumed 2.074s CPU time, 4.1M memory peak.
Apr 20 20:27:08.161306 systemd[1]: session-56.scope: Deactivated successfully.
Apr 20 20:27:08.166762 systemd[1]: session-56.scope: Consumed 3.109s CPU time, 16.3M memory peak.
Apr 20 20:27:08.213297 systemd-logind[1609]: Session 56 logged out. Waiting for processes to exit.
Apr 20 20:27:08.224173 systemd-logind[1609]: Removed session 56.
Apr 20 20:27:09.002052 containerd[1640]: time="2026-04-20T20:27:09.001298941Z" level=info msg="TaskExit event container_id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" id:\"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" pid:3006 exit_status:1 exited_at:{seconds:1776715880 nanos:848322378}"
Apr 20 20:27:09.259773 containerd[1640]: time="2026-04-20T20:27:09.258750688Z" level=info msg="StopContainer for \"37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238\" returns successfully"
Apr 20 20:27:09.260502 kubelet[3176]: E0420 20:27:09.260272 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:27:09.397087 containerd[1640]: time="2026-04-20T20:27:09.396677885Z" level=info msg="CreateContainer within sandbox \"2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b\" for container name:\"kube-scheduler\" attempt:1"
Apr 20 20:27:09.452393 containerd[1640]: time="2026-04-20T20:27:09.451871323Z" level=info msg="Container 0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb: CDI devices from CRI Config.CDIDevices: []"
Apr 20 20:27:09.529986 containerd[1640]: time="2026-04-20T20:27:09.528043748Z" level=info msg="CreateContainer within sandbox \"2893dd2616135de175ff631c13d6f1271fb5b80119744fd45f4cb5ca2ecb3a9b\" for name:\"kube-scheduler\" attempt:1 returns container id \"0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb\""
Apr 20 20:27:09.538554 containerd[1640]: time="2026-04-20T20:27:09.537978550Z" level=info msg="StartContainer for \"0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb\""
Apr 20 20:27:09.563953 containerd[1640]: time="2026-04-20T20:27:09.562852685Z" level=info msg="connecting to shim 0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb" address="unix:///run/containerd/s/3f69fcb977b4c2b3ed2669a33314ccaae5bf4bf12700f043b9bbf852eee3e02f" protocol=ttrpc version=3
Apr 20 20:27:09.673369 kubelet[3176]: E0420 20:27:09.672929 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:27:09.994391 systemd[1]: Started cri-containerd-0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb.scope - libcontainer container 0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb.
Apr 20 20:27:12.299881 containerd[1640]: time="2026-04-20T20:27:12.295133494Z" level=info msg="StartContainer for \"0b1280b80edeb75b83221d638b0821ba7fc44b5e563a004999c619dcd896a7bb\" returns successfully"
Apr 20 20:27:13.820624 systemd[1]: Started sshd@55-8220-10.0.0.6:22-10.0.0.1:41424.service - OpenSSH per-connection server daemon (10.0.0.1:41424).
Apr 20 20:27:16.367993 sshd[8618]: Accepted publickey for core from 10.0.0.1 port 41424 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:27:16.656107 kubelet[3176]: E0420 20:27:16.643110 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:27:16.991870 sshd-session[8618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:27:17.445305 containerd[1640]: time="2026-04-20T20:27:17.443066253Z" level=info msg="Kill container \"9c99966fe9970e21cd8970c4aaf04f8fb0a8a35c21791fb2c06ae2c692982967\""
Apr 20 20:27:18.390131 systemd-logind[1609]: New session '57' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:27:18.813172 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 20 20:27:27.460190 sshd[8655]: Connection closed by 10.0.0.1 port 41424
Apr 20 20:27:27.482572 sshd-session[8618]: pam_unix(sshd:session): session closed for user core
Apr 20 20:27:28.062990 systemd[1]: sshd@55-8220-10.0.0.6:22-10.0.0.1:41424.service: Deactivated successfully.
Apr 20 20:27:28.186221 systemd[1]: sshd@55-8220-10.0.0.6:22-10.0.0.1:41424.service: Consumed 1.036s CPU time, 4.3M memory peak.
Apr 20 20:27:28.572276 systemd[1]: session-57.scope: Deactivated successfully.
Apr 20 20:27:28.643238 systemd[1]: session-57.scope: Consumed 5.725s CPU time, 16.1M memory peak.
Apr 20 20:27:28.852927 systemd-logind[1609]: Session 57 logged out. Waiting for processes to exit.
Apr 20 20:27:29.167029 systemd-logind[1609]: Removed session 57.
Apr 20 20:27:34.663683 systemd[1]: Started sshd@56-12-10.0.0.6:22-10.0.0.1:42000.service - OpenSSH per-connection server daemon (10.0.0.1:42000).
Apr 20 20:27:35.434257 kubelet[3176]: E0420 20:27:35.405057 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:27:36.713987 kubelet[3176]: E0420 20:27:36.555154 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="22.732s"
Apr 20 20:27:41.188053 sshd[8698]: Accepted publickey for core from 10.0.0.1 port 42000 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:27:41.562239 sshd-session[8698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:27:43.917585 systemd-logind[1609]: New session '58' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:27:44.299283 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 20 20:27:51.931722 kubelet[3176]: E0420 20:27:51.927795 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.684s"
Apr 20 20:27:53.476695 kubelet[3176]: E0420 20:27:53.469297 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.534s"
Apr 20 20:27:53.658683 kubelet[3176]: I0420 20:27:53.606221 3176 status_manager.go:462] "Container readiness changed for unknown container" pod="kube-system/kube-scheduler-localhost" containerID="containerd://37048ce4ab3df3a2d4a897903a56366f0eddba041f8139b5370fd312170d9238"
Apr 20 20:27:54.761055 sshd[8716]: Connection closed by 10.0.0.1 port 42000
Apr 20 20:27:54.860591 sshd-session[8698]: pam_unix(sshd:session): session closed for user core
Apr 20 20:27:55.162386 systemd[1]: sshd@56-12-10.0.0.6:22-10.0.0.1:42000.service: Deactivated successfully.
Apr 20 20:27:55.163402 systemd[1]: sshd@56-12-10.0.0.6:22-10.0.0.1:42000.service: Consumed 2.268s CPU time, 4.1M memory peak.
Apr 20 20:27:55.583171 systemd[1]: session-58.scope: Deactivated successfully.
Apr 20 20:27:55.631006 systemd[1]: session-58.scope: Consumed 6.490s CPU time, 16M memory peak.
Apr 20 20:27:56.189191 systemd-logind[1609]: Session 58 logged out. Waiting for processes to exit.
Apr 20 20:27:56.659523 systemd-logind[1609]: Removed session 58.
Apr 20 20:27:59.494795 kubelet[3176]: E0420 20:27:59.492198 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:28:00.286149 kubelet[3176]: E0420 20:28:00.283216 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.014s"
Apr 20 20:28:00.632362 systemd[1]: Started sshd@57-8221-10.0.0.6:22-10.0.0.1:39122.service - OpenSSH per-connection server daemon (10.0.0.1:39122).
Apr 20 20:28:00.977012 kubelet[3176]: E0420 20:28:00.959253 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:28:02.540690 kubelet[3176]: E0420 20:28:02.535141 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.188s"
Apr 20 20:28:02.984253 kubelet[3176]: E0420 20:28:02.982840 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:28:03.051441 kubelet[3176]: E0420 20:28:03.050460 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:28:03.696378 sshd[8764]: Accepted publickey for core from 10.0.0.1 port 39122 ssh2: RSA SHA256:6LFBX1y/KMd5HbH+HLq9TQlSkwszEIqAV5PH+tmKm7M
Apr 20 20:28:04.045991 sshd-session[8764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:28:05.860205 systemd-logind[1609]: New session '59' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:28:05.943897 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 20 20:28:07.033084 kubelet[3176]: E0420 20:28:07.031192 3176 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:28:08.573574 kubelet[3176]: E0420 20:28:08.569749 3176 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.592s"
Apr 20 20:28:09.853094 sshd[8786]: Connection closed by 10.0.0.1 port 39122
Apr 20 20:28:09.856711 sshd-session[8764]: pam_unix(sshd:session): session closed for user core
Apr 20 20:28:09.863762 systemd[1]: sshd@57-8221-10.0.0.6:22-10.0.0.1:39122.service: Deactivated successfully.
Apr 20 20:28:09.903424 systemd[1]: sshd@57-8221-10.0.0.6:22-10.0.0.1:39122.service: Consumed 1.357s CPU time, 4.1M memory peak.
Apr 20 20:28:10.236829 systemd[1]: session-59.scope: Deactivated successfully.
Apr 20 20:28:10.257168 systemd[1]: session-59.scope: Consumed 2.528s CPU time, 16M memory peak.
Apr 20 20:28:10.460680 systemd-logind[1609]: Session 59 logged out. Waiting for processes to exit.
Apr 20 20:28:10.488629 systemd-logind[1609]: Removed session 59.