Apr 20 16:02:54.191645 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 16:02:54.191702 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 16:02:54.191712 kernel: BIOS-provided physical RAM map:
Apr 20 16:02:54.191720 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 20 16:02:54.191729 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 20 16:02:54.191736 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 20 16:02:54.191747 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 20 16:02:54.191781 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 20 16:02:54.191792 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 20 16:02:54.191799 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 20 16:02:54.191809 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 20 16:02:54.191836 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 20 16:02:54.191844 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 20 16:02:54.191851 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 20 16:02:54.191866 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 20 16:02:54.191874 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 20 16:02:54.191881 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 20 16:02:54.191889 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 20 16:02:54.191897 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 20 16:02:54.191907 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 20 16:02:54.191916 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 20 16:02:54.191923 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 20 16:02:54.191930 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 16:02:54.191940 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 16:02:54.191950 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 20 16:02:54.191960 kernel: NX (Execute Disable) protection: active
Apr 20 16:02:54.191967 kernel: APIC: Static calls initialized
Apr 20 16:02:54.191975 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 20 16:02:54.191985 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 20 16:02:54.191995 kernel: extended physical RAM map:
Apr 20 16:02:54.192003 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 20 16:02:54.192011 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 20 16:02:54.192018 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 20 16:02:54.192028 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 20 16:02:54.192035 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 20 16:02:54.192045 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 20 16:02:54.192055 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 20 16:02:54.192062 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 20 16:02:54.192069 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 20 16:02:54.192076 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 20 16:02:54.192090 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 20 16:02:54.192125 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 20 16:02:54.192135 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 20 16:02:54.192146 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 20 16:02:54.192586 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 20 16:02:54.192599 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 20 16:02:54.192610 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 20 16:02:54.192668 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 20 16:02:54.192697 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 20 16:02:54.192725 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 20 16:02:54.192733 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 20 16:02:54.192740 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 20 16:02:54.192748 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 20 16:02:54.192781 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 16:02:54.192794 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 16:02:54.192827 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 20 16:02:54.192836 kernel: efi: EFI v2.7 by EDK II
Apr 20 16:02:54.192928 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 20 16:02:54.192975 kernel: random: crng init done
Apr 20 16:02:54.193007 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 20 16:02:54.193555 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 20 16:02:54.193632 kernel: secureboot: Secure boot disabled
Apr 20 16:02:54.193687 kernel: SMBIOS 2.8 present.
Apr 20 16:02:54.193727 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 20 16:02:54.193754 kernel: DMI: Memory slots populated: 1/1
Apr 20 16:02:54.193803 kernel: Hypervisor detected: KVM
Apr 20 16:02:54.193829 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 20 16:02:54.193874 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 16:02:54.193924 kernel: kvm-clock: using sched offset of 9971293291 cycles
Apr 20 16:02:54.193952 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 16:02:54.194013 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 16:02:54.194039 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 16:02:54.194080 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 16:02:54.194120 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 20 16:02:54.194140 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 20 16:02:54.194250 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 16:02:54.194258 kernel: Using GB pages for direct mapping
Apr 20 16:02:54.194264 kernel: ACPI: Early table checksum verification disabled
Apr 20 16:02:54.194270 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 20 16:02:54.194290 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 20 16:02:54.194314 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 16:02:54.194337 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 16:02:54.194361 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 20 16:02:54.194388 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 16:02:54.194409 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 16:02:54.194430 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 16:02:54.194450 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 16:02:54.194456 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 20 16:02:54.194478 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 20 16:02:54.194499 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 20 16:02:54.194520 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 20 16:02:54.194540 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 20 16:02:54.194560 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 20 16:02:54.194580 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 20 16:02:54.194600 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 20 16:02:54.194621 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 20 16:02:54.194641 kernel: No NUMA configuration found
Apr 20 16:02:54.194647 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 20 16:02:54.194668 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 20 16:02:54.194688 kernel: Zone ranges:
Apr 20 16:02:54.194709 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 16:02:54.194729 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 20 16:02:54.194764 kernel: Normal empty
Apr 20 16:02:54.194770 kernel: Device empty
Apr 20 16:02:54.194814 kernel: Movable zone start for each node
Apr 20 16:02:54.194839 kernel: Early memory node ranges
Apr 20 16:02:54.194845 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 20 16:02:54.194851 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 20 16:02:54.194857 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 20 16:02:54.194863 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 20 16:02:54.194886 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 20 16:02:54.194907 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 20 16:02:54.194913 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 20 16:02:54.194918 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 20 16:02:54.194945 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 20 16:02:54.194989 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 16:02:54.195043 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 20 16:02:54.195072 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 20 16:02:54.195232 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 16:02:54.195260 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 20 16:02:54.195272 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 20 16:02:54.195303 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 20 16:02:54.195350 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 20 16:02:54.195379 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 20 16:02:54.195427 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 16:02:54.195474 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 16:02:54.195506 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 16:02:54.195554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 16:02:54.195579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 16:02:54.195627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 16:02:54.195655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 16:02:54.195706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 16:02:54.195754 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 16:02:54.195783 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 16:02:54.195811 kernel: TSC deadline timer available
Apr 20 16:02:54.195839 kernel: CPU topo: Max. logical packages: 1
Apr 20 16:02:54.195848 kernel: CPU topo: Max. logical dies: 1
Apr 20 16:02:54.195893 kernel: CPU topo: Max. dies per package: 1
Apr 20 16:02:54.195943 kernel: CPU topo: Max. threads per core: 1
Apr 20 16:02:54.195971 kernel: CPU topo: Num. cores per package: 4
Apr 20 16:02:54.196002 kernel: CPU topo: Num. threads per package: 4
Apr 20 16:02:54.196012 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 16:02:54.196021 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 16:02:54.196031 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 16:02:54.196040 kernel: kvm-guest: setup PV sched yield
Apr 20 16:02:54.196049 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 20 16:02:54.196060 kernel: Booting paravirtualized kernel on KVM
Apr 20 16:02:54.196069 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 16:02:54.196078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 16:02:54.196087 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 16:02:54.196471 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 16:02:54.196552 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 16:02:54.196578 kernel: kvm-guest: PV spinlocks enabled
Apr 20 16:02:54.196610 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 16:02:54.196661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 16:02:54.196709 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 16:02:54.196755 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 16:02:54.196797 kernel: Fallback order for Node 0: 0
Apr 20 16:02:54.196808 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 20 16:02:54.196859 kernel: Policy zone: DMA32
Apr 20 16:02:54.196869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 16:02:54.196899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 16:02:54.196926 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 16:02:54.196935 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 16:02:54.196965 kernel: Dynamic Preempt: voluntary
Apr 20 16:02:54.196990 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 16:02:54.197003 kernel: rcu: RCU event tracing is enabled.
Apr 20 16:02:54.197012 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 16:02:54.197021 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 16:02:54.197030 kernel: Rude variant of Tasks RCU enabled.
Apr 20 16:02:54.197040 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 16:02:54.197049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 16:02:54.197058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 16:02:54.197068 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 16:02:54.197415 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 16:02:54.197450 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 16:02:54.197459 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 16:02:54.197469 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 16:02:54.197498 kernel: Console: colour dummy device 80x25
Apr 20 16:02:54.197508 kernel: printk: legacy console [ttyS0] enabled
Apr 20 16:02:54.197517 kernel: ACPI: Core revision 20240827
Apr 20 16:02:54.197533 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 16:02:54.197541 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 16:02:54.197550 kernel: x2apic enabled
Apr 20 16:02:54.197559 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 16:02:54.197568 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 16:02:54.197578 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 16:02:54.197587 kernel: kvm-guest: setup PV IPIs
Apr 20 16:02:54.197598 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 16:02:54.197607 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 16:02:54.197616 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 16:02:54.197624 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 16:02:54.197634 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 16:02:54.197643 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 16:02:54.197652 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 16:02:54.197663 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 16:02:54.197672 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 16:02:54.197706 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 16:02:54.197716 kernel: RETBleed: Vulnerable
Apr 20 16:02:54.197725 kernel: Speculative Store Bypass: Vulnerable
Apr 20 16:02:54.197734 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 16:02:54.197742 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 16:02:54.197754 kernel: active return thunk: its_return_thunk
Apr 20 16:02:54.197763 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 16:02:54.197773 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 16:02:54.197782 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 16:02:54.197790 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 16:02:54.197799 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 16:02:54.197808 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 16:02:54.197819 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 16:02:54.197828 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 16:02:54.197838 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 16:02:54.197847 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 16:02:54.197856 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 16:02:54.197866 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 16:02:54.197875 kernel: Freeing SMP alternatives memory: 32K
Apr 20 16:02:54.197886 kernel: pid_max: default: 32768 minimum: 301
Apr 20 16:02:54.197894 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 16:02:54.197903 kernel: landlock: Up and running.
Apr 20 16:02:54.197912 kernel: SELinux: Initializing.
Apr 20 16:02:54.197920 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 16:02:54.197931 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 16:02:54.197940 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 16:02:54.197951 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 16:02:54.197960 kernel: signal: max sigframe size: 3632
Apr 20 16:02:54.197969 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 16:02:54.197978 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 16:02:54.197986 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 16:02:54.197996 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 16:02:54.198006 kernel: smp: Bringing up secondary CPUs ...
Apr 20 16:02:54.198017 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 16:02:54.198026 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 16:02:54.198034 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 16:02:54.198068 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 16:02:54.198080 kernel: Memory: 2399268K/2565800K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 160636K reserved, 0K cma-reserved)
Apr 20 16:02:54.198089 kernel: devtmpfs: initialized
Apr 20 16:02:54.198098 kernel: x86/mm: Memory block size: 128MB
Apr 20 16:02:54.198109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 20 16:02:54.198118 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 20 16:02:54.198127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 20 16:02:54.198136 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 20 16:02:54.198145 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 20 16:02:54.198514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 20 16:02:54.198525 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 16:02:54.198540 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 16:02:54.198550 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 16:02:54.198560 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 16:02:54.198569 kernel: audit: initializing netlink subsys (disabled)
Apr 20 16:02:54.198579 kernel: audit: type=2000 audit(1776700965.930:1): state=initialized audit_enabled=0 res=1
Apr 20 16:02:54.198588 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 16:02:54.198598 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 16:02:54.198610 kernel: cpuidle: using governor menu
Apr 20 16:02:54.198619 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 20 16:02:54.198628 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 16:02:54.198636 kernel: dca service started, version 1.12.1
Apr 20 16:02:54.198645 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 20 16:02:54.198654 kernel: PCI: Using configuration type 1 for base access
Apr 20 16:02:54.198663 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 16:02:54.198674 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 16:02:54.198684 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 16:02:54.198693 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 16:02:54.198702 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 16:02:54.198711 kernel: ACPI: Added _OSI(Module Device)
Apr 20 16:02:54.198721 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 16:02:54.198729 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 16:02:54.198740 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 16:02:54.198749 kernel: ACPI: Interpreter enabled
Apr 20 16:02:54.198758 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 20 16:02:54.198766 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 16:02:54.198775 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 16:02:54.198785 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 16:02:54.198794 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 16:02:54.198806 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 16:02:54.199643 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 16:02:54.199810 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 16:02:54.199950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 16:02:54.199962 kernel: PCI host bridge to bus 0000:00
Apr 20 16:02:54.200108 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 16:02:54.200322 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 16:02:54.200447 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 16:02:54.200565 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 20 16:02:54.200689 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 20 16:02:54.200824 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 20 16:02:54.200958 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 16:02:54.201117 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 16:02:54.201344 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 16:02:54.201486 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 20 16:02:54.201627 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 20 16:02:54.201768 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 20 16:02:54.201912 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 16:02:54.202060 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 16:02:54.202296 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 20 16:02:54.202443 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 20 16:02:54.202624 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 20 16:02:54.202768 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 16:02:54.202912 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 20 16:02:54.203053 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 20 16:02:54.203847 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 20 16:02:54.204028 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 16:02:54.204792 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 20 16:02:54.204993 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 20 16:02:54.205135 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 20 16:02:54.205316 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 20 16:02:54.205420 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 16:02:54.205517 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 16:02:54.205624 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 16:02:54.205724 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 20 16:02:54.205820 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 20 16:02:54.205921 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 16:02:54.206017 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 20 16:02:54.206026 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 16:02:54.206036 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 16:02:54.206042 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 16:02:54.206049 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 16:02:54.206055 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 16:02:54.206061 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 16:02:54.206068 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 16:02:54.206074 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 16:02:54.206080 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 16:02:54.206088 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 16:02:54.206094 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 16:02:54.206101 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 16:02:54.206107 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 16:02:54.206113 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 16:02:54.206119 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 16:02:54.206126 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 16:02:54.206133 kernel: iommu: Default domain type: Translated
Apr 20 16:02:54.206140 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 16:02:54.206146 kernel: efivars: Registered efivars operations
Apr 20 16:02:54.206687 kernel: PCI: Using ACPI for IRQ routing
Apr 20 16:02:54.206697 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 16:02:54.206708 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 20 16:02:54.206717 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 20 16:02:54.206744 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 20 16:02:54.206754 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 20 16:02:54.206763 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 20 16:02:54.206771 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 20 16:02:54.206781 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 20 16:02:54.206789 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 20 16:02:54.207078 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 16:02:54.207829 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 16:02:54.207999 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 16:02:54.208013 kernel: vgaarb: loaded
Apr 20 16:02:54.208022 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 16:02:54.208030 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 16:02:54.208039 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 16:02:54.208048 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 16:02:54.208057 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 16:02:54.208070 kernel: pnp: PnP ACPI init
Apr 20 16:02:54.209788 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 20 16:02:54.209810 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 16:02:54.209821 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 16:02:54.209849 kernel: NET: Registered PF_INET protocol family
Apr 20 16:02:54.209860 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 16:02:54.209922 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 16:02:54.209938 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 16:02:54.209948 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 16:02:54.209980 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 16:02:54.209990 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 16:02:54.210023 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 16:02:54.210032 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 16:02:54.210042 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 16:02:54.210055 kernel: NET: Registered PF_XDP protocol family
Apr 20 16:02:54.210312 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 20 16:02:54.210506 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 20 16:02:54.210643 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 16:02:54.210774 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 16:02:54.210900 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 16:02:54.211032 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 20 16:02:54.211272 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 20 16:02:54.211407 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 20 16:02:54.211420 kernel: PCI: CLS 0 bytes, default 64
Apr 20 16:02:54.211430 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 16:02:54.211440 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 16:02:54.211450 kernel: Initialise system trusted keyrings
Apr 20 16:02:54.211464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 16:02:54.211475 kernel: Key type asymmetric registered
Apr 20 16:02:54.211485 kernel: Asymmetric key parser 'x509' registered
Apr 20 16:02:54.211494 kernel: hrtimer: interrupt took 3393754 ns
Apr 20 16:02:54.211506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 16:02:54.211516 kernel: io scheduler mq-deadline registered
Apr 20 16:02:54.211526 kernel: io scheduler kyber registered
Apr 20 16:02:54.211536 kernel: io scheduler bfq registered
Apr 20 16:02:54.211546 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 16:02:54.211557 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 16:02:54.211567 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 16:02:54.211578 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 16:02:54.211588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 16:02:54.211598 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 16:02:54.211608 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 16:02:54.211619 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 16:02:54.211627 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 16:02:54.211637 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 20 16:02:54.211783 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 16:02:54.211976 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 16:02:54.212108 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T16:02:49 UTC (1776700969)
Apr 20 16:02:54.212342 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 20 16:02:54.212356 kernel: intel_pstate: CPU model not supported
Apr 20 16:02:54.212366 kernel: efifb: probing for efifb
Apr 20 16:02:54.212376 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 20 16:02:54.212389 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 20 16:02:54.212401 kernel: efifb: scrolling: redraw
Apr 20 16:02:54.212411 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 20 16:02:54.212421 kernel: Console: switching to colour frame buffer device 160x50
Apr 20 16:02:54.212430 kernel: fb0: EFI VGA frame buffer device
Apr 20 16:02:54.212439 kernel: pstore: Using crash dump compression: deflate
Apr 20 16:02:54.212448 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 20 16:02:54.212460 kernel: NET: Registered PF_INET6 protocol family
Apr 20 16:02:54.212470 kernel: Segment Routing with IPv6
Apr 20 16:02:54.212480 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 16:02:54.212489 kernel: NET: Registered PF_PACKET protocol family
Apr 20 16:02:54.212498 kernel: Key type dns_resolver registered
Apr 20 16:02:54.212507 kernel: IPI shorthand broadcast: enabled
Apr 20 16:02:54.212516 kernel: sched_clock: Marking stable (4579086452, 1359144078)->(6691019965, -752789435)
Apr 20 16:02:54.212528 kernel: registered taskstats version 1
Apr 20 16:02:54.212537 kernel: Loading compiled-in X.509 certificates
Apr 20 16:02:54.212547 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing
key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b' Apr 20 16:02:54.212556 kernel: Demotion targets for Node 0: null Apr 20 16:02:54.212565 kernel: Key type .fscrypt registered Apr 20 16:02:54.212573 kernel: Key type fscrypt-provisioning registered Apr 20 16:02:54.212583 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 20 16:02:54.212594 kernel: ima: Allocated hash algorithm: sha1 Apr 20 16:02:54.212608 kernel: ima: No architecture policies found Apr 20 16:02:54.212617 kernel: clk: Disabling unused clocks Apr 20 16:02:54.212626 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 20 16:02:54.212635 kernel: Write protecting the kernel read-only data: 47104k Apr 20 16:02:54.212644 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 20 16:02:54.212653 kernel: Run /init as init process Apr 20 16:02:54.212662 kernel: with arguments: Apr 20 16:02:54.212674 kernel: /init Apr 20 16:02:54.212683 kernel: with environment: Apr 20 16:02:54.212692 kernel: HOME=/ Apr 20 16:02:54.212701 kernel: TERM=linux Apr 20 16:02:54.212709 kernel: SCSI subsystem initialized Apr 20 16:02:54.212718 kernel: libata version 3.00 loaded. 
Apr 20 16:02:54.212867 kernel: ahci 0000:00:1f.2: version 3.0 Apr 20 16:02:54.212884 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 20 16:02:54.213022 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 20 16:02:54.213269 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 20 16:02:54.213411 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 20 16:02:54.213559 kernel: scsi host0: ahci Apr 20 16:02:54.213714 kernel: scsi host1: ahci Apr 20 16:02:54.213872 kernel: scsi host2: ahci Apr 20 16:02:54.214025 kernel: scsi host3: ahci Apr 20 16:02:54.214816 kernel: scsi host4: ahci Apr 20 16:02:54.214993 kernel: scsi host5: ahci Apr 20 16:02:54.215007 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 20 16:02:54.215022 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 20 16:02:54.215032 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 20 16:02:54.215042 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 20 16:02:54.215052 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 20 16:02:54.215061 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 20 16:02:54.215070 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 20 16:02:54.215079 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 20 16:02:54.215091 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 20 16:02:54.215100 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 20 16:02:54.215110 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 20 16:02:54.215120 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 16:02:54.215130 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 20 16:02:54.215139 kernel: ata3.00: applying bridge limits Apr 20 16:02:54.215147 kernel: ata6: 
SATA link down (SStatus 0 SControl 300) Apr 20 16:02:54.215228 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 16:02:54.215237 kernel: ata3.00: configured for UDMA/100 Apr 20 16:02:54.215409 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 20 16:02:54.216679 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 20 16:02:54.216806 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 20 16:02:54.216816 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 20 16:02:54.216827 kernel: GPT:16515071 != 27000831 Apr 20 16:02:54.216834 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 20 16:02:54.216841 kernel: GPT:16515071 != 27000831 Apr 20 16:02:54.216848 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 20 16:02:54.216855 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 20 16:02:54.216973 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 20 16:02:54.216983 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 20 16:02:54.217093 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 20 16:02:54.217102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 20 16:02:54.217109 kernel: device-mapper: uevent: version 1.0.3 Apr 20 16:02:54.217116 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 20 16:02:54.217123 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 20 16:02:54.217129 kernel: raid6: avx512x4 gen() 29589 MB/s Apr 20 16:02:54.217136 kernel: raid6: avx512x2 gen() 24722 MB/s Apr 20 16:02:54.217144 kernel: raid6: avx512x1 gen() 5185 MB/s Apr 20 16:02:54.217230 kernel: raid6: avx2x4 gen() 7388 MB/s Apr 20 16:02:54.217237 kernel: raid6: avx2x2 gen() 25765 MB/s Apr 20 16:02:54.217244 kernel: raid6: avx2x1 gen() 21644 MB/s Apr 20 16:02:54.217250 kernel: raid6: using algorithm avx512x4 gen() 29589 MB/s Apr 20 16:02:54.217257 kernel: raid6: .... xor() 975 MB/s, rmw enabled Apr 20 16:02:54.217264 kernel: raid6: using avx512x2 recovery algorithm Apr 20 16:02:54.217273 kernel: xor: automatically using best checksumming function avx Apr 20 16:02:54.217280 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 20 16:02:54.217286 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181) Apr 20 16:02:54.217293 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f Apr 20 16:02:54.217300 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 20 16:02:54.217306 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 20 16:02:54.217313 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 20 16:02:54.217321 kernel: loop: module loaded Apr 20 16:02:54.217327 kernel: loop0: detected capacity change from 0 to 106960 Apr 20 16:02:54.217334 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 20 16:02:54.217342 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Apr 20 16:02:54.217352 systemd[1]: 
/etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 20 16:02:54.217359 systemd[1]: Successfully made /usr/ read-only. Apr 20 16:02:54.217368 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 16:02:54.217376 systemd[1]: Detected virtualization kvm. Apr 20 16:02:54.217382 systemd[1]: Detected architecture x86-64. Apr 20 16:02:54.217389 systemd[1]: Running in initrd. Apr 20 16:02:54.217396 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 16:02:54.217402 systemd[1]: No hostname configured, using default hostname. Apr 20 16:02:54.217411 systemd[1]: Hostname set to . Apr 20 16:02:54.217418 systemd[1]: Queued start job for default target initrd.target. Apr 20 16:02:54.217425 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 20 16:02:54.217432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 16:02:54.217439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 16:02:54.217447 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 20 16:02:54.217456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 16:02:54.217462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 20 16:02:54.217470 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Apr 20 16:02:54.217476 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 16:02:54.217484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 16:02:54.217491 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 20 16:02:54.217499 systemd[1]: Reached target paths.target - Path Units. Apr 20 16:02:54.217506 systemd[1]: Reached target slices.target - Slice Units. Apr 20 16:02:54.217513 systemd[1]: Reached target swap.target - Swaps. Apr 20 16:02:54.217520 systemd[1]: Reached target timers.target - Timer Units. Apr 20 16:02:54.217527 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 16:02:54.217533 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 16:02:54.217540 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 16:02:54.217549 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 20 16:02:54.217556 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 20 16:02:54.217563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 16:02:54.217570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 16:02:54.217577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 20 16:02:54.217583 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 16:02:54.217590 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 20 16:02:54.217599 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 20 16:02:54.217606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 16:02:54.217613 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 20 16:02:54.217620 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 20 16:02:54.217627 systemd[1]: Starting systemd-fsck-usr.service... Apr 20 16:02:54.217636 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 16:02:54.217642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 16:02:54.217650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 16:02:54.217656 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 20 16:02:54.217664 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 16:02:54.217672 systemd[1]: Finished systemd-fsck-usr.service. Apr 20 16:02:54.217679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 16:02:54.217711 systemd-journald[320]: Collecting audit messages is enabled. Apr 20 16:02:54.217731 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 20 16:02:54.217737 kernel: Bridge firewalling registered Apr 20 16:02:54.217744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 16:02:54.217752 kernel: audit: type=1130 audit(1776700974.216:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.217761 systemd-journald[320]: Journal started Apr 20 16:02:54.217776 systemd-journald[320]: Runtime Journal (/run/log/journal/80fc1434d1bc4aaeb954097d38972301) is 6M, max 48M, 42M free. 
Apr 20 16:02:54.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.211930 systemd-modules-load[321]: Inserted module 'br_netfilter' Apr 20 16:02:54.235608 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 16:02:54.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.251628 kernel: audit: type=1130 audit(1776700974.240:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.251820 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 16:02:54.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.268143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 16:02:54.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.288848 kernel: audit: type=1130 audit(1776700974.259:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 16:02:54.288889 kernel: audit: type=1130 audit(1776700974.275:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.291714 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 20 16:02:54.300823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 20 16:02:54.349839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 16:02:54.366323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 16:02:54.394557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 16:02:54.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.405302 kernel: audit: type=1130 audit(1776700974.397:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.405510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 16:02:54.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.412252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 16:02:54.416634 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Apr 20 16:02:54.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.450783 kernel: audit: type=1130 audit(1776700974.411:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.450812 kernel: audit: type=1130 audit(1776700974.426:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.454472 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 20 16:02:54.455000 audit: BPF prog-id=5 op=LOAD Apr 20 16:02:54.470865 kernel: audit: type=1334 audit(1776700974.455:9): prog-id=5 op=LOAD Apr 20 16:02:54.466020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 16:02:54.489947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 16:02:54.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.512571 kernel: audit: type=1130 audit(1776700974.490:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 16:02:54.559535 dracut-cmdline[354]: dracut-109 Apr 20 16:02:54.579834 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a Apr 20 16:02:54.655840 systemd-resolved[355]: Positive Trust Anchors: Apr 20 16:02:54.655891 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 16:02:54.655895 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 16:02:54.655929 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 16:02:54.705591 systemd-resolved[355]: Defaulting to hostname 'linux'. Apr 20 16:02:54.713791 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 16:02:54.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:54.723564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 20 16:02:54.739728 kernel: audit: type=1130 audit(1776700974.723:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:55.113453 kernel: Loading iSCSI transport class v2.0-870. Apr 20 16:02:55.138313 kernel: iscsi: registered transport (tcp) Apr 20 16:02:55.182993 kernel: iscsi: registered transport (qla4xxx) Apr 20 16:02:55.183507 kernel: QLogic iSCSI HBA Driver Apr 20 16:02:55.272729 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 16:02:55.342402 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 16:02:55.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:55.354836 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 16:02:55.686101 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 20 16:02:55.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:55.744413 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 20 16:02:55.761061 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 20 16:02:55.837477 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 20 16:02:55.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 16:02:55.847000 audit: BPF prog-id=6 op=LOAD Apr 20 16:02:55.847000 audit: BPF prog-id=7 op=LOAD Apr 20 16:02:55.847819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 16:02:55.937023 systemd-udevd[588]: Using default interface naming scheme 'v258'. Apr 20 16:02:55.964729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 16:02:55.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:55.979746 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 20 16:02:56.022846 dracut-pre-trigger[664]: rd.md=0: removing MD RAID activation Apr 20 16:02:56.038981 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 16:02:56.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.040000 audit: BPF prog-id=8 op=LOAD Apr 20 16:02:56.040755 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 16:02:56.078811 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 16:02:56.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.090964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 16:02:56.116874 systemd-networkd[722]: lo: Link UP Apr 20 16:02:56.116902 systemd-networkd[722]: lo: Gained carrier Apr 20 16:02:56.120516 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 20 16:02:56.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.127549 systemd[1]: Reached target network.target - Network. Apr 20 16:02:56.203450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 16:02:56.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.219655 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 20 16:02:56.397983 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 20 16:02:56.428908 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 20 16:02:56.460891 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 20 16:02:56.561453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 16:02:56.583659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 20 16:02:56.588598 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 20 16:02:56.588620 kernel: cryptd: max_cpu_qlen set to 1000 Apr 20 16:02:56.625301 kernel: AES CTR mode by8 optimization enabled Apr 20 16:02:56.628972 disk-uuid[778]: Primary Header is updated. Apr 20 16:02:56.628972 disk-uuid[778]: Secondary Entries is updated. Apr 20 16:02:56.628972 disk-uuid[778]: Secondary Header is updated. Apr 20 16:02:56.645330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 16:02:56.645487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 20 16:02:56.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.655637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 16:02:56.674840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 16:02:56.720129 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 16:02:56.720138 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 16:02:56.722929 systemd-networkd[722]: eth0: Link UP Apr 20 16:02:56.723147 systemd-networkd[722]: eth0: Gained carrier Apr 20 16:02:56.723747 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 16:02:56.741997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 16:02:56.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.755292 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 16:02:56.905549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 20 16:02:56.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:56.919732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 16:02:56.927077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 20 16:02:56.928701 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 20 16:02:56.941351 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 20 16:02:57.034938 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 20 16:02:57.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:57.729888 disk-uuid[790]: Warning: The kernel is still using the old partition table. Apr 20 16:02:57.729888 disk-uuid[790]: The new table will be used at the next reboot or after you Apr 20 16:02:57.729888 disk-uuid[790]: run partprobe(8) or kpartx(8) Apr 20 16:02:57.729888 disk-uuid[790]: The operation has completed successfully. Apr 20 16:02:57.773947 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 20 16:02:57.781913 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 20 16:02:57.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:57.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:02:57.873869 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 20 16:02:58.082606 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (902)
Apr 20 16:02:58.094776 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 16:02:58.095576 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 16:02:58.139760 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 16:02:58.139975 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 16:02:58.151334 systemd-networkd[722]: eth0: Gained IPv6LL
Apr 20 16:02:58.173631 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 16:02:58.184760 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 20 16:02:58.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:58.207057 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 20 16:02:58.643531 ignition[921]: Ignition 2.24.0
Apr 20 16:02:58.643580 ignition[921]: Stage: fetch-offline
Apr 20 16:02:58.643698 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Apr 20 16:02:58.643709 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 16:02:58.647443 ignition[921]: parsed url from cmdline: ""
Apr 20 16:02:58.647458 ignition[921]: no config URL provided
Apr 20 16:02:58.647569 ignition[921]: reading system config file "/usr/lib/ignition/user.ign"
Apr 20 16:02:58.647611 ignition[921]: no config at "/usr/lib/ignition/user.ign"
Apr 20 16:02:58.647655 ignition[921]: op(1): [started] loading QEMU firmware config module
Apr 20 16:02:58.647660 ignition[921]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 20 16:02:58.741741 ignition[921]: op(1): [finished] loading QEMU firmware config module
Apr 20 16:02:58.848441 ignition[921]: parsing config with SHA512: b9ef5fa547ac0ffed5395fb06bdc4f440a0241ad30a8f415f422eb2861c41b87a14db18898b686bb3c8047f77ffa1ca54fc7108589535fef9b2ce123d9d6c7e8
Apr 20 16:02:58.864417 unknown[921]: fetched base config from "system"
Apr 20 16:02:58.865020 ignition[921]: fetch-offline: fetch-offline passed
Apr 20 16:02:58.864634 unknown[921]: fetched user config from "qemu"
Apr 20 16:02:58.865075 ignition[921]: Ignition finished successfully
Apr 20 16:02:58.884700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 16:02:58.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:58.905113 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 20 16:02:58.955860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
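Before applying a config, Ignition logs its digest ("parsing config with SHA512: …" above). As best I can tell that is a plain SHA-512 over the config bytes; a sketch with a made-up stand-in config, so the digest below will not match the one in this journal:

```python
import hashlib
import json

# Hypothetical minimal Ignition config -- a stand-in, not the config from this boot.
config = json.dumps({"ignition": {"version": "3.4.0"}}, sort_keys=True).encode()

digest = hashlib.sha512(config).hexdigest()
# SHA-512 hex digests are 128 characters, matching the length of the logged value.
assert len(digest) == 128
print(digest)
```

Recomputing the digest of a saved config and comparing it to the journal line is a cheap way to confirm which config a boot actually consumed.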
Apr 20 16:02:59.087719 ignition[931]: Ignition 2.24.0
Apr 20 16:02:59.087763 ignition[931]: Stage: kargs
Apr 20 16:02:59.088001 ignition[931]: no configs at "/usr/lib/ignition/base.d"
Apr 20 16:02:59.088011 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 16:02:59.129631 ignition[931]: kargs: kargs passed
Apr 20 16:02:59.143331 ignition[931]: Ignition finished successfully
Apr 20 16:02:59.158935 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 20 16:02:59.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:59.187881 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 20 16:02:59.386040 ignition[939]: Ignition 2.24.0
Apr 20 16:02:59.386595 ignition[939]: Stage: disks
Apr 20 16:02:59.386853 ignition[939]: no configs at "/usr/lib/ignition/base.d"
Apr 20 16:02:59.386860 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 16:02:59.387658 ignition[939]: disks: disks passed
Apr 20 16:02:59.387700 ignition[939]: Ignition finished successfully
Apr 20 16:02:59.444786 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 20 16:02:59.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:59.464861 kernel: kauditd_printk_skb: 20 callbacks suppressed
Apr 20 16:02:59.457272 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 20 16:02:59.474119 kernel: audit: type=1130 audit(1776700979.451:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:59.470385 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 20 16:02:59.480113 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 16:02:59.488611 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 16:02:59.497296 systemd[1]: Reached target basic.target - Basic System.
Apr 20 16:02:59.508697 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 20 16:02:59.718575 systemd-fsck[950]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Apr 20 16:02:59.728122 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 20 16:02:59.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:59.749743 kernel: audit: type=1130 audit(1776700979.740:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:02:59.744592 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 20 16:03:00.079721 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none.
Apr 20 16:03:00.080648 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 20 16:03:00.086915 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 20 16:03:00.095129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 16:03:00.104766 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 20 16:03:00.108792 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
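The systemd-fsck summary ("ROOT: clean, 15/456736 files, 38230/456704 blocks") reports used/total inodes and blocks; turning it into utilisation percentages is simple arithmetic:

```python
import re

# e2fsck summary copied from the journal above.
summary = "ROOT: clean, 15/456736 files, 38230/456704 blocks"

m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", summary)
files_used, files_total, blocks_used, blocks_total = map(int, m.groups())

file_pct = 100 * files_used / files_total
block_pct = 100 * blocks_used / blocks_total
print(f"inodes: {file_pct:.2f}% used, blocks: {block_pct:.2f}% used")
```

For this boot that works out to well under 10% block usage, consistent with a freshly provisioned ROOT partition.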
Apr 20 16:03:00.108854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 20 16:03:00.108881 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 16:03:00.148806 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 20 16:03:00.155871 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 20 16:03:00.167546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (958)
Apr 20 16:03:00.183378 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 16:03:00.183532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 16:03:00.255034 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 16:03:00.255557 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 16:03:00.260125 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 16:03:01.204821 kernel: loop1: detected capacity change from 0 to 43472
Apr 20 16:03:01.211362 kernel: loop1: p1 p2 p3
Apr 20 16:03:01.429803 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:01.430011 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:01.430029 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:01.440601 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:01.441102 systemd-confext[1048]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 16:03:01.555686 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:01.935283 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 16:03:01.973777 kernel: loop2: detected capacity change from 0 to 43472
Apr 20 16:03:01.981356 kernel: loop2: p1 p2 p3
Apr 20 16:03:02.167644 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:02.168438 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:02.168577 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:02.172851 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:02.177101 (sd-merge)[1058]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 16:03:02.200648 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:02.662573 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 16:03:02.665928 (sd-merge)[1058]: Using extensions '00-flatcar-default.raw'.
Apr 20 16:03:02.672392 (sd-merge)[1058]: Merged extensions into '/sysroot/etc'.
Apr 20 16:03:02.747674 initrd-setup-root[1065]: /etc 00-flatcar-default Mon 2026-04-20 16:02:54 UTC
Apr 20 16:03:02.761790 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 20 16:03:02.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:02.783776 kernel: audit: type=1130 audit(1776700982.769:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:02.789027 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 20 16:03:02.821713 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 20 16:03:02.871118 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 20 16:03:02.878390 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 16:03:02.910376 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 20 16:03:02.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:02.923522 kernel: audit: type=1130 audit(1776700982.910:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:02.951989 ignition[1075]: INFO : Ignition 2.24.0
Apr 20 16:03:02.951989 ignition[1075]: INFO : Stage: mount
Apr 20 16:03:02.968618 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 16:03:02.968618 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 16:03:02.981728 ignition[1075]: INFO : mount: mount passed
Apr 20 16:03:02.981728 ignition[1075]: INFO : Ignition finished successfully
Apr 20 16:03:02.985945 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 20 16:03:02.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:03.044682 kernel: audit: type=1130 audit(1776700982.999:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:03.058780 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 20 16:03:03.121112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 16:03:03.266811 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1086)
Apr 20 16:03:03.285771 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 16:03:03.286100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 16:03:03.345574 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 16:03:03.346889 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 16:03:03.448146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 16:03:04.166957 ignition[1103]: INFO : Ignition 2.24.0
Apr 20 16:03:04.166957 ignition[1103]: INFO : Stage: files
Apr 20 16:03:04.177385 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 16:03:04.177385 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 16:03:04.177385 ignition[1103]: DEBUG : files: compiled without relabeling support, skipping
Apr 20 16:03:04.199841 ignition[1103]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 20 16:03:04.240892 ignition[1103]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 20 16:03:04.259716 ignition[1103]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 20 16:03:04.273630 ignition[1103]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 20 16:03:04.293914 unknown[1103]: wrote ssh authorized keys file for user: core
Apr 20 16:03:04.299785 ignition[1103]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 20 16:03:04.299785 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 16:03:04.299785 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 20 16:03:04.424048 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 20 16:03:04.728869 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 16:03:04.739904 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 16:03:04.802116 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 16:03:04.802116 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 16:03:04.802116 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 20 16:03:04.802116 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 20 16:03:04.802116 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 20 16:03:04.802116 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 20 16:03:05.234464 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 20 16:03:06.394080 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 20 16:03:06.394080 ignition[1103]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 20 16:03:06.444072 ignition[1103]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 20 16:03:06.720004 ignition[1103]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 16:03:06.735595 ignition[1103]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 16:03:06.744833 ignition[1103]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 20 16:03:06.744833 ignition[1103]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 20 16:03:06.744833 ignition[1103]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 20 16:03:06.768812 ignition[1103]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 16:03:06.768812 ignition[1103]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 16:03:06.768812 ignition[1103]: INFO : files: files passed
Apr 20 16:03:06.768812 ignition[1103]: INFO : Ignition finished successfully
Apr 20 16:03:06.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:06.849921 kernel: audit: type=1130 audit(1776700986.780:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:06.776941 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 20 16:03:06.846116 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 20 16:03:06.869859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 20 16:03:06.927088 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 20 16:03:06.928218 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 20 16:03:06.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:06.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:06.954732 kernel: audit: type=1130 audit(1776700986.935:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:06.954753 initrd-setup-root-after-ignition[1134]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 20 16:03:06.964535 kernel: audit: type=1131 audit(1776700986.935:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:06.964586 initrd-setup-root-after-ignition[1136]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 16:03:06.964586 initrd-setup-root-after-ignition[1136]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 16:03:06.978696 initrd-setup-root-after-ignition[1140]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 16:03:06.996342 kernel: loop3: detected capacity change from 0 to 43472
Apr 20 16:03:07.003429 kernel: loop3: p1 p2 p3
Apr 20 16:03:07.056598 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.056671 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:07.056687 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:07.062709 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:07.062950 systemd-confext[1142]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument
Apr 20 16:03:07.073624 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.243688 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 16:03:07.296032 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 16:03:07.347822 kernel: loop4: p1 p2 p3
Apr 20 16:03:07.399465 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.399789 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:07.400062 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:07.408342 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:07.408601 (sd-merge)[1154]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument
Apr 20 16:03:07.414520 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.508329 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 16:03:07.508517 (sd-merge)[1154]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 16:03:07.529212 kernel: device-mapper: ioctl: remove_all left 2 open device(s)
Apr 20 16:03:07.532278 kernel: loop5: detected capacity change from 0 to 378016
Apr 20 16:03:07.536206 kernel: loop5: p1 p2 p3
Apr 20 16:03:07.573116 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.573382 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:07.573399 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:07.579626 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:07.579853 systemd-sysext[1162]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 16:03:07.593567 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.724624 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 16:03:07.802800 kernel: loop4: detected capacity change from 0 to 228704
Apr 20 16:03:07.880480 kernel: loop6: detected capacity change from 0 to 178200
Apr 20 16:03:07.886420 kernel: loop6: p1 p2 p3
Apr 20 16:03:07.938662 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:07.938861 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:07.938881 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:07.945647 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:07.946003 systemd-sysext[1162]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument
Apr 20 16:03:07.957313 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:08.255647 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 16:03:08.313396 kernel: loop7: detected capacity change from 0 to 378016
Apr 20 16:03:08.321346 kernel: loop7: p1 p2 p3
Apr 20 16:03:08.394331 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:08.394497 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:08.394514 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:08.399400 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:08.399833 (sd-merge)[1179]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 16:03:08.413736 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:08.656397 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 16:03:08.673305 kernel: loop1: detected capacity change from 0 to 228704
Apr 20 16:03:08.725603 kernel: loop3: detected capacity change from 0 to 178200
Apr 20 16:03:08.731717 kernel: loop3: p1 p2 p3
Apr 20 16:03:08.743549 kernel: loop3: p1 p2 p3
Apr 20 16:03:08.796268 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:08.796332 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:08.829821 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:08.829885 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:08.833505 (sd-merge)[1179]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:3) failed: Invalid argument
Apr 20 16:03:08.849117 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:08.960371 kernel: erofs: (device dm-3): mounted with root inode @ nid 39.
Apr 20 16:03:08.967722 (sd-merge)[1179]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.33.8-x86-64.raw'.
Apr 20 16:03:08.974349 (sd-merge)[1179]: Merged extensions into '/sysroot/usr'.
Apr 20 16:03:08.986558 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 16:03:08.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.050375 kernel: audit: type=1130 audit(1776700988.993:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:08.995206 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
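The `(sd-merge)` entries above name the sysext images that get overlaid onto /sysroot/usr. Extracting that list from such a line is a one-regex job (the regex is mine; systemd makes no promise about this output format):

```python
import re

# "Using extensions ..." line copied from the journal above.
line = ("Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', "
        "'kubernetes-v1.33.8-x86-64.raw'.")

# Each extension image name is single-quoted in the message.
extensions = re.findall(r"'([^']+)'", line)
print(extensions)
```

This is handy when grepping a boot log to confirm which extension versions (e.g. the kubernetes-v1.33.8 image) were actually merged.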
Apr 20 16:03:09.064853 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 20 16:03:09.130858 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 20 16:03:09.131849 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 20 16:03:09.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.162370 kernel: audit: type=1130 audit(1776700989.134:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.134916 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies.
Apr 20 16:03:09.168282 kernel: audit: type=1131 audit(1776700989.134:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.135430 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 20 16:03:09.159963 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 20 16:03:09.166912 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 20 16:03:09.168555 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 20 16:03:09.259018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 16:03:09.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.272672 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 20 16:03:09.280467 kernel: audit: type=1130 audit(1776700989.267:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.389981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 20 16:03:09.398498 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 16:03:09.409553 systemd[1]: Stopped target timers.target - Timer Units.
Apr 20 16:03:09.417285 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 20 16:03:09.422630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 16:03:09.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.433463 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 20 16:03:09.440290 kernel: audit: type=1131 audit(1776700989.433:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:09.451867 systemd[1]: Stopped target basic.target - Basic System.
Apr 20 16:03:09.455972 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 20 16:03:09.467840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 16:03:09.480861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 20 16:03:09.492484 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 20 16:03:09.506650 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 20 16:03:09.520340 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 16:03:09.526317 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 20 16:03:09.534803 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 20 16:03:09.539370 systemd[1]: Stopped target swap.target - Swaps. Apr 20 16:03:09.545548 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 20 16:03:09.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.563952 kernel: audit: type=1131 audit(1776700989.550:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.546125 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 20 16:03:09.553028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 20 16:03:09.568671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 16:03:09.583504 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 20 16:03:09.585055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 16:03:09.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 16:03:09.655741 kernel: audit: type=1131 audit(1776700989.602:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.590047 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 20 16:03:09.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.590399 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 20 16:03:09.602899 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 20 16:03:09.603061 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 20 16:03:09.662134 systemd[1]: Stopped target paths.target - Path Units. Apr 20 16:03:09.670515 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 20 16:03:09.671472 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 16:03:09.673643 systemd[1]: Stopped target slices.target - Slice Units. Apr 20 16:03:09.695969 systemd[1]: Stopped target sockets.target - Socket Units. Apr 20 16:03:09.704486 systemd[1]: iscsid.socket: Deactivated successfully. Apr 20 16:03:09.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.704695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 16:03:09.713037 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 20 16:03:09.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 16:03:09.714097 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 16:03:09.725612 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Apr 20 16:03:09.727271 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Apr 20 16:03:09.741965 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 20 16:03:09.742284 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 16:03:09.759707 systemd[1]: ignition-files.service: Deactivated successfully. Apr 20 16:03:09.763431 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 20 16:03:09.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.772588 systemd[1]: ignition-files.service: Consumed 2.652s CPU time. Apr 20 16:03:09.797613 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 20 16:03:09.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.807972 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 20 16:03:09.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.818643 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 20 16:03:09.818852 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 16:03:09.830786 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 20 16:03:09.830980 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 20 16:03:09.849803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 20 16:03:09.853844 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 16:03:09.901738 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 20 16:03:09.936363 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 20 16:03:09.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.949608 ignition[1208]: INFO : Ignition 2.24.0 Apr 20 16:03:09.949608 ignition[1208]: INFO : Stage: umount Apr 20 16:03:09.949608 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 16:03:09.949608 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 16:03:09.949608 ignition[1208]: INFO : umount: umount passed Apr 20 16:03:09.949608 ignition[1208]: INFO : Ignition finished successfully Apr 20 16:03:09.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 16:03:09.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.952988 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 20 16:03:09.953129 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 20 16:03:09.954657 systemd[1]: Stopped target network.target - Network. Apr 20 16:03:10.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.954905 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 20 16:03:09.954936 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 20 16:03:09.956370 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 20 16:03:09.956431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 20 16:03:09.957544 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 20 16:03:10.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.960126 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 20 16:03:10.058000 audit: BPF prog-id=8 op=UNLOAD Apr 20 16:03:09.968711 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Apr 20 16:03:09.968995 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 20 16:03:10.064000 audit: BPF prog-id=5 op=UNLOAD Apr 20 16:03:09.969980 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 20 16:03:09.970100 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 20 16:03:09.997071 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 20 16:03:09.997910 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 20 16:03:10.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:09.998099 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 20 16:03:10.022926 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 20 16:03:10.023287 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 20 16:03:10.042756 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 20 16:03:10.042950 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 20 16:03:10.061965 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 20 16:03:10.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.083676 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 20 16:03:10.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.084309 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Apr 20 16:03:10.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.085974 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 20 16:03:10.086087 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 20 16:03:10.151141 systemd[1]: initrd-setup-root.service: Consumed 1.117s CPU time. Apr 20 16:03:10.165877 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 20 16:03:10.180283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 20 16:03:10.182614 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 16:03:10.184056 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 20 16:03:10.184105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 20 16:03:10.199120 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 20 16:03:10.199750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 20 16:03:10.219527 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 16:03:10.284093 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 20 16:03:10.285524 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 16:03:10.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.299707 systemd[1]: systemd-udevd.service: Consumed 2.433s CPU time. Apr 20 16:03:10.301981 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 20 16:03:10.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.302752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 20 16:03:10.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.308356 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 20 16:03:10.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.308455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 20 16:03:10.317854 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 20 16:03:10.319218 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 20 16:03:10.324947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 20 16:03:10.324993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 16:03:10.346567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 20 16:03:10.358036 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 20 16:03:10.358225 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 16:03:10.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.375970 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Apr 20 16:03:10.378867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 16:03:10.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.392536 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 20 16:03:10.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.392606 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 16:03:10.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.442896 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 20 16:03:10.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.443061 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 16:03:10.460127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 16:03:10.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.462008 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 16:03:10.472024 systemd[1]: network-cleanup.service: Deactivated successfully. 
Apr 20 16:03:10.473053 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 20 16:03:10.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:10.493301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 20 16:03:10.497905 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 20 16:03:10.512450 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 20 16:03:10.525841 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 20 16:03:10.668897 systemd[1]: Switching root. Apr 20 16:03:10.761978 systemd-journald[320]: Journal stopped Apr 20 16:03:16.949825 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Apr 20 16:03:16.949978 kernel: SELinux: policy capability network_peer_controls=1 Apr 20 16:03:16.950006 kernel: SELinux: policy capability open_perms=1 Apr 20 16:03:16.950019 kernel: SELinux: policy capability extended_socket_class=1 Apr 20 16:03:16.950035 kernel: SELinux: policy capability always_check_network=0 Apr 20 16:03:16.950049 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 20 16:03:16.950062 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 20 16:03:16.950079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 20 16:03:16.950092 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 20 16:03:16.950108 kernel: SELinux: policy capability userspace_initial_context=0 Apr 20 16:03:16.950123 systemd[1]: Successfully loaded SELinux policy in 174.923ms. 
Apr 20 16:03:16.950150 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.810ms. Apr 20 16:03:16.950167 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 16:03:16.950902 systemd[1]: Detected virtualization kvm. Apr 20 16:03:16.952016 systemd[1]: Detected architecture x86-64. Apr 20 16:03:16.952246 systemd[1]: Detected first boot. Apr 20 16:03:16.952299 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 16:03:16.952317 kernel: kauditd_printk_skb: 36 callbacks suppressed Apr 20 16:03:16.952332 kernel: audit: type=1334 audit(1776700992.001:83): prog-id=9 op=LOAD Apr 20 16:03:16.952346 kernel: audit: type=1334 audit(1776700992.001:84): prog-id=9 op=UNLOAD Apr 20 16:03:16.952360 zram_generator::config[1256]: No configuration found. Apr 20 16:03:16.952375 kernel: Guest personality initialized and is inactive Apr 20 16:03:16.952391 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 20 16:03:16.952404 kernel: Initialized host personality Apr 20 16:03:16.952417 kernel: NET: Registered PF_VSOCK protocol family Apr 20 16:03:16.952430 systemd-ssh-generator[1252]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 16:03:16.952448 (sd-exec-[1237]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 16:03:16.952463 systemd[1]: Applying preset policy. Apr 20 16:03:16.952478 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. 
Apr 20 16:03:16.952495 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'. Apr 20 16:03:16.952508 systemd[1]: Populated /etc with preset unit settings. Apr 20 16:03:16.952522 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 16:03:16.952536 kernel: audit: type=1334 audit(1776700994.713:85): prog-id=10 op=LOAD Apr 20 16:03:16.952552 kernel: audit: type=1334 audit(1776700994.714:86): prog-id=2 op=UNLOAD Apr 20 16:03:16.952563 kernel: audit: type=1334 audit(1776700994.718:87): prog-id=11 op=LOAD Apr 20 16:03:16.952577 kernel: audit: type=1334 audit(1776700994.719:88): prog-id=12 op=LOAD Apr 20 16:03:16.952589 kernel: audit: type=1334 audit(1776700994.720:89): prog-id=3 op=UNLOAD Apr 20 16:03:16.952602 kernel: audit: type=1334 audit(1776700994.720:90): prog-id=4 op=UNLOAD Apr 20 16:03:16.952617 kernel: audit: type=1334 audit(1776700994.751:91): prog-id=13 op=LOAD Apr 20 16:03:16.952630 kernel: audit: type=1334 audit(1776700994.758:92): prog-id=10 op=UNLOAD Apr 20 16:03:16.952644 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 20 16:03:16.952660 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 20 16:03:16.952674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 20 16:03:16.952689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 20 16:03:16.952704 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 20 16:03:16.952721 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 20 16:03:16.952733 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 20 16:03:16.952745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Apr 20 16:03:16.952759 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 20 16:03:16.952775 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 20 16:03:16.952789 systemd[1]: Created slice user.slice - User and Session Slice. Apr 20 16:03:16.952805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 16:03:16.952820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 16:03:16.952832 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 20 16:03:16.952848 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 20 16:03:16.952864 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 20 16:03:16.952879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 16:03:16.952893 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 20 16:03:16.952906 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 16:03:16.952920 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 16:03:16.952933 systemd[1]: Reached target imports.target - Image Downloads. Apr 20 16:03:16.952948 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 20 16:03:16.952965 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 20 16:03:16.952980 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 20 16:03:16.952993 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 20 16:03:16.953007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 20 16:03:16.953021 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 20 16:03:16.953035 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. Apr 20 16:03:16.953049 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 20 16:03:16.953065 systemd[1]: Reached target slices.target - Slice Units. Apr 20 16:03:16.953079 systemd[1]: Reached target swap.target - Swaps. Apr 20 16:03:16.953093 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 20 16:03:16.953111 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 16:03:16.953126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 20 16:03:16.953141 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 20 16:03:16.953818 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Apr 20 16:03:16.953853 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 16:03:16.953868 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 20 16:03:16.953882 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Apr 20 16:03:16.953897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 16:03:16.953913 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 20 16:03:16.953928 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 20 16:03:16.953943 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Apr 20 16:03:16.953959 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Apr 20 16:03:16.953972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Apr 20 16:03:16.953987 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. Apr 20 16:03:16.954001 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 20 16:03:16.954014 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 20 16:03:16.954029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 20 16:03:16.954043 systemd[1]: Mounting media.mount - External Media Directory... Apr 20 16:03:16.954060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 16:03:16.954075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 20 16:03:16.954090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 20 16:03:16.954104 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Apr 20 16:03:16.954118 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 20 16:03:16.954133 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 20 16:03:16.954149 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Apr 20 16:03:16.954919 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 20 16:03:16.954941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 20 16:03:16.954956 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 16:03:16.963287 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 20 16:03:16.963398 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). 
Apr 20 16:03:16.963414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 20 16:03:16.963429 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore). Apr 20 16:03:16.963443 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 20 16:03:16.963458 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Apr 20 16:03:16.963471 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 20 16:03:16.963492 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 20 16:03:16.963505 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 20 16:03:16.963518 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 20 16:03:16.963532 systemd[1]: Stopped systemd-fsck-usr.service. Apr 20 16:03:16.963549 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 20 16:03:16.963562 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 16:03:16.963580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 16:03:16.963594 kernel: fuse: init (API version 7.41) Apr 20 16:03:16.963609 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 16:03:16.963622 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 20 16:03:16.963635 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 20 16:03:16.963651 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 20 16:03:16.963665 kernel: ACPI: bus type drm_connector registered
Apr 20 16:03:16.963677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 16:03:16.963690 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 16:03:16.963746 systemd-journald[1326]: Collecting audit messages is enabled.
Apr 20 16:03:16.963781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 20 16:03:16.963793 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 20 16:03:16.963805 systemd[1]: Mounted media.mount - External Media Directory.
Apr 20 16:03:16.963819 systemd-journald[1326]: Journal started
Apr 20 16:03:16.965490 systemd-journald[1326]: Runtime Journal (/run/log/journal/80fc1434d1bc4aaeb954097d38972301) is 6M, max 48M, 42M free.
Apr 20 16:03:15.788000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 20 16:03:16.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:16.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:16.660000 audit: BPF prog-id=15 op=UNLOAD
Apr 20 16:03:16.660000 audit: BPF prog-id=14 op=UNLOAD
Apr 20 16:03:16.662000 audit: BPF prog-id=16 op=LOAD
Apr 20 16:03:16.663000 audit: BPF prog-id=17 op=LOAD
Apr 20 16:03:16.664000 audit: BPF prog-id=18 op=LOAD
Apr 20 16:03:16.930000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 20 16:03:16.930000 audit[1326]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd4a1d30d0 a2=4000 a3=0 items=0 ppid=1 pid=1326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 16:03:16.930000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 20 16:03:14.596626 systemd[1]: Queued start job for default target multi-user.target.
Apr 20 16:03:14.782887 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 20 16:03:14.787086 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 20 16:03:14.798311 systemd[1]: systemd-journald.service: Consumed 1.707s CPU time.
Apr 20 16:03:16.982523 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 16:03:16.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:16.986772 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 20 16:03:16.993842 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 20 16:03:16.999939 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 20 16:03:17.030796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 20 16:03:17.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.041247 kernel: kauditd_printk_skb: 20 callbacks suppressed
Apr 20 16:03:17.041803 kernel: audit: type=1130 audit(1776700997.036:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.041572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 16:03:17.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.061945 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 20 16:03:17.062884 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 20 16:03:17.082464 kernel: audit: type=1130 audit(1776700997.061:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.083017 kernel: audit: type=1130 audit(1776700997.082:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.085673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 20 16:03:17.086979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 20 16:03:17.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.110473 kernel: audit: type=1131 audit(1776700997.082:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.115872 kernel: audit: type=1130 audit(1776700997.114:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.117032 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 20 16:03:17.119077 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 20 16:03:17.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.132010 kernel: audit: type=1131 audit(1776700997.114:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.151992 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 16:03:17.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.154985 kernel: audit: type=1130 audit(1776700997.138:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.155017 kernel: audit: type=1131 audit(1776700997.139:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.178576 kernel: audit: type=1130 audit(1776700997.166:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.180575 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 16:03:17.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.198720 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 20 16:03:17.246042 kernel: audit: type=1130 audit(1776700997.187:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.250760 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 20 16:03:17.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.292044 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 16:03:17.299997 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 20 16:03:17.315639 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 20 16:03:17.323052 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 20 16:03:17.328796 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 20 16:03:17.329704 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 16:03:17.342112 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 20 16:03:17.354699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 20 16:03:17.358999 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/...
Apr 20 16:03:17.381774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 20 16:03:17.444411 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 20 16:03:17.448800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 20 16:03:17.460060 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 20 16:03:17.482135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 20 16:03:17.495709 systemd-journald[1326]: Time spent on flushing to /var/log/journal/80fc1434d1bc4aaeb954097d38972301 is 120.181ms for 1301 entries.
Apr 20 16:03:17.495709 systemd-journald[1326]: System Journal (/var/log/journal/80fc1434d1bc4aaeb954097d38972301) is 8M, max 163.5M, 155.5M free.
Apr 20 16:03:17.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.659647 systemd-journald[1326]: Received client request to flush runtime journal.
Apr 20 16:03:17.496526 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 20 16:03:17.521871 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials...
Apr 20 16:03:17.543653 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 16:03:17.554993 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 20 16:03:17.561002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 20 16:03:17.583520 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 20 16:03:17.600119 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 20 16:03:17.648772 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 20 16:03:17.665800 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Apr 20 16:03:17.665814 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Apr 20 16:03:17.671116 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 20 16:03:17.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.679500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 20 16:03:17.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.686877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 16:03:17.696059 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 16:03:17.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.704906 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials.
Apr 20 16:03:17.708635 kernel: loop4: p1 p2 p3
Apr 20 16:03:17.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.729628 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 20 16:03:17.737843 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 20 16:03:17.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.836137 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:17.838003 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:17.838108 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:17.841337 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:17.845728 systemd-confext[1377]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 16:03:17.855606 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:17.892592 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 20 16:03:17.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:17.913000 audit: BPF prog-id=19 op=LOAD
Apr 20 16:03:17.914000 audit: BPF prog-id=20 op=LOAD
Apr 20 16:03:17.914000 audit: BPF prog-id=21 op=LOAD
Apr 20 16:03:17.917637 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Apr 20 16:03:17.927000 audit: BPF prog-id=22 op=LOAD
Apr 20 16:03:17.931443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 20 16:03:17.937000 audit: BPF prog-id=23 op=LOAD
Apr 20 16:03:17.940608 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 20 16:03:17.948122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 20 16:03:17.958050 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 20 16:03:17.966648 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun...
Apr 20 16:03:17.992000 audit: BPF prog-id=24 op=LOAD
Apr 20 16:03:17.994000 audit: BPF prog-id=25 op=LOAD
Apr 20 16:03:17.994000 audit: BPF prog-id=26 op=LOAD
Apr 20 16:03:17.997906 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 20 16:03:18.059489 kernel: tun: Universal TUN/TAP device driver, 1.6
Apr 20 16:03:18.064043 systemd[1]: modprobe@tun.service: Deactivated successfully.
Apr 20 16:03:18.066662 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun.
Apr 20 16:03:18.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.072000 audit: BPF prog-id=27 op=LOAD
Apr 20 16:03:18.072000 audit: BPF prog-id=28 op=LOAD
Apr 20 16:03:18.072000 audit: BPF prog-id=29 op=LOAD
Apr 20 16:03:18.075466 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Apr 20 16:03:18.089091 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Apr 20 16:03:18.089106 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Apr 20 16:03:18.133685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 16:03:18.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.179413 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 20 16:03:18.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.220987 systemd-nsresourced[1406]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Apr 20 16:03:18.232759 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Apr 20 16:03:18.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.373478 systemd-oomd[1398]: No swap; memory pressure usage will be degraded
Apr 20 16:03:18.377321 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Apr 20 16:03:18.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.432378 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 20 16:03:18.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.441664 systemd[1]: Reached target time-set.target - System Time Set.
Apr 20 16:03:18.451777 systemd-resolved[1399]: Positive Trust Anchors:
Apr 20 16:03:18.452924 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 20 16:03:18.452941 systemd-resolved[1399]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 20 16:03:18.452970 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 20 16:03:18.474579 systemd-resolved[1399]: Defaulting to hostname 'linux'.
Apr 20 16:03:18.482648 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 20 16:03:18.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:18.490242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 20 16:03:19.386703 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 20 16:03:19.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:19.393000 audit: BPF prog-id=7 op=UNLOAD
Apr 20 16:03:19.393000 audit: BPF prog-id=6 op=UNLOAD
Apr 20 16:03:19.394000 audit: BPF prog-id=30 op=LOAD
Apr 20 16:03:19.395000 audit: BPF prog-id=31 op=LOAD
Apr 20 16:03:19.398255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 16:03:19.573374 systemd-udevd[1427]: Using default interface naming scheme 'v258'.
Apr 20 16:03:19.832087 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 16:03:19.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:19.838000 audit: BPF prog-id=32 op=LOAD
Apr 20 16:03:19.840873 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 20 16:03:20.050819 systemd-networkd[1429]: lo: Link UP
Apr 20 16:03:20.050834 systemd-networkd[1429]: lo: Gained carrier
Apr 20 16:03:20.057660 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 20 16:03:20.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:20.067064 systemd[1]: Reached target network.target - Network.
Apr 20 16:03:20.074748 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 20 16:03:20.083055 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 20 16:03:20.122080 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 20 16:03:20.134585 systemd-networkd[1429]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 16:03:20.134596 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 20 16:03:20.136637 systemd-networkd[1429]: eth0: Link UP
Apr 20 16:03:20.137146 systemd-networkd[1429]: eth0: Gained carrier
Apr 20 16:03:20.137255 systemd-networkd[1429]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 16:03:20.141870 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 20 16:03:20.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:20.159698 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 20 16:03:20.163863 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Apr 20 16:03:22.069858 systemd-resolved[1399]: Clock change detected. Flushing caches.
Apr 20 16:03:22.070922 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 20 16:03:22.071333 systemd-timesyncd[1400]: Initial clock synchronization to Mon 2026-04-20 16:03:22.069762 UTC.
Apr 20 16:03:22.265148 kernel: mousedev: PS/2 mouse device common for all mice
Apr 20 16:03:22.293224 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 20 16:03:22.352640 kernel: ACPI: button: Power Button [PWRF]
Apr 20 16:03:22.387042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 20 16:03:22.409916 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 20 16:03:22.428626 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 20 16:03:22.429143 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 20 16:03:22.430440 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 20 16:03:22.561882 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 20 16:03:22.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:22.887563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 16:03:22.909803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 16:03:22.911377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 16:03:22.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:22.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:22.926482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 16:03:23.073659 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 20 16:03:23.196607 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 16:03:23.232264 kernel: loop4: p1 p2 p3
Apr 20 16:03:23.247531 kernel: loop4: p1 p2 p3
Apr 20 16:03:23.385902 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:23.386672 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:23.386945 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:23.386744 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 16:03:23.392555 (sd-merge)[1493]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 16:03:23.393533 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:23.403496 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:23.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:23.473108 systemd-networkd[1429]: eth0: Gained IPv6LL
Apr 20 16:03:23.483775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 20 16:03:23.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:23.489974 systemd[1]: Reached target network-online.target - Network is Online.
Apr 20 16:03:23.550134 (sd-merge)[1493]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 16:03:23.550724 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 20 16:03:23.556991 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/.
Apr 20 16:03:23.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:23.570268 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 16:03:23.593139 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 20 16:03:23.631355 kernel: loop4: detected capacity change from 0 to 228704
Apr 20 16:03:23.733501 kernel: loop4: detected capacity change from 0 to 378016
Apr 20 16:03:23.739321 kernel: loop4: p1 p2 p3
Apr 20 16:03:23.795723 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:23.796135 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:23.796230 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:23.802976 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:23.803695 systemd-sysext[1504]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 16:03:23.816928 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:24.041139 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 16:03:24.139543 kernel: loop4: detected capacity change from 0 to 178200
Apr 20 16:03:24.145405 kernel: loop4: p1 p2 p3
Apr 20 16:03:24.163727 kernel: loop4: p1 p2 p3
Apr 20 16:03:24.263948 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:24.265929 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:24.266853 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:24.270735 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:24.275016 systemd-sysext[1504]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 16:03:24.285685 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:24.393931 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 16:03:24.550664 kernel: loop4: detected capacity change from 0 to 228704
Apr 20 16:03:24.674515 kernel: loop5: detected capacity change from 0 to 378016
Apr 20 16:03:24.684342 kernel: loop5: p1 p2 p3
Apr 20 16:03:24.702505 kernel: loop5: p1 p2 p3
Apr 20 16:03:24.866074 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:24.866609 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:24.866996 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:24.871923 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:24.876579 (sd-merge)[1524]: device-mapper: reload ioctl on loop5p1-verity (253:4) failed: Invalid argument
Apr 20 16:03:24.889802 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:25.207532 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 16:03:25.240700 kernel: loop6: detected capacity change from 0 to 178200
Apr 20 16:03:25.252407 kernel: loop6: p1 p2 p3
Apr 20 16:03:25.270512 kernel: loop6: p1 p2 p3
Apr 20 16:03:25.330271 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:25.330419 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 16:03:25.330491 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 16:03:25.334464 kernel: device-mapper: ioctl: error adding target to table
Apr 20 16:03:25.339814 (sd-merge)[1524]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument
Apr 20 16:03:25.347320 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 16:03:25.511742 kernel: erofs: (device dm-5): mounted with root inode @ nid 39.
Apr 20 16:03:25.518113 (sd-merge)[1524]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 16:03:25.532221 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 20 16:03:25.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:25.550955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 16:03:25.554936 kernel: kauditd_printk_skb: 44 callbacks suppressed
Apr 20 16:03:25.554964 kernel: audit: type=1130 audit(1776701005.538:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:25.595786 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 16:03:25.689697 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 16:03:25.689805 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 16:03:25.690109 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 16:03:25.691554 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Apr 20 16:03:25.692246 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Apr 20 16:03:25.709102 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 16:03:25.709890 systemd-tmpfiles[1541]: Skipping /boot
Apr 20 16:03:25.747060 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 16:03:25.747127 systemd-tmpfiles[1541]: Skipping /boot
Apr 20 16:03:25.860949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 16:03:25.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:25.880257 kernel: audit: type=1130 audit(1776701005.867:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 16:03:25.880785 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 20 16:03:25.894846 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 20 16:03:25.933572 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 20 16:03:25.947047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 20 16:03:25.965462 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 20 16:03:26.084000 audit[1558]: AUDIT1127 pid=1558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 16:03:26.127994 kernel: audit: type=1127 audit(1776701006.084:167): pid=1558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 20 16:03:26.127936 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 20 16:03:26.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:26.156320 kernel: audit: type=1130 audit(1776701006.139:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 16:03:26.185000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 16:03:26.189365 augenrules[1573]: No rules Apr 20 16:03:26.190389 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 20 16:03:26.185000 audit[1573]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff16f77d70 a2=420 a3=0 items=0 ppid=1548 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 16:03:26.197622 kernel: audit: type=1305 audit(1776701006.185:169): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 20 16:03:26.197701 kernel: audit: type=1300 audit(1776701006.185:169): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff16f77d70 a2=420 a3=0 items=0 ppid=1548 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 16:03:26.185000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 16:03:26.214697 kernel: audit: type=1327 audit(1776701006.185:169): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 20 16:03:26.226661 systemd[1]: audit-rules.service: Deactivated successfully. Apr 20 16:03:26.227476 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 20 16:03:26.234879 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 20 16:03:26.246870 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 20 16:03:28.490106 ldconfig[1550]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 20 16:03:28.511984 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Apr 20 16:03:28.521659 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 20 16:03:28.753987 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 20 16:03:28.762009 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 16:03:28.771415 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 20 16:03:28.781064 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 20 16:03:28.794974 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 20 16:03:28.806216 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 20 16:03:28.816560 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 20 16:03:28.826925 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 20 16:03:28.835084 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 20 16:03:28.841921 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 20 16:03:28.848575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 20 16:03:28.849901 systemd[1]: Reached target paths.target - Path Units. Apr 20 16:03:28.855633 systemd[1]: Reached target timers.target - Timer Units. Apr 20 16:03:28.876762 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 20 16:03:28.898552 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 20 16:03:28.954915 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 20 16:03:29.001578 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Apr 20 16:03:29.010099 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 20 16:03:29.018534 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket. Apr 20 16:03:29.025624 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket. Apr 20 16:03:29.045955 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 20 16:03:29.071099 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 16:03:29.079626 systemd[1]: Reached target basic.target - Basic System. Apr 20 16:03:29.087738 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 20 16:03:29.087804 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 20 16:03:29.097866 systemd[1]: Starting containerd.service - containerd container runtime... Apr 20 16:03:29.115716 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 20 16:03:29.127709 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 20 16:03:29.142754 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 20 16:03:29.171856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 20 16:03:29.190632 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 20 16:03:29.222636 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 20 16:03:29.230117 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 20 16:03:29.235650 jq[1591]: false Apr 20 16:03:29.238647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:03:29.270749 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Apr 20 16:03:29.287052 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 20 16:03:29.295735 google_oslogin_nss_cache[1593]: oslogin_cache_refresh[1593]: Refreshing passwd entry cache Apr 20 16:03:29.297405 oslogin_cache_refresh[1593]: Refreshing passwd entry cache Apr 20 16:03:29.301525 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 20 16:03:29.309086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 20 16:03:29.355812 extend-filesystems[1592]: Found /dev/vda6 Apr 20 16:03:29.349758 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 20 16:03:29.449777 google_oslogin_nss_cache[1593]: oslogin_cache_refresh[1593]: Failure getting users, quitting Apr 20 16:03:29.449777 google_oslogin_nss_cache[1593]: oslogin_cache_refresh[1593]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 20 16:03:29.449777 google_oslogin_nss_cache[1593]: oslogin_cache_refresh[1593]: Refreshing group entry cache Apr 20 16:03:29.448484 oslogin_cache_refresh[1593]: Failure getting users, quitting Apr 20 16:03:29.449866 extend-filesystems[1592]: Found /dev/vda9 Apr 20 16:03:29.449866 extend-filesystems[1592]: Checking size of /dev/vda9 Apr 20 16:03:29.448506 oslogin_cache_refresh[1593]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 20 16:03:29.448692 oslogin_cache_refresh[1593]: Refreshing group entry cache Apr 20 16:03:29.474331 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 20 16:03:29.480919 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 20 16:03:29.481011 oslogin_cache_refresh[1593]: Failure getting groups, quitting Apr 20 16:03:29.481433 google_oslogin_nss_cache[1593]: oslogin_cache_refresh[1593]: Failure getting groups, quitting Apr 20 16:03:29.481433 google_oslogin_nss_cache[1593]: oslogin_cache_refresh[1593]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 16:03:29.481024 oslogin_cache_refresh[1593]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 20 16:03:29.492717 systemd[1]: Starting update-engine.service - Update Engine... Apr 20 16:03:29.526236 extend-filesystems[1592]: Resized partition /dev/vda9 Apr 20 16:03:29.527782 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 20 16:03:29.562588 extend-filesystems[1625]: resize2fs 1.47.3 (8-Jul-2025) Apr 20 16:03:29.577067 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 20 16:03:29.584904 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 20 16:03:29.586750 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 20 16:03:29.587134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 20 16:03:29.587769 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 20 16:03:29.588807 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 20 16:03:29.651104 systemd[1]: motdgen.service: Deactivated successfully. Apr 20 16:03:29.655088 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 20 16:03:29.666913 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 20 16:03:29.682347 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 20 16:03:29.683756 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 20 16:03:29.718746 jq[1621]: true Apr 20 16:03:29.758241 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 20 16:03:29.878805 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 20 16:03:29.881544 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 20 16:03:29.892950 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 20 16:03:29.923467 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 20 16:03:29.923467 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 20 16:03:29.923467 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 20 16:03:29.962647 update_engine[1613]: I20260420 16:03:29.922067 1613 main.cc:92] Flatcar Update Engine starting Apr 20 16:03:29.963885 jq[1648]: true Apr 20 16:03:29.929963 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 20 16:03:30.058605 extend-filesystems[1592]: Resized filesystem in /dev/vda9 Apr 20 16:03:29.930420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 20 16:03:30.083603 tar[1636]: linux-amd64/LICENSE Apr 20 16:03:30.086999 tar[1636]: linux-amd64/helm Apr 20 16:03:30.203452 systemd-logind[1611]: Watching system buttons on /dev/input/event2 (Power Button) Apr 20 16:03:30.203507 systemd-logind[1611]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 20 16:03:30.208743 systemd-logind[1611]: New seat seat0. Apr 20 16:03:30.210038 systemd[1]: Started systemd-logind.service - User Login Management. Apr 20 16:03:30.246220 dbus-daemon[1589]: [system] SELinux support is enabled Apr 20 16:03:30.261766 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 20 16:03:30.361543 bash[1682]: Updated "/home/core/.ssh/authorized_keys" Apr 20 16:03:30.418445 dbus-daemon[1589]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 20 16:03:30.426083 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 20 16:03:30.871540 update_engine[1613]: I20260420 16:03:30.851517 1613 update_check_scheduler.cc:74] Next update check in 4m0s Apr 20 16:03:30.874686 systemd[1]: Started update-engine.service - Update Engine. Apr 20 16:03:30.880044 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 20 16:03:30.880262 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 20 16:03:30.880385 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 20 16:03:30.885259 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 20 16:03:30.885446 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 20 16:03:30.896084 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 20 16:03:32.169630 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 20 16:03:32.239517 locksmithd[1695]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 20 16:03:32.846003 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 20 16:03:32.882212 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 20 16:03:32.928092 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 20 16:03:32.936023 systemd[1]: Started sshd@0-1-10.0.0.48:22-10.0.0.1:43584.service - OpenSSH per-connection server daemon (10.0.0.1:43584). Apr 20 16:03:33.046573 systemd[1]: issuegen.service: Deactivated successfully. Apr 20 16:03:33.048051 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 20 16:03:33.087074 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 20 16:03:33.460968 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 20 16:03:33.679722 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 20 16:03:33.692984 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 20 16:03:33.696805 systemd[1]: Reached target getty.target - Login Prompts. Apr 20 16:03:34.422747 containerd[1642]: time="2026-04-20T16:03:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 20 16:03:34.446069 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 43584 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:34.496250 containerd[1642]: time="2026-04-20T16:03:34.492105760Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1 Apr 20 16:03:34.533479 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:34.645390 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 20 16:03:34.667471 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 20 16:03:35.245917 systemd-logind[1611]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:35.375840 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 20 16:03:35.423970 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 20 16:03:35.463639 containerd[1642]: time="2026-04-20T16:03:35.462747920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="299.147µs" Apr 20 16:03:35.463639 containerd[1642]: time="2026-04-20T16:03:35.462892559Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 20 16:03:35.471067 containerd[1642]: time="2026-04-20T16:03:35.464883053Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 20 16:03:35.471067 containerd[1642]: time="2026-04-20T16:03:35.468136563Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.471539981Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.471675372Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.471732412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.471786864Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.471797188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.472145199Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 20 16:03:35.472211 containerd[1642]: time="2026-04-20T16:03:35.472211310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 16:03:35.475810 containerd[1642]: time="2026-04-20T16:03:35.472239113Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 20 16:03:35.475810 containerd[1642]: time="2026-04-20T16:03:35.472250340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Apr 20 16:03:35.477990 containerd[1642]: time="2026-04-20T16:03:35.476732432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 20 16:03:35.477990 containerd[1642]: time="2026-04-20T16:03:35.476974327Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 20 16:03:35.484389 containerd[1642]: time="2026-04-20T16:03:35.483266682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 16:03:35.484389 containerd[1642]: time="2026-04-20T16:03:35.483899849Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 20 16:03:35.484389 containerd[1642]: time="2026-04-20T16:03:35.483919845Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 20 16:03:35.486093 containerd[1642]: time="2026-04-20T16:03:35.484080058Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 20 16:03:35.488724 containerd[1642]: time="2026-04-20T16:03:35.487359972Z" level=info msg="loading plugin" 
id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 20 16:03:35.489105 containerd[1642]: time="2026-04-20T16:03:35.488844350Z" level=info msg="metadata content store policy set" policy=shared Apr 20 16:03:35.564598 containerd[1642]: time="2026-04-20T16:03:35.560215591Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 20 16:03:35.583866 containerd[1642]: time="2026-04-20T16:03:35.581043702Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 20 16:03:35.591384 tar[1636]: linux-amd64/README.md Apr 20 16:03:35.593091 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:35.600242 containerd[1642]: time="2026-04-20T16:03:35.590427752Z" level=info msg="built-in NRI default validator is disabled" Apr 20 16:03:35.601975 containerd[1642]: time="2026-04-20T16:03:35.600485799Z" level=info msg="runtime interface created" Apr 20 16:03:35.604950 containerd[1642]: time="2026-04-20T16:03:35.604655887Z" level=info msg="created NRI interface" Apr 20 16:03:35.605634 containerd[1642]: time="2026-04-20T16:03:35.605571228Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 16:03:35.606084 containerd[1642]: time="2026-04-20T16:03:35.606058617Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 20 16:03:35.606153 containerd[1642]: time="2026-04-20T16:03:35.606140879Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 20 16:03:35.612281 containerd[1642]: time="2026-04-20T16:03:35.611883510Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 20 16:03:35.612787 containerd[1642]: 
time="2026-04-20T16:03:35.612764711Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1 Apr 20 16:03:35.619011 containerd[1642]: time="2026-04-20T16:03:35.618864777Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 20 16:03:35.619322 systemd-logind[1611]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 20 16:03:35.619640 containerd[1642]: time="2026-04-20T16:03:35.619299867Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 20 16:03:35.624884 containerd[1642]: time="2026-04-20T16:03:35.623455333Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 20 16:03:35.624884 containerd[1642]: time="2026-04-20T16:03:35.623808698Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 20 16:03:35.624884 containerd[1642]: time="2026-04-20T16:03:35.623901346Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 20 16:03:35.624884 containerd[1642]: time="2026-04-20T16:03:35.623955047Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 20 16:03:35.624884 containerd[1642]: time="2026-04-20T16:03:35.623971465Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 20 16:03:35.627492 containerd[1642]: time="2026-04-20T16:03:35.627222968Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 20 16:03:35.637870 containerd[1642]: time="2026-04-20T16:03:35.627709834Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 20 16:03:35.651141 containerd[1642]: 
time="2026-04-20T16:03:35.648684925Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 20 16:03:35.659231 containerd[1642]: time="2026-04-20T16:03:35.655141948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 20 16:03:35.659858 containerd[1642]: time="2026-04-20T16:03:35.659629716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 20 16:03:35.659927 containerd[1642]: time="2026-04-20T16:03:35.659878491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 20 16:03:35.672695 containerd[1642]: time="2026-04-20T16:03:35.665234256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 20 16:03:35.680745 containerd[1642]: time="2026-04-20T16:03:35.675732120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 20 16:03:35.690594 containerd[1642]: time="2026-04-20T16:03:35.687074185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 20 16:03:35.691064 containerd[1642]: time="2026-04-20T16:03:35.690826975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 20 16:03:35.691140 containerd[1642]: time="2026-04-20T16:03:35.691075328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1 Apr 20 16:03:35.795200 containerd[1642]: time="2026-04-20T16:03:35.774791717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 20 16:03:35.795200 containerd[1642]: time="2026-04-20T16:03:35.793219470Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 20 16:03:35.928836 containerd[1642]: time="2026-04-20T16:03:35.795434935Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 20 16:03:35.937977 containerd[1642]: time="2026-04-20T16:03:35.933469123Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 20 16:03:35.945226 containerd[1642]: time="2026-04-20T16:03:35.941743471Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 20 16:03:35.945898 containerd[1642]: time="2026-04-20T16:03:35.945105621Z" level=info msg="Start snapshots syncer" Apr 20 16:03:35.953637 containerd[1642]: time="2026-04-20T16:03:35.950779623Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 20 16:03:35.969617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 20 16:03:36.080894 containerd[1642]: time="2026-04-20T16:03:36.078714147Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":
false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 20 16:03:36.089487 containerd[1642]: time="2026-04-20T16:03:36.086498564Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 20 16:03:36.142549 containerd[1642]: time="2026-04-20T16:03:36.141106509Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 20 16:03:36.154224 containerd[1642]: time="2026-04-20T16:03:36.150745789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 20 16:03:36.154700 containerd[1642]: time="2026-04-20T16:03:36.154554839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 20 16:03:36.155835 containerd[1642]: time="2026-04-20T16:03:36.154918410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 20 16:03:36.157298 containerd[1642]: time="2026-04-20T16:03:36.154976465Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 20 16:03:36.158506 containerd[1642]: time="2026-04-20T16:03:36.157303550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 20 16:03:36.158506 
containerd[1642]: time="2026-04-20T16:03:36.157494605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 20 16:03:36.161266 containerd[1642]: time="2026-04-20T16:03:36.158972190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 20 16:03:36.163960 containerd[1642]: time="2026-04-20T16:03:36.161746736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 20 16:03:36.167438 containerd[1642]: time="2026-04-20T16:03:36.164139956Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 20 16:03:36.182580 containerd[1642]: time="2026-04-20T16:03:36.178980952Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 16:03:36.190401 containerd[1642]: time="2026-04-20T16:03:36.189982364Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 16:03:36.190401 containerd[1642]: time="2026-04-20T16:03:36.190400517Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 16:03:36.194963 containerd[1642]: time="2026-04-20T16:03:36.190528358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 16:03:36.197499 containerd[1642]: time="2026-04-20T16:03:36.197106077Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 20 16:03:36.197650 containerd[1642]: time="2026-04-20T16:03:36.197546370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 20 16:03:36.197650 containerd[1642]: time="2026-04-20T16:03:36.197578084Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 20 16:03:36.197726 containerd[1642]: time="2026-04-20T16:03:36.197688583Z" level=info msg="Connect containerd service" Apr 20 16:03:36.197857 containerd[1642]: time="2026-04-20T16:03:36.197803812Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 20 16:03:36.288644 containerd[1642]: time="2026-04-20T16:03:36.285895002Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 16:03:38.214731 systemd[1734]: Queued start job for default target default.target. Apr 20 16:03:38.228139 systemd[1734]: Created slice app.slice - User Application Slice. Apr 20 16:03:38.228724 systemd[1734]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 16:03:38.228747 systemd[1734]: Reached target machines.target - Virtual Machines and Containers. Apr 20 16:03:38.228814 systemd[1734]: Reached target paths.target - Paths. Apr 20 16:03:38.228842 systemd[1734]: Reached target timers.target - Timers. Apr 20 16:03:38.232547 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 16:03:38.234539 systemd[1734]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 16:03:38.241299 systemd[1734]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 16:03:38.319285 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 16:03:38.319724 systemd[1734]: Reached target sockets.target - Sockets. Apr 20 16:03:38.344825 systemd[1734]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 20 16:03:38.345556 systemd[1734]: Reached target basic.target - Basic System. 
Apr 20 16:03:38.345643 systemd[1734]: Reached target default.target - Main User Target. Apr 20 16:03:38.345683 systemd[1734]: Startup finished in 2.395s. Apr 20 16:03:38.346666 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 16:03:38.398472 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 16:03:38.773065 systemd[1]: Started sshd@1-4097-10.0.0.48:22-10.0.0.1:53732.service - OpenSSH per-connection server daemon (10.0.0.1:53732). Apr 20 16:03:38.823872 containerd[1642]: time="2026-04-20T16:03:38.823427385Z" level=info msg="Start subscribing containerd event" Apr 20 16:03:38.843881 containerd[1642]: time="2026-04-20T16:03:38.838820217Z" level=info msg="Start recovering state" Apr 20 16:03:38.873084 containerd[1642]: time="2026-04-20T16:03:38.870089398Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 16:03:38.912251 containerd[1642]: time="2026-04-20T16:03:38.911407643Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 16:03:38.922541 containerd[1642]: time="2026-04-20T16:03:38.920484447Z" level=info msg="Start event monitor" Apr 20 16:03:38.922541 containerd[1642]: time="2026-04-20T16:03:38.921060872Z" level=info msg="Start cni network conf syncer for default" Apr 20 16:03:38.928098 containerd[1642]: time="2026-04-20T16:03:38.921388585Z" level=info msg="Start streaming server" Apr 20 16:03:38.949663 containerd[1642]: time="2026-04-20T16:03:38.949509519Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 16:03:38.950657 containerd[1642]: time="2026-04-20T16:03:38.950062316Z" level=info msg="runtime interface starting up..." Apr 20 16:03:38.950657 containerd[1642]: time="2026-04-20T16:03:38.950109185Z" level=info msg="starting plugins..." 
Apr 20 16:03:38.950657 containerd[1642]: time="2026-04-20T16:03:38.950273136Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 16:03:38.963493 systemd[1]: Started containerd.service - containerd container runtime. Apr 20 16:03:38.965577 containerd[1642]: time="2026-04-20T16:03:38.964021408Z" level=info msg="containerd successfully booted in 4.549339s" Apr 20 16:03:39.798261 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 53732 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:39.824676 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:39.941810 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 20 16:03:39.942657 systemd-logind[1611]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:40.570306 sshd[1776]: Connection closed by 10.0.0.1 port 53732 Apr 20 16:03:40.590286 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Apr 20 16:03:40.660867 systemd[1]: sshd@1-4097-10.0.0.48:22-10.0.0.1:53732.service: Deactivated successfully. Apr 20 16:03:40.697714 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 16:03:40.711589 systemd-logind[1611]: Session 3 logged out. Waiting for processes to exit. Apr 20 16:03:40.718598 systemd[1]: Started sshd@2-4098-10.0.0.48:22-10.0.0.1:53746.service - OpenSSH per-connection server daemon (10.0.0.1:53746). Apr 20 16:03:40.734550 systemd-logind[1611]: Removed session 3. Apr 20 16:03:40.874477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:03:40.888974 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 20 16:03:40.892629 systemd[1]: Startup finished in 7.125s (kernel) + 18.837s (initrd) + 28.012s (userspace) = 53.975s. 
Apr 20 16:03:40.910026 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:03:41.271519 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 53746 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:41.305105 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:41.425444 systemd-logind[1611]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:41.523464 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 20 16:03:41.913830 sshd[1792]: Connection closed by 10.0.0.1 port 53746 Apr 20 16:03:41.917055 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Apr 20 16:03:41.985419 systemd[1]: sshd@2-4098-10.0.0.48:22-10.0.0.1:53746.service: Deactivated successfully. Apr 20 16:03:42.095589 systemd[1]: session-4.scope: Deactivated successfully. Apr 20 16:03:42.147033 systemd-logind[1611]: Session 4 logged out. Waiting for processes to exit. Apr 20 16:03:42.277651 systemd-logind[1611]: Removed session 4. Apr 20 16:03:48.810870 kubelet[1789]: E0420 16:03:48.809911 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:03:48.825855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:03:48.826026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:03:48.833073 systemd[1]: kubelet.service: Consumed 8.762s CPU time, 269.1M memory peak. Apr 20 16:03:52.096112 systemd[1]: Started sshd@3-4099-10.0.0.48:22-10.0.0.1:51238.service - OpenSSH per-connection server daemon (10.0.0.1:51238). 
Apr 20 16:03:53.013009 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 51238 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:53.021925 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:53.183061 systemd-logind[1611]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:53.257905 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 16:03:53.685551 sshd[1810]: Connection closed by 10.0.0.1 port 51238 Apr 20 16:03:53.683004 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Apr 20 16:03:53.720739 systemd[1]: sshd@3-4099-10.0.0.48:22-10.0.0.1:51238.service: Deactivated successfully. Apr 20 16:03:53.743024 systemd[1]: session-5.scope: Deactivated successfully. Apr 20 16:03:53.753460 systemd-logind[1611]: Session 5 logged out. Waiting for processes to exit. Apr 20 16:03:53.758086 systemd[1]: Started sshd@4-2-10.0.0.48:22-10.0.0.1:51246.service - OpenSSH per-connection server daemon (10.0.0.1:51246). Apr 20 16:03:53.784755 systemd-logind[1611]: Removed session 5. Apr 20 16:03:54.492263 sshd[1816]: Accepted publickey for core from 10.0.0.1 port 51246 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:54.507576 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:54.595475 systemd-logind[1611]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:54.623567 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 20 16:03:54.760699 sshd[1820]: Connection closed by 10.0.0.1 port 51246 Apr 20 16:03:54.777934 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Apr 20 16:03:54.855558 systemd[1]: sshd@4-2-10.0.0.48:22-10.0.0.1:51246.service: Deactivated successfully. Apr 20 16:03:54.892075 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 20 16:03:54.944977 systemd-logind[1611]: Session 6 logged out. Waiting for processes to exit. Apr 20 16:03:55.065860 systemd[1]: Started sshd@5-3-10.0.0.48:22-10.0.0.1:51254.service - OpenSSH per-connection server daemon (10.0.0.1:51254). Apr 20 16:03:55.116495 systemd-logind[1611]: Removed session 6. Apr 20 16:03:56.581060 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 51254 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:56.583917 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:56.608778 systemd-logind[1611]: New session '7' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:56.627428 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 20 16:03:56.702349 sshd[1830]: Connection closed by 10.0.0.1 port 51254 Apr 20 16:03:56.703408 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Apr 20 16:03:56.718420 systemd[1]: sshd@5-3-10.0.0.48:22-10.0.0.1:51254.service: Deactivated successfully. Apr 20 16:03:56.720340 systemd[1]: session-7.scope: Deactivated successfully. Apr 20 16:03:56.722615 systemd-logind[1611]: Session 7 logged out. Waiting for processes to exit. Apr 20 16:03:56.726310 systemd[1]: Started sshd@6-4100-10.0.0.48:22-10.0.0.1:35416.service - OpenSSH per-connection server daemon (10.0.0.1:35416). Apr 20 16:03:56.726895 systemd-logind[1611]: Removed session 7. Apr 20 16:03:57.223525 sshd[1836]: Accepted publickey for core from 10.0.0.1 port 35416 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:03:57.245915 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:03:57.341706 systemd-logind[1611]: New session '8' of user 'core' with class 'user' and type 'tty'. Apr 20 16:03:57.363771 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 20 16:03:57.673979 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 16:03:57.684494 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 16:03:58.962679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 16:03:59.081619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:04:02.007037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:04:02.025899 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:04:03.982957 kubelet[1869]: E0420 16:04:03.982316 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:04:04.065375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:04:04.067795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:04:04.075407 systemd[1]: kubelet.service: Consumed 3.703s CPU time, 108.8M memory peak. Apr 20 16:04:05.721959 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 20 16:04:05.781401 (dockerd)[1878]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 16:04:11.340729 dockerd[1878]: time="2026-04-20T16:04:11.340247893Z" level=info msg="Starting up" Apr 20 16:04:11.445569 dockerd[1878]: time="2026-04-20T16:04:11.445366331Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 16:04:11.713045 dockerd[1878]: time="2026-04-20T16:04:11.712744223Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 16:04:12.115714 systemd[1]: var-lib-docker-metacopy\x2dcheck3367080096-merged.mount: Deactivated successfully. Apr 20 16:04:12.579055 dockerd[1878]: time="2026-04-20T16:04:12.577119141Z" level=info msg="Loading containers: start." Apr 20 16:04:12.942729 kernel: Initializing XFRM netlink socket Apr 20 16:04:14.184951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 20 16:04:14.212591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:04:14.977753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 16:04:14.994033 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:04:15.776523 kubelet[2009]: E0420 16:04:15.776435 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:04:15.786237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:04:15.787809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:04:15.790913 systemd[1]: kubelet.service: Consumed 1.192s CPU time, 109.5M memory peak. Apr 20 16:04:15.958637 update_engine[1613]: I20260420 16:04:15.955153 1613 update_attempter.cc:509] Updating boot flags... Apr 20 16:04:16.088507 systemd-networkd[1429]: docker0: Link UP Apr 20 16:04:16.308328 dockerd[1878]: time="2026-04-20T16:04:16.307493105Z" level=info msg="Loading containers: done." 
Apr 20 16:04:16.916670 dockerd[1878]: time="2026-04-20T16:04:16.915656522Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 16:04:16.923805 dockerd[1878]: time="2026-04-20T16:04:16.923459433Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 16:04:16.925852 dockerd[1878]: time="2026-04-20T16:04:16.925777641Z" level=info msg="Initializing buildkit" Apr 20 16:04:17.040532 dockerd[1878]: time="2026-04-20T16:04:17.040424680Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 16:04:17.040760 dockerd[1878]: time="2026-04-20T16:04:17.040712802Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 16:04:17.584130 dockerd[1878]: time="2026-04-20T16:04:17.583235653Z" level=info msg="Completed buildkit initialization" Apr 20 16:04:17.897758 dockerd[1878]: time="2026-04-20T16:04:17.879010405Z" level=info msg="Daemon has completed initialization" Apr 20 16:04:17.929061 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 20 16:04:17.936671 dockerd[1878]: time="2026-04-20T16:04:17.928880215Z" level=info msg="API listen on /run/docker.sock" Apr 20 16:04:25.972065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 20 16:04:26.026224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:04:27.220887 containerd[1642]: time="2026-04-20T16:04:27.220623986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 20 16:04:27.241599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 16:04:27.328334 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:04:29.969732 kubelet[2140]: E0420 16:04:29.965821 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:04:29.997451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:04:30.053050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:04:30.073390 systemd[1]: kubelet.service: Consumed 3.477s CPU time, 110.6M memory peak. Apr 20 16:04:34.626541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156457319.mount: Deactivated successfully. Apr 20 16:04:40.197612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 16:04:40.209566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:04:41.906286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 16:04:41.957912 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:04:43.009502 kubelet[2218]: E0420 16:04:43.009431 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:04:43.034557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:04:43.034714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:04:43.049119 systemd[1]: kubelet.service: Consumed 1.581s CPU time, 109M memory peak. Apr 20 16:04:48.794722 containerd[1642]: time="2026-04-20T16:04:48.794423546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:04:48.829644 containerd[1642]: time="2026-04-20T16:04:48.797456930Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30181786" Apr 20 16:04:49.091012 containerd[1642]: time="2026-04-20T16:04:49.089816108Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:04:49.553072 containerd[1642]: time="2026-04-20T16:04:49.552692993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:04:49.593198 containerd[1642]: time="2026-04-20T16:04:49.590994288Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id 
\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 22.365003987s" Apr 20 16:04:49.595674 containerd[1642]: time="2026-04-20T16:04:49.595092085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 20 16:04:49.626291 containerd[1642]: time="2026-04-20T16:04:49.625739636Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 20 16:04:53.254882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 20 16:04:53.295594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:04:55.376849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:04:55.442901 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:04:57.891549 kubelet[2240]: E0420 16:04:57.891012 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:04:57.903457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:04:57.903576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:04:57.908748 systemd[1]: kubelet.service: Consumed 2.940s CPU time, 108.5M memory peak. 
Apr 20 16:05:02.893525 containerd[1642]: time="2026-04-20T16:05:02.893271387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:02.931494 containerd[1642]: time="2026-04-20T16:05:02.927833690Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26162658" Apr 20 16:05:02.944403 containerd[1642]: time="2026-04-20T16:05:02.943792868Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:03.200306 containerd[1642]: time="2026-04-20T16:05:03.199864355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:03.277680 containerd[1642]: time="2026-04-20T16:05:03.274036170Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 13.644115755s" Apr 20 16:05:03.279755 containerd[1642]: time="2026-04-20T16:05:03.278021026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 20 16:05:03.318265 containerd[1642]: time="2026-04-20T16:05:03.317959628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 20 16:05:07.960989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Apr 20 16:05:07.982310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:05:09.434122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:05:09.500774 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:05:09.996435 kubelet[2261]: E0420 16:05:09.991698 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:05:10.038625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:05:10.070435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:05:10.077114 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 109.9M memory peak. 
Apr 20 16:05:11.308901 containerd[1642]: time="2026-04-20T16:05:11.308387322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:11.323302 containerd[1642]: time="2026-04-20T16:05:11.321482590Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20280985" Apr 20 16:05:11.337652 containerd[1642]: time="2026-04-20T16:05:11.337389436Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:11.372716 containerd[1642]: time="2026-04-20T16:05:11.372606386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:11.434505 containerd[1642]: time="2026-04-20T16:05:11.430686377Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 8.112369867s" Apr 20 16:05:11.434505 containerd[1642]: time="2026-04-20T16:05:11.431019989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 20 16:05:11.458095 containerd[1642]: time="2026-04-20T16:05:11.457790974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 20 16:05:20.297107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Apr 20 16:05:21.010444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:05:22.757046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:05:22.803406 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:05:24.141097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099352008.mount: Deactivated successfully. Apr 20 16:05:24.345751 kubelet[2281]: E0420 16:05:24.344846 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:05:24.386092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:05:24.390062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:05:24.417415 systemd[1]: kubelet.service: Consumed 1.645s CPU time, 108.5M memory peak. 
Apr 20 16:05:28.334475 containerd[1642]: time="2026-04-20T16:05:28.325147217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:28.344493 containerd[1642]: time="2026-04-20T16:05:28.338862722Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=1, bytes read=22214939" Apr 20 16:05:28.355618 containerd[1642]: time="2026-04-20T16:05:28.353874030Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:28.408650 containerd[1642]: time="2026-04-20T16:05:28.408478014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:28.412373 containerd[1642]: time="2026-04-20T16:05:28.410822602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 16.952549866s" Apr 20 16:05:28.412373 containerd[1642]: time="2026-04-20T16:05:28.410989919Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 20 16:05:28.418139 containerd[1642]: time="2026-04-20T16:05:28.418051181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 20 16:05:31.651059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621893586.mount: Deactivated successfully. 
Apr 20 16:05:34.600780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 20 16:05:34.972036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:05:36.191913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:05:36.219476 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:05:37.010415 kubelet[2326]: E0420 16:05:37.009568 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:05:37.093659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:05:37.097466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:05:37.111537 systemd[1]: kubelet.service: Consumed 1.072s CPU time, 110.4M memory peak. 
Apr 20 16:05:41.252876 containerd[1642]: time="2026-04-20T16:05:41.252726192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:41.258228 containerd[1642]: time="2026-04-20T16:05:41.258044460Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20918154" Apr 20 16:05:41.266701 containerd[1642]: time="2026-04-20T16:05:41.266399789Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:41.334529 containerd[1642]: time="2026-04-20T16:05:41.333253576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:41.346242 containerd[1642]: time="2026-04-20T16:05:41.345532393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 12.927392686s" Apr 20 16:05:41.346242 containerd[1642]: time="2026-04-20T16:05:41.345714415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 20 16:05:41.350114 containerd[1642]: time="2026-04-20T16:05:41.350052822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 20 16:05:44.424072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245419121.mount: Deactivated successfully. 
Apr 20 16:05:44.584616 containerd[1642]: time="2026-04-20T16:05:44.583449444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:44.595292 containerd[1642]: time="2026-04-20T16:05:44.595091215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881" Apr 20 16:05:44.599458 containerd[1642]: time="2026-04-20T16:05:44.599090213Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:44.672316 containerd[1642]: time="2026-04-20T16:05:44.672039752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:05:44.682452 containerd[1642]: time="2026-04-20T16:05:44.675936414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.325653203s" Apr 20 16:05:44.682452 containerd[1642]: time="2026-04-20T16:05:44.676067604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 20 16:05:44.748879 containerd[1642]: time="2026-04-20T16:05:44.748483369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 20 16:05:47.236754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 20 16:05:47.255752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 16:05:48.260833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:05:48.295446 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:05:48.523865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898258430.mount: Deactivated successfully. Apr 20 16:05:53.061348 kubelet[2379]: E0420 16:05:53.061248 2379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:05:53.101363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:05:53.101712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:05:53.112975 systemd[1]: kubelet.service: Consumed 3.235s CPU time, 109.7M memory peak. 
Apr 20 16:06:01.661207 containerd[1642]: time="2026-04-20T16:06:01.660834282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:06:01.665228 containerd[1642]: time="2026-04-20T16:06:01.661573757Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23709256" Apr 20 16:06:01.679570 containerd[1642]: time="2026-04-20T16:06:01.679315759Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:06:01.768791 containerd[1642]: time="2026-04-20T16:06:01.767605861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:06:01.826874 containerd[1642]: time="2026-04-20T16:06:01.826595827Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 17.07784385s" Apr 20 16:06:01.827464 containerd[1642]: time="2026-04-20T16:06:01.826915534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 20 16:06:03.191903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 20 16:06:03.201597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:06:04.897679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 16:06:04.925345 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 16:06:05.418255 kubelet[2480]: E0420 16:06:05.418083 2480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 16:06:05.424068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 16:06:05.424239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 16:06:05.425069 systemd[1]: kubelet.service: Consumed 1.684s CPU time, 110.1M memory peak. Apr 20 16:06:14.741575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:06:14.741989 systemd[1]: kubelet.service: Consumed 1.684s CPU time, 110.1M memory peak. Apr 20 16:06:14.773123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:06:14.971231 systemd[1]: Reload requested from client PID 2496 ('systemctl') (unit session-8.scope)... Apr 20 16:06:14.974633 systemd[1]: Reloading... Apr 20 16:06:15.573434 zram_generator::config[2559]: No configuration found. Apr 20 16:06:15.574965 systemd-ssh-generator[2549]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 16:06:15.576806 (sd-exec-[2527]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 16:06:16.787123 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 16:06:17.915045 systemd[1]: Reloading finished in 2935 ms. 
Apr 20 16:06:18.053968 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 20 16:06:18.055386 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 20 16:06:18.057861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:06:18.058860 systemd[1]: kubelet.service: Consumed 383ms CPU time, 98.5M memory peak. Apr 20 16:06:18.073039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:06:18.949014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:06:18.967815 (kubelet)[2598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 16:06:19.709960 kubelet[2598]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 16:06:19.709960 kubelet[2598]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 16:06:19.709960 kubelet[2598]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 20 16:06:19.709960 kubelet[2598]: I0420 16:06:19.709569 2598 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 16:06:21.537974 kubelet[2598]: I0420 16:06:21.537004 2598 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 20 16:06:21.548956 kubelet[2598]: I0420 16:06:21.548687 2598 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 16:06:21.555830 kubelet[2598]: I0420 16:06:21.551928 2598 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 16:06:21.676203 kubelet[2598]: E0420 16:06:21.676088 2598 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 16:06:21.692576 kubelet[2598]: I0420 16:06:21.692321 2598 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 16:06:21.806462 kubelet[2598]: I0420 16:06:21.803225 2598 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 16:06:21.846257 kubelet[2598]: I0420 16:06:21.845804 2598 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 20 16:06:21.851138 kubelet[2598]: I0420 16:06:21.850741 2598 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 16:06:21.851845 kubelet[2598]: I0420 16:06:21.851096 2598 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 16:06:21.852244 kubelet[2598]: I0420 16:06:21.851898 2598 topology_manager.go:138] "Creating topology manager with none policy" Apr 20 16:06:21.852244 
kubelet[2598]: I0420 16:06:21.851916 2598 container_manager_linux.go:303] "Creating device plugin manager" Apr 20 16:06:21.856691 kubelet[2598]: I0420 16:06:21.855356 2598 state_mem.go:36] "Initialized new in-memory state store" Apr 20 16:06:21.883532 kubelet[2598]: I0420 16:06:21.882852 2598 kubelet.go:480] "Attempting to sync node with API server" Apr 20 16:06:21.885511 kubelet[2598]: I0420 16:06:21.884398 2598 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 16:06:21.887853 kubelet[2598]: I0420 16:06:21.887722 2598 kubelet.go:386] "Adding apiserver pod source" Apr 20 16:06:21.888212 kubelet[2598]: I0420 16:06:21.888122 2598 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 16:06:21.898703 kubelet[2598]: E0420 16:06:21.898077 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 16:06:21.898703 kubelet[2598]: E0420 16:06:21.898646 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 16:06:21.906648 kubelet[2598]: I0420 16:06:21.906557 2598 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 16:06:21.914644 kubelet[2598]: I0420 16:06:21.914514 2598 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 16:06:21.927669 kubelet[2598]: W0420 
16:06:21.925800 2598 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 20 16:06:21.971301 kubelet[2598]: I0420 16:06:21.970498 2598 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 20 16:06:21.977860 kubelet[2598]: I0420 16:06:21.973427 2598 server.go:1289] "Started kubelet" Apr 20 16:06:21.977860 kubelet[2598]: I0420 16:06:21.973683 2598 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 16:06:21.984830 kubelet[2598]: I0420 16:06:21.980146 2598 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 16:06:21.984830 kubelet[2598]: I0420 16:06:21.982648 2598 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 20 16:06:21.992895 kubelet[2598]: I0420 16:06:21.989083 2598 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 16:06:21.992895 kubelet[2598]: I0420 16:06:21.989124 2598 server.go:317] "Adding debug handlers to kubelet server" Apr 20 16:06:21.993929 kubelet[2598]: I0420 16:06:21.993859 2598 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 16:06:22.002718 kubelet[2598]: I0420 16:06:21.999343 2598 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 20 16:06:22.072704 kubelet[2598]: I0420 16:06:22.072293 2598 reconciler.go:26] "Reconciler: start to sync state" Apr 20 16:06:22.096733 kubelet[2598]: E0420 16:06:22.072015 2598 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a81c4fc77bb6e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 16:06:21.972428513 +0000 UTC m=+2.951081620,LastTimestamp:2026-04-20 16:06:21.972428513 +0000 UTC m=+2.951081620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 16:06:22.100465 kubelet[2598]: E0420 16:06:22.089409 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.116656 kubelet[2598]: I0420 16:06:22.116437 2598 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 20 16:06:22.141283 kubelet[2598]: I0420 16:06:22.137520 2598 factory.go:223] Registration of the systemd container factory successfully Apr 20 16:06:22.147915 kubelet[2598]: E0420 16:06:22.141520 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 16:06:22.147915 kubelet[2598]: E0420 16:06:22.141806 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Apr 20 16:06:22.157604 kubelet[2598]: I0420 16:06:22.142589 2598 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 16:06:22.198799 kubelet[2598]: E0420 16:06:22.197419 2598 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 16:06:22.204032 kubelet[2598]: E0420 16:06:22.201572 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.255242 kubelet[2598]: I0420 16:06:22.254971 2598 factory.go:223] Registration of the containerd container factory successfully Apr 20 16:06:22.304001 kubelet[2598]: E0420 16:06:22.303686 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.373529 kubelet[2598]: E0420 16:06:22.373041 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Apr 20 16:06:22.409261 kubelet[2598]: E0420 16:06:22.406946 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.409261 kubelet[2598]: I0420 16:06:22.407120 2598 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 20 16:06:22.409261 kubelet[2598]: I0420 16:06:22.407214 2598 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 20 16:06:22.409261 kubelet[2598]: I0420 16:06:22.407300 2598 state_mem.go:36] "Initialized new in-memory state store" Apr 20 16:06:22.425964 kubelet[2598]: I0420 16:06:22.425893 2598 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 20 16:06:22.428990 kubelet[2598]: I0420 16:06:22.428846 2598 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 20 16:06:22.429786 kubelet[2598]: I0420 16:06:22.429724 2598 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 20 16:06:22.429854 kubelet[2598]: I0420 16:06:22.429788 2598 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 16:06:22.429854 kubelet[2598]: I0420 16:06:22.429822 2598 kubelet.go:2436] "Starting kubelet main sync loop" Apr 20 16:06:22.430149 kubelet[2598]: E0420 16:06:22.430002 2598 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 16:06:22.442714 kubelet[2598]: E0420 16:06:22.442445 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 16:06:22.492595 kubelet[2598]: I0420 16:06:22.492379 2598 policy_none.go:49] "None policy: Start" Apr 20 16:06:22.494499 kubelet[2598]: I0420 16:06:22.493385 2598 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 20 16:06:22.494499 kubelet[2598]: I0420 16:06:22.493464 2598 state_mem.go:35] "Initializing new in-memory state store" Apr 20 16:06:22.507917 kubelet[2598]: E0420 16:06:22.507854 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.533702 kubelet[2598]: E0420 16:06:22.532981 2598 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 16:06:22.614611 kubelet[2598]: E0420 16:06:22.610691 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.635810 systemd[1]: Created 
slice kubepods.slice - libcontainer container kubepods.slice. Apr 20 16:06:22.717587 kubelet[2598]: E0420 16:06:22.716659 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.739738 kubelet[2598]: E0420 16:06:22.736443 2598 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 16:06:22.778473 kubelet[2598]: E0420 16:06:22.778404 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Apr 20 16:06:22.778755 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 20 16:06:22.803330 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 20 16:06:22.829973 kubelet[2598]: E0420 16:06:22.826072 2598 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 16:06:22.861825 kubelet[2598]: E0420 16:06:22.859863 2598 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 16:06:22.861825 kubelet[2598]: I0420 16:06:22.860505 2598 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 20 16:06:22.861825 kubelet[2598]: I0420 16:06:22.860535 2598 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 16:06:22.861825 kubelet[2598]: I0420 16:06:22.861428 2598 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 20 16:06:22.880577 kubelet[2598]: E0420 16:06:22.870768 2598 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 20 16:06:22.912794 kubelet[2598]: E0420 16:06:22.911673 2598 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 16:06:23.031225 kubelet[2598]: I0420 16:06:23.030408 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:23.036095 kubelet[2598]: E0420 16:06:23.035474 2598 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 20 16:06:23.223219 kubelet[2598]: E0420 16:06:23.222889 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 16:06:23.290633 kubelet[2598]: I0420 16:06:23.286575 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:23.304439 kubelet[2598]: E0420 16:06:23.300985 2598 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 20 16:06:23.339884 kubelet[2598]: E0420 16:06:23.337383 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 16:06:23.347615 kubelet[2598]: I0420 16:06:23.347126 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f014144595f5820c6cd8da57b59f2e50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f014144595f5820c6cd8da57b59f2e50\") " pod="kube-system/kube-apiserver-localhost" Apr 20 16:06:23.348197 kubelet[2598]: I0420 16:06:23.348077 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f014144595f5820c6cd8da57b59f2e50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f014144595f5820c6cd8da57b59f2e50\") " pod="kube-system/kube-apiserver-localhost" Apr 20 16:06:23.348251 kubelet[2598]: I0420 16:06:23.348126 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f014144595f5820c6cd8da57b59f2e50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f014144595f5820c6cd8da57b59f2e50\") " pod="kube-system/kube-apiserver-localhost" Apr 20 16:06:23.348347 kubelet[2598]: I0420 16:06:23.348267 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:23.348470 kubelet[2598]: I0420 16:06:23.348381 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:23.348580 kubelet[2598]: I0420 16:06:23.348519 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:23.348779 kubelet[2598]: I0420 16:06:23.348625 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:23.348779 kubelet[2598]: I0420 16:06:23.348652 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:23.348779 kubelet[2598]: I0420 16:06:23.348743 2598 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 20 16:06:23.415291 systemd[1]: Created slice kubepods-burstable-podf014144595f5820c6cd8da57b59f2e50.slice - libcontainer container kubepods-burstable-podf014144595f5820c6cd8da57b59f2e50.slice. Apr 20 16:06:23.455134 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. 
Apr 20 16:06:23.456242 kubelet[2598]: E0420 16:06:23.455070 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:23.500767 kubelet[2598]: E0420 16:06:23.498757 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 16:06:23.513220 kubelet[2598]: E0420 16:06:23.512845 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:23.515980 kubelet[2598]: E0420 16:06:23.515954 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:23.539049 containerd[1642]: time="2026-04-20T16:06:23.533754419Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"e9ca41790ae21be9f4cbd451ade0acec\" namespace:\"kube-system\"" Apr 20 16:06:23.605804 kubelet[2598]: E0420 16:06:23.605651 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Apr 20 16:06:23.622502 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 20 16:06:23.690867 kubelet[2598]: E0420 16:06:23.690723 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:23.692031 kubelet[2598]: E0420 16:06:23.691898 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:23.711260 containerd[1642]: time="2026-04-20T16:06:23.710776914Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"33fee6ba1581201eda98a989140db110\" namespace:\"kube-system\"" Apr 20 16:06:23.713388 kubelet[2598]: I0420 16:06:23.713325 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:23.714686 kubelet[2598]: E0420 16:06:23.714622 2598 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 20 16:06:23.868470 kubelet[2598]: E0420 16:06:23.861606 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:23.868470 kubelet[2598]: E0420 16:06:23.865981 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 16:06:23.868470 kubelet[2598]: E0420 16:06:23.866349 2598 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 16:06:23.871240 containerd[1642]: time="2026-04-20T16:06:23.871067894Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"f014144595f5820c6cd8da57b59f2e50\" namespace:\"kube-system\"" Apr 20 16:06:24.935999 kubelet[2598]: I0420 16:06:24.931494 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:25.014306 kubelet[2598]: E0420 16:06:25.012659 2598 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 20 16:06:25.226463 kubelet[2598]: E0420 16:06:25.224514 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="3.2s" Apr 20 16:06:25.627224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506954749.mount: Deactivated successfully. 
Apr 20 16:06:25.679319 containerd[1642]: time="2026-04-20T16:06:25.679013027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 16:06:25.722499 containerd[1642]: time="2026-04-20T16:06:25.721828116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 16:06:25.734241 kubelet[2598]: E0420 16:06:25.732719 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 16:06:25.734241 kubelet[2598]: E0420 16:06:25.734068 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 16:06:25.747818 containerd[1642]: time="2026-04-20T16:06:25.746484779Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 16:06:25.760607 containerd[1642]: time="2026-04-20T16:06:25.759629975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 16:06:25.768754 containerd[1642]: time="2026-04-20T16:06:25.766970013Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 
16:06:25.805132 containerd[1642]: time="2026-04-20T16:06:25.804251158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 16:06:25.816301 containerd[1642]: time="2026-04-20T16:06:25.816125739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 16:06:25.835967 containerd[1642]: time="2026-04-20T16:06:25.830991140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 16:06:25.852592 containerd[1642]: time="2026-04-20T16:06:25.852412772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.11720877s" Apr 20 16:06:25.855344 containerd[1642]: time="2026-04-20T16:06:25.855242913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.205111747s" Apr 20 16:06:25.856459 containerd[1642]: time="2026-04-20T16:06:25.856357359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", 
size \"320448\" in 1.948893735s" Apr 20 16:06:26.135244 containerd[1642]: time="2026-04-20T16:06:26.134829301Z" level=info msg="connecting to shim a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb" address="unix:///run/containerd/s/954064e80f6724cd9b3557ef5e8f14a291671be0b664e8d783df675780d63b2d" namespace=k8s.io protocol=ttrpc version=3 Apr 20 16:06:26.177680 containerd[1642]: time="2026-04-20T16:06:26.169358535Z" level=info msg="connecting to shim bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910" address="unix:///run/containerd/s/fea015718c3b780d3f475ca07cc94aee7b32240562ed54fe7ca53764a3c05bf2" namespace=k8s.io protocol=ttrpc version=3 Apr 20 16:06:26.204340 containerd[1642]: time="2026-04-20T16:06:26.193404286Z" level=info msg="connecting to shim de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12" address="unix:///run/containerd/s/8ba738c5d72f5be7f5b351660ebfd030f4e2519d4471e5f3aff01bd5bd4eda0f" namespace=k8s.io protocol=ttrpc version=3 Apr 20 16:06:26.365143 systemd[1]: Started cri-containerd-de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12.scope - libcontainer container de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12. Apr 20 16:06:26.411129 kubelet[2598]: E0420 16:06:26.408889 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 16:06:26.440977 systemd[1]: Started cri-containerd-a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb.scope - libcontainer container a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb. 
Apr 20 16:06:26.482341 systemd[1]: Started cri-containerd-bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910.scope - libcontainer container bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910. Apr 20 16:06:26.598893 kubelet[2598]: E0420 16:06:26.583513 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 16:06:27.142380 kubelet[2598]: I0420 16:06:27.138260 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:27.164314 kubelet[2598]: E0420 16:06:27.164062 2598 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 20 16:06:27.411153 kubelet[2598]: E0420 16:06:27.405321 2598 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a81c4fc77bb6e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 16:06:21.972428513 +0000 UTC m=+2.951081620,LastTimestamp:2026-04-20 16:06:21.972428513 +0000 UTC m=+2.951081620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 16:06:27.777445 containerd[1642]: time="2026-04-20T16:06:27.773265262Z" level=info msg="RunPodSandbox for 
name:\"kube-controller-manager-localhost\" uid:\"e9ca41790ae21be9f4cbd451ade0acec\" namespace:\"kube-system\" returns sandbox id \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\"" Apr 20 16:06:27.777445 containerd[1642]: time="2026-04-20T16:06:27.774831152Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"33fee6ba1581201eda98a989140db110\" namespace:\"kube-system\" returns sandbox id \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\"" Apr 20 16:06:27.778780 kubelet[2598]: E0420 16:06:27.776956 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:27.778780 kubelet[2598]: E0420 16:06:27.777002 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:27.844396 containerd[1642]: time="2026-04-20T16:06:27.835270313Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for container name:\"kube-scheduler\"" Apr 20 16:06:27.860630 containerd[1642]: time="2026-04-20T16:06:27.854951761Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"f014144595f5820c6cd8da57b59f2e50\" namespace:\"kube-system\" returns sandbox id \"de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12\"" Apr 20 16:06:27.879291 containerd[1642]: time="2026-04-20T16:06:27.877557138Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for container name:\"kube-controller-manager\"" Apr 20 16:06:27.879773 kubelet[2598]: E0420 16:06:27.879408 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 
16:06:27.912653 containerd[1642]: time="2026-04-20T16:06:27.912478970Z" level=info msg="Container 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:06:27.938765 containerd[1642]: time="2026-04-20T16:06:27.938693806Z" level=info msg="CreateContainer within sandbox \"de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12\" for container name:\"kube-apiserver\"" Apr 20 16:06:27.963303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851160141.mount: Deactivated successfully. Apr 20 16:06:27.965764 containerd[1642]: time="2026-04-20T16:06:27.963648978Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for name:\"kube-scheduler\" returns container id \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\"" Apr 20 16:06:27.972034 containerd[1642]: time="2026-04-20T16:06:27.971996522Z" level=info msg="StartContainer for \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\"" Apr 20 16:06:27.974962 containerd[1642]: time="2026-04-20T16:06:27.972782029Z" level=info msg="Container d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:06:27.983251 containerd[1642]: time="2026-04-20T16:06:27.983137111Z" level=info msg="connecting to shim 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273" address="unix:///run/containerd/s/fea015718c3b780d3f475ca07cc94aee7b32240562ed54fe7ca53764a3c05bf2" protocol=ttrpc version=3 Apr 20 16:06:28.075346 containerd[1642]: time="2026-04-20T16:06:28.074063307Z" level=info msg="Container bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:06:28.111674 containerd[1642]: time="2026-04-20T16:06:28.109091011Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" 
for name:\"kube-controller-manager\" returns container id \"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\"" Apr 20 16:06:28.111674 containerd[1642]: time="2026-04-20T16:06:28.111061984Z" level=info msg="StartContainer for \"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\"" Apr 20 16:06:28.114695 kubelet[2598]: E0420 16:06:28.112841 2598 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 16:06:28.121769 containerd[1642]: time="2026-04-20T16:06:28.118488458Z" level=info msg="connecting to shim d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1" address="unix:///run/containerd/s/954064e80f6724cd9b3557ef5e8f14a291671be0b664e8d783df675780d63b2d" protocol=ttrpc version=3 Apr 20 16:06:28.191240 systemd[1]: Started cri-containerd-456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273.scope - libcontainer container 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273. 
Apr 20 16:06:28.446739 kubelet[2598]: E0420 16:06:28.443836 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="6.4s" Apr 20 16:06:28.452629 containerd[1642]: time="2026-04-20T16:06:28.452547744Z" level=info msg="CreateContainer within sandbox \"de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12\" for name:\"kube-apiserver\" returns container id \"bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092\"" Apr 20 16:06:28.464471 containerd[1642]: time="2026-04-20T16:06:28.464388832Z" level=info msg="StartContainer for \"bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092\"" Apr 20 16:06:28.482837 systemd[1]: Started cri-containerd-d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1.scope - libcontainer container d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1. Apr 20 16:06:28.500816 containerd[1642]: time="2026-04-20T16:06:28.499627972Z" level=info msg="connecting to shim bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092" address="unix:///run/containerd/s/8ba738c5d72f5be7f5b351660ebfd030f4e2519d4471e5f3aff01bd5bd4eda0f" protocol=ttrpc version=3 Apr 20 16:06:28.976967 systemd[1]: Started cri-containerd-bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092.scope - libcontainer container bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092. 
Apr 20 16:06:29.794620 containerd[1642]: time="2026-04-20T16:06:29.784019463Z" level=info msg="StartContainer for \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" returns successfully" Apr 20 16:06:30.241686 kubelet[2598]: E0420 16:06:30.241148 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 16:06:30.339865 containerd[1642]: time="2026-04-20T16:06:30.336505018Z" level=info msg="StartContainer for \"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" returns successfully" Apr 20 16:06:30.502407 kubelet[2598]: I0420 16:06:30.500096 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:30.509075 kubelet[2598]: E0420 16:06:30.502416 2598 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Apr 20 16:06:30.609799 kubelet[2598]: E0420 16:06:30.609461 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 16:06:30.646282 containerd[1642]: time="2026-04-20T16:06:30.645901766Z" level=info msg="StartContainer for \"bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092\" returns successfully" Apr 20 16:06:31.271209 kubelet[2598]: E0420 16:06:31.266959 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
Apr 20 16:06:31.387610 kubelet[2598]: E0420 16:06:31.387325 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:31.606389 kubelet[2598]: E0420 16:06:31.584785 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 16:06:31.967716 kubelet[2598]: E0420 16:06:31.967557 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:31.970485 kubelet[2598]: E0420 16:06:31.969311 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:32.562625 kubelet[2598]: E0420 16:06:32.562559 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:32.567510 kubelet[2598]: E0420 16:06:32.567486 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:32.959466 kubelet[2598]: E0420 16:06:32.959222 2598 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 16:06:33.742221 kubelet[2598]: E0420 16:06:33.741041 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:33.750989 kubelet[2598]: E0420 
16:06:33.747315 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:33.800438 kubelet[2598]: E0420 16:06:33.797232 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:33.810202 kubelet[2598]: E0420 16:06:33.806311 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:33.832290 kubelet[2598]: E0420 16:06:33.830909 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:33.832290 kubelet[2598]: E0420 16:06:33.831395 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:35.109929 kubelet[2598]: E0420 16:06:35.109809 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:35.112703 kubelet[2598]: E0420 16:06:35.109818 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:35.112703 kubelet[2598]: E0420 16:06:35.110471 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:35.123282 kubelet[2598]: E0420 16:06:35.123055 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 20 16:06:37.051523 kubelet[2598]: I0420 16:06:37.051312 2598 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:06:37.433345 kubelet[2598]: E0420 16:06:37.432755 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:37.433345 kubelet[2598]: E0420 16:06:37.432977 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:38.993812 kubelet[2598]: E0420 16:06:38.990096 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:39.062968 kubelet[2598]: E0420 16:06:39.052612 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:39.856245 kubelet[2598]: E0420 16:06:39.856010 2598 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 16:06:39.880800 kubelet[2598]: E0420 16:06:39.880688 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:42.153726 kubelet[2598]: E0420 16:06:42.153451 2598 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 16:06:42.982655 kubelet[2598]: E0420 16:06:42.982393 2598 eviction_manager.go:292] "Eviction manager: failed 
to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 16:06:44.856230 kubelet[2598]: E0420 16:06:44.854488 2598 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 16:06:45.028714 kubelet[2598]: I0420 16:06:45.028629 2598 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 16:06:45.032548 kubelet[2598]: E0420 16:06:45.031218 2598 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 20 16:06:45.069816 kubelet[2598]: E0420 16:06:45.065110 2598 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a81c4fc77bb6e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 16:06:21.972428513 +0000 UTC m=+2.951081620,LastTimestamp:2026-04-20 16:06:21.972428513 +0000 UTC m=+2.951081620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 16:06:45.117476 kubelet[2598]: I0420 16:06:45.117118 2598 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:45.148096 kubelet[2598]: I0420 16:06:45.147891 2598 apiserver.go:52] "Watching apiserver" Apr 20 16:06:45.224471 kubelet[2598]: I0420 16:06:45.224344 2598 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 20 16:06:45.233894 kubelet[2598]: E0420 16:06:45.227078 2598 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:45.258139 kubelet[2598]: I0420 16:06:45.255553 2598 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 16:06:45.294223 kubelet[2598]: E0420 16:06:45.291468 2598 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 20 16:06:45.294223 kubelet[2598]: I0420 16:06:45.291583 2598 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 16:06:45.317990 kubelet[2598]: E0420 16:06:45.311977 2598 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 20 16:06:49.073776 kubelet[2598]: I0420 16:06:49.071715 2598 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 16:06:49.239017 kubelet[2598]: E0420 16:06:49.237987 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:49.992553 kubelet[2598]: I0420 16:06:49.991126 2598 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 16:06:50.057946 kubelet[2598]: E0420 16:06:50.057832 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:50.093095 kubelet[2598]: E0420 16:06:50.092928 2598 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:06:50.465226 kubelet[2598]: I0420 16:06:50.464747 2598 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.464562728 podStartE2EDuration="1.464562728s" podCreationTimestamp="2026-04-20 16:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 16:06:50.23950429 +0000 UTC m=+31.218157394" watchObservedRunningTime="2026-04-20 16:06:50.464562728 +0000 UTC m=+31.443215817" Apr 20 16:06:50.575135 kubelet[2598]: I0420 16:06:50.570474 2598 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.560906276 podStartE2EDuration="560.906276ms" podCreationTimestamp="2026-04-20 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 16:06:50.465091322 +0000 UTC m=+31.443744410" watchObservedRunningTime="2026-04-20 16:06:50.560906276 +0000 UTC m=+31.539559378" Apr 20 16:06:51.183985 kubelet[2598]: E0420 16:06:51.183623 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:07.476191 kubelet[2598]: I0420 16:07:07.474329 2598 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 16:07:07.546675 kubelet[2598]: E0420 16:07:07.543113 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:07.916830 kubelet[2598]: I0420 16:07:07.915015 2598 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.91479341 podStartE2EDuration="914.79341ms" podCreationTimestamp="2026-04-20 16:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 16:07:07.88850365 +0000 UTC m=+48.867156746" watchObservedRunningTime="2026-04-20 16:07:07.91479341 +0000 UTC m=+48.893446510" Apr 20 16:07:08.334240 kubelet[2598]: E0420 16:07:08.323107 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:08.992510 kubelet[2598]: E0420 16:07:08.992344 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:12.510979 systemd[1]: Reload requested from client PID 2896 ('systemctl') (unit session-8.scope)... Apr 20 16:07:12.511030 systemd[1]: Reloading... Apr 20 16:07:14.560552 zram_generator::config[2950]: No configuration found. Apr 20 16:07:15.484656 systemd-ssh-generator[2945]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 16:07:16.157032 (sd-exec-strv)[2927]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 16:07:16.652291 kubelet[2598]: E0420 16:07:16.652242 2598 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.18s" Apr 20 16:07:17.654860 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 16:07:18.288108 systemd[1]: Reloading finished in 5747 ms. Apr 20 16:07:18.592935 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 16:07:18.889536 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 16:07:18.939611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:07:18.942115 systemd[1]: kubelet.service: Consumed 22.915s CPU time, 135M memory peak. Apr 20 16:07:19.248399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 16:07:20.652388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 16:07:20.668697 (kubelet)[2995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 16:07:21.848966 kubelet[2995]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 16:07:21.848966 kubelet[2995]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 16:07:21.848966 kubelet[2995]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 20 16:07:21.857725 kubelet[2995]: I0420 16:07:21.849649 2995 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 16:07:22.144153 kubelet[2995]: I0420 16:07:22.120212 2995 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 20 16:07:22.144153 kubelet[2995]: I0420 16:07:22.120354 2995 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 16:07:22.144153 kubelet[2995]: I0420 16:07:22.141104 2995 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 16:07:22.190686 kubelet[2995]: I0420 16:07:22.187832 2995 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 16:07:22.294636 kubelet[2995]: I0420 16:07:22.294216 2995 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 16:07:22.903851 kubelet[2995]: I0420 16:07:22.903227 2995 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 16:07:23.795349 kubelet[2995]: I0420 16:07:23.794876 2995 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 20 16:07:23.877205 kubelet[2995]: I0420 16:07:23.865794 2995 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 16:07:23.982329 kubelet[2995]: I0420 16:07:23.903938 2995 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 16:07:23.982329 kubelet[2995]: I0420 16:07:23.982768 2995 topology_manager.go:138] "Creating topology manager with none policy" Apr 20 16:07:23.982329 
kubelet[2995]: I0420 16:07:23.982900 2995 container_manager_linux.go:303] "Creating device plugin manager" Apr 20 16:07:23.982329 kubelet[2995]: I0420 16:07:23.989279 2995 state_mem.go:36] "Initialized new in-memory state store" Apr 20 16:07:24.092045 kubelet[2995]: I0420 16:07:23.997025 2995 kubelet.go:480] "Attempting to sync node with API server" Apr 20 16:07:24.092045 kubelet[2995]: I0420 16:07:23.997116 2995 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 16:07:24.092045 kubelet[2995]: I0420 16:07:23.997544 2995 kubelet.go:386] "Adding apiserver pod source" Apr 20 16:07:24.092045 kubelet[2995]: I0420 16:07:23.998388 2995 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 16:07:24.209997 kubelet[2995]: I0420 16:07:24.209920 2995 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 16:07:24.277999 kubelet[2995]: I0420 16:07:24.266313 2995 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 16:07:24.403659 kubelet[2995]: I0420 16:07:24.402858 2995 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 20 16:07:24.403659 kubelet[2995]: I0420 16:07:24.403248 2995 server.go:1289] "Started kubelet" Apr 20 16:07:24.565441 kubelet[2995]: I0420 16:07:24.437742 2995 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 16:07:24.565441 kubelet[2995]: I0420 16:07:24.463116 2995 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 16:07:24.609867 kubelet[2995]: I0420 16:07:24.609802 2995 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 16:07:24.611615 kubelet[2995]: I0420 16:07:24.611521 2995 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Apr 20 16:07:24.671552 kubelet[2995]: I0420 16:07:24.611527 2995 server.go:317] "Adding debug handlers to kubelet server" Apr 20 16:07:24.812394 kubelet[2995]: I0420 16:07:24.809270 2995 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 20 16:07:24.820118 kubelet[2995]: I0420 16:07:24.819993 2995 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 16:07:24.823531 kubelet[2995]: I0420 16:07:24.823507 2995 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 20 16:07:24.824382 kubelet[2995]: I0420 16:07:24.824362 2995 reconciler.go:26] "Reconciler: start to sync state" Apr 20 16:07:24.833819 kubelet[2995]: I0420 16:07:24.833640 2995 factory.go:223] Registration of the systemd container factory successfully Apr 20 16:07:24.838717 kubelet[2995]: I0420 16:07:24.834028 2995 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 16:07:24.966340 kubelet[2995]: I0420 16:07:24.965823 2995 factory.go:223] Registration of the containerd container factory successfully Apr 20 16:07:24.978796 kubelet[2995]: E0420 16:07:24.975254 2995 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 16:07:25.054205 kubelet[2995]: I0420 16:07:25.054080 2995 apiserver.go:52] "Watching apiserver" Apr 20 16:07:25.253439 kubelet[2995]: I0420 16:07:25.245113 2995 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 20 16:07:25.338082 kubelet[2995]: I0420 16:07:25.337982 2995 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 20 16:07:25.353042 kubelet[2995]: I0420 16:07:25.352918 2995 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 20 16:07:25.364104 kubelet[2995]: I0420 16:07:25.363940 2995 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 16:07:25.364816 kubelet[2995]: I0420 16:07:25.364795 2995 kubelet.go:2436] "Starting kubelet main sync loop" Apr 20 16:07:25.365834 kubelet[2995]: E0420 16:07:25.365254 2995 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 16:07:25.469072 kubelet[2995]: E0420 16:07:25.468899 2995 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 16:07:25.683272 kubelet[2995]: E0420 16:07:25.675847 2995 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 16:07:26.090948 kubelet[2995]: E0420 16:07:26.088148 2995 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 16:07:27.060526 kubelet[2995]: E0420 16:07:27.060349 2995 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 16:07:28.692269 kubelet[2995]: E0420 16:07:28.687843 2995 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 16:07:28.783031 kubelet[2995]: I0420 16:07:28.781059 2995 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 20 16:07:28.783031 kubelet[2995]: I0420 16:07:28.781105 2995 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 20 16:07:28.783031 
kubelet[2995]: I0420 16:07:28.781296 2995 state_mem.go:36] "Initialized new in-memory state store" Apr 20 16:07:28.806016 kubelet[2995]: I0420 16:07:28.783784 2995 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 20 16:07:28.806016 kubelet[2995]: I0420 16:07:28.786427 2995 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 20 16:07:28.806016 kubelet[2995]: I0420 16:07:28.787773 2995 policy_none.go:49] "None policy: Start" Apr 20 16:07:28.806016 kubelet[2995]: I0420 16:07:28.787889 2995 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 20 16:07:28.806016 kubelet[2995]: I0420 16:07:28.791345 2995 state_mem.go:35] "Initializing new in-memory state store" Apr 20 16:07:28.806016 kubelet[2995]: I0420 16:07:28.801277 2995 state_mem.go:75] "Updated machine memory state" Apr 20 16:07:28.968035 kubelet[2995]: E0420 16:07:28.967117 2995 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 16:07:28.981538 kubelet[2995]: I0420 16:07:28.969974 2995 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 20 16:07:28.981538 kubelet[2995]: I0420 16:07:28.970008 2995 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 16:07:28.981538 kubelet[2995]: I0420 16:07:28.971615 2995 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 20 16:07:28.999813 kubelet[2995]: E0420 16:07:28.996786 2995 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 20 16:07:29.567312 kubelet[2995]: I0420 16:07:29.566592 2995 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 16:07:30.068484 kubelet[2995]: I0420 16:07:30.068303 2995 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 20 16:07:30.087740 kubelet[2995]: I0420 16:07:30.075842 2995 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 16:07:30.932632 update_engine[1613]: I20260420 16:07:30.929328 1613 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 20 16:07:30.932632 update_engine[1613]: I20260420 16:07:30.929414 1613 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 20 16:07:30.932632 update_engine[1613]: I20260420 16:07:30.930023 1613 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 20 16:07:30.934097 update_engine[1613]: I20260420 16:07:30.932852 1613 omaha_request_params.cc:62] Current group set to alpha Apr 20 16:07:30.945943 update_engine[1613]: I20260420 16:07:30.939420 1613 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 20 16:07:30.945943 update_engine[1613]: I20260420 16:07:30.939461 1613 update_attempter.cc:643] Scheduling an action processor start. 
Apr 20 16:07:30.945943 update_engine[1613]: I20260420 16:07:30.939488 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 16:07:31.003434 update_engine[1613]: I20260420 16:07:30.981771 1613 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 20 16:07:31.003434 update_engine[1613]: I20260420 16:07:30.998522 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 16:07:31.003434 update_engine[1613]: I20260420 16:07:31.000761 1613 omaha_request_action.cc:272] Request: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: Apr 20 16:07:31.003434 update_engine[1613]: I20260420 16:07:31.000802 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 16:07:31.003434 update_engine[1613]: I20260420 16:07:31.010926 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 16:07:31.314085 update_engine[1613]: I20260420 16:07:31.020715 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 16:07:31.314085 update_engine[1613]: E20260420 16:07:31.150810 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 16:07:31.314085 update_engine[1613]: I20260420 16:07:31.158293 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 20 16:07:31.360492 locksmithd[1695]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 20 16:07:32.272875 kubelet[2995]: I0420 16:07:32.272606 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:32.287598 kubelet[2995]: I0420 16:07:32.287540 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:32.290380 kubelet[2995]: I0420 16:07:32.288116 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:32.290380 kubelet[2995]: I0420 16:07:32.288305 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:32.290380 kubelet[2995]: I0420 16:07:32.288332 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:32.290772 kubelet[2995]: I0420 16:07:32.290751 2995 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 20 16:07:32.316146 kubelet[2995]: I0420 16:07:32.316096 2995 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 16:07:32.336968 kubelet[2995]: I0420 16:07:32.332580 2995 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:32.406731 kubelet[2995]: I0420 16:07:32.404385 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 20 16:07:32.407503 kubelet[2995]: I0420 16:07:32.407424 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f014144595f5820c6cd8da57b59f2e50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f014144595f5820c6cd8da57b59f2e50\") " pod="kube-system/kube-apiserver-localhost" Apr 20 16:07:32.407602 kubelet[2995]: I0420 16:07:32.407505 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f014144595f5820c6cd8da57b59f2e50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f014144595f5820c6cd8da57b59f2e50\") " pod="kube-system/kube-apiserver-localhost" Apr 20 16:07:32.407602 kubelet[2995]: I0420 16:07:32.407533 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f014144595f5820c6cd8da57b59f2e50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f014144595f5820c6cd8da57b59f2e50\") " pod="kube-system/kube-apiserver-localhost" Apr 20 16:07:32.408618 containerd[1642]: time="2026-04-20T16:07:32.408250062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 20 16:07:32.580443 kubelet[2995]: I0420 16:07:32.578101 2995 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 16:07:32.679349 kubelet[2995]: E0420 16:07:32.678738 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:32.927339 kubelet[2995]: E0420 16:07:32.927291 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:33.407927 kubelet[2995]: E0420 16:07:33.403681 2995 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 20 16:07:33.407927 kubelet[2995]: E0420 16:07:33.405978 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:33.971349 kubelet[2995]: E0420 16:07:33.966387 2995 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:34.333346 kubelet[2995]: E0420 16:07:34.307738 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:34.405661 kubelet[2995]: E0420 16:07:34.404129 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:35.062730 kubelet[2995]: E0420 16:07:35.024485 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:35.095662 kubelet[2995]: E0420 16:07:35.066411 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:35.165649 kubelet[2995]: I0420 16:07:35.164222 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d-xtables-lock\") pod \"kube-proxy-6k4c6\" (UID: \"c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d\") " pod="kube-system/kube-proxy-6k4c6" Apr 20 16:07:35.165649 kubelet[2995]: I0420 16:07:35.164428 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d-lib-modules\") pod \"kube-proxy-6k4c6\" (UID: \"c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d\") " pod="kube-system/kube-proxy-6k4c6" Apr 20 16:07:35.165649 kubelet[2995]: I0420 16:07:35.164487 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d-kube-proxy\") pod \"kube-proxy-6k4c6\" (UID: \"c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d\") " pod="kube-system/kube-proxy-6k4c6" Apr 20 16:07:35.165649 kubelet[2995]: I0420 16:07:35.164584 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwz29\" (UniqueName: \"kubernetes.io/projected/c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d-kube-api-access-kwz29\") pod \"kube-proxy-6k4c6\" (UID: \"c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d\") " pod="kube-system/kube-proxy-6k4c6" Apr 20 16:07:35.381565 systemd[1]: Created slice kubepods-besteffort-podc9b61d8b_3b29_4c0b_8fa5_fb8760c2904d.slice - libcontainer container kubepods-besteffort-podc9b61d8b_3b29_4c0b_8fa5_fb8760c2904d.slice. Apr 20 16:07:35.948287 kubelet[2995]: E0420 16:07:35.948144 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:35.955586 containerd[1642]: time="2026-04-20T16:07:35.954916611Z" level=info msg="RunPodSandbox for name:\"kube-proxy-6k4c6\" uid:\"c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d\" namespace:\"kube-system\"" Apr 20 16:07:37.138310 containerd[1642]: time="2026-04-20T16:07:37.135965406Z" level=info msg="connecting to shim 1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8" address="unix:///run/containerd/s/6a84121f7040f079e5433657c5d4f0976b8388eb0d3cbb574acfd42c5bb2c5fb" namespace=k8s.io protocol=ttrpc version=3 Apr 20 16:07:38.573621 kubelet[2995]: E0420 16:07:38.573250 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.192s" Apr 20 16:07:38.582321 kubelet[2995]: E0420 16:07:38.576836 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 20 16:07:38.973418 systemd[1]: Started cri-containerd-1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8.scope - libcontainer container 1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8. Apr 20 16:07:40.955765 update_engine[1613]: I20260420 16:07:40.927084 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 16:07:41.160482 update_engine[1613]: I20260420 16:07:40.959404 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 16:07:41.160482 update_engine[1613]: I20260420 16:07:40.990944 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 16:07:41.160482 update_engine[1613]: E20260420 16:07:41.103013 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 16:07:41.203902 update_engine[1613]: I20260420 16:07:41.130100 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 20 16:07:41.402016 containerd[1642]: time="2026-04-20T16:07:41.377525733Z" level=error msg="get state for 1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8" error="context deadline exceeded" Apr 20 16:07:41.586042 containerd[1642]: time="2026-04-20T16:07:41.534018154Z" level=warning msg="unknown status" status=0 Apr 20 16:07:43.587365 kubelet[2995]: E0420 16:07:43.576121 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.15s" Apr 20 16:07:44.548946 containerd[1642]: time="2026-04-20T16:07:44.543070433Z" level=error msg="get state for 1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8" error="context deadline exceeded" Apr 20 16:07:44.548946 containerd[1642]: time="2026-04-20T16:07:44.543608702Z" level=warning msg="unknown status" status=0 Apr 20 16:07:47.921730 containerd[1642]: time="2026-04-20T16:07:47.919673280Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 16:07:48.092431 
containerd[1642]: time="2026-04-20T16:07:47.943144918Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 16:07:48.788127 kubelet[2995]: E0420 16:07:48.773057 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.097s" Apr 20 16:07:49.066931 kubelet[2995]: I0420 16:07:49.055062 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab5fcd41-b107-4ccd-91a7-6726f21e7a70-run\") pod \"kube-flannel-ds-cvh2r\" (UID: \"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\") " pod="kube-flannel/kube-flannel-ds-cvh2r" Apr 20 16:07:49.066931 kubelet[2995]: I0420 16:07:49.063946 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmp8\" (UniqueName: \"kubernetes.io/projected/ab5fcd41-b107-4ccd-91a7-6726f21e7a70-kube-api-access-rnmp8\") pod \"kube-flannel-ds-cvh2r\" (UID: \"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\") " pod="kube-flannel/kube-flannel-ds-cvh2r" Apr 20 16:07:49.066931 kubelet[2995]: I0420 16:07:49.064324 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ab5fcd41-b107-4ccd-91a7-6726f21e7a70-cni\") pod \"kube-flannel-ds-cvh2r\" (UID: \"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\") " pod="kube-flannel/kube-flannel-ds-cvh2r" Apr 20 16:07:49.066931 kubelet[2995]: I0420 16:07:49.064342 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ab5fcd41-b107-4ccd-91a7-6726f21e7a70-cni-plugin\") pod \"kube-flannel-ds-cvh2r\" (UID: \"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\") " pod="kube-flannel/kube-flannel-ds-cvh2r" Apr 20 16:07:49.066931 kubelet[2995]: I0420 16:07:49.064356 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ab5fcd41-b107-4ccd-91a7-6726f21e7a70-flannel-cfg\") pod \"kube-flannel-ds-cvh2r\" (UID: \"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\") " pod="kube-flannel/kube-flannel-ds-cvh2r" Apr 20 16:07:49.109457 kubelet[2995]: I0420 16:07:49.064534 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab5fcd41-b107-4ccd-91a7-6726f21e7a70-xtables-lock\") pod \"kube-flannel-ds-cvh2r\" (UID: \"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\") " pod="kube-flannel/kube-flannel-ds-cvh2r" Apr 20 16:07:49.726436 systemd[1]: Created slice kubepods-burstable-podab5fcd41_b107_4ccd_91a7_6726f21e7a70.slice - libcontainer container kubepods-burstable-podab5fcd41_b107_4ccd_91a7_6726f21e7a70.slice. Apr 20 16:07:49.880142 sudo[1841]: pam_unix(sudo:session): session closed for user root Apr 20 16:07:49.988111 sshd[1840]: Connection closed by 10.0.0.1 port 35416 Apr 20 16:07:49.997854 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Apr 20 16:07:50.493671 kubelet[2995]: E0420 16:07:50.492727 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:50.668736 systemd[1]: sshd@6-4100-10.0.0.48:22-10.0.0.1:35416.service: Deactivated successfully. Apr 20 16:07:50.977657 update_engine[1613]: I20260420 16:07:50.966090 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 16:07:51.018405 update_engine[1613]: I20260420 16:07:50.986964 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 16:07:51.007974 systemd[1]: session-8.scope: Deactivated successfully. Apr 20 16:07:51.018920 systemd[1]: session-8.scope: Consumed 36.739s CPU time, 211M memory peak. 
Apr 20 16:07:51.044150 update_engine[1613]: I20260420 16:07:51.023793 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 16:07:51.077820 update_engine[1613]: E20260420 16:07:51.050794 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 16:07:51.077820 update_engine[1613]: I20260420 16:07:51.051100 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 20 16:07:51.092805 systemd-logind[1611]: Session 8 logged out. Waiting for processes to exit. Apr 20 16:07:51.208393 systemd-logind[1611]: Removed session 8. Apr 20 16:07:52.313130 kubelet[2995]: E0420 16:07:52.286906 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:52.472553 containerd[1642]: time="2026-04-20T16:07:52.383110732Z" level=info msg="RunPodSandbox for name:\"kube-proxy-6k4c6\" uid:\"c9b61d8b-3b29-4c0b-8fa5-fb8760c2904d\" namespace:\"kube-system\" returns sandbox id \"1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8\"" Apr 20 16:07:52.662923 kubelet[2995]: E0420 16:07:52.654786 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:52.856977 containerd[1642]: time="2026-04-20T16:07:52.855032600Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-cvh2r\" uid:\"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\" namespace:\"kube-flannel\"" Apr 20 16:07:53.343564 kubelet[2995]: E0420 16:07:53.295832 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:53.537698 kubelet[2995]: E0420 16:07:53.536924 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="2.094s" Apr 20 16:07:55.669405 kubelet[2995]: E0420 16:07:55.595837 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.942s" Apr 20 16:07:57.042038 containerd[1642]: time="2026-04-20T16:07:57.040454677Z" level=info msg="CreateContainer within sandbox \"1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8\" for container name:\"kube-proxy\"" Apr 20 16:07:59.597077 kubelet[2995]: E0420 16:07:59.596979 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:59.681314 kubelet[2995]: E0420 16:07:59.597115 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:07:59.681314 kubelet[2995]: E0420 16:07:59.673145 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.814s" Apr 20 16:07:59.839766 containerd[1642]: time="2026-04-20T16:07:59.791498897Z" level=info msg="connecting to shim 0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613" address="unix:///run/containerd/s/0ed820c0dcc55393a306a831ea5ff34ff6397fb5c5c6190f1f8692e983757cf0" namespace=k8s.io protocol=ttrpc version=3 Apr 20 16:08:00.965411 update_engine[1613]: I20260420 16:08:00.961696 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 16:08:00.965411 update_engine[1613]: I20260420 16:08:00.964072 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 16:08:00.965411 update_engine[1613]: I20260420 16:08:00.965054 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 16:08:01.033920 update_engine[1613]: E20260420 16:08:00.979902 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 16:08:01.033920 update_engine[1613]: I20260420 16:08:00.993054 1613 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 16:08:01.033920 update_engine[1613]: I20260420 16:08:01.000614 1613 omaha_request_action.cc:617] Omaha request response: Apr 20 16:08:01.037580 update_engine[1613]: E20260420 16:08:01.031893 1613 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 20 16:08:01.047754 update_engine[1613]: I20260420 16:08:01.036993 1613 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 20 16:08:01.047754 update_engine[1613]: I20260420 16:08:01.040218 1613 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 16:08:01.047754 update_engine[1613]: I20260420 16:08:01.040266 1613 update_attempter.cc:306] Processing Done. Apr 20 16:08:01.047754 update_engine[1613]: E20260420 16:08:01.040357 1613 update_attempter.cc:619] Update failed. Apr 20 16:08:01.047754 update_engine[1613]: I20260420 16:08:01.040383 1613 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 20 16:08:01.047754 update_engine[1613]: I20260420 16:08:01.040388 1613 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 20 16:08:01.047754 update_engine[1613]: I20260420 16:08:01.040393 1613 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 20 16:08:01.233059 update_engine[1613]: I20260420 16:08:01.053215 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 16:08:01.233059 update_engine[1613]: I20260420 16:08:01.053551 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 16:08:01.233059 update_engine[1613]: I20260420 16:08:01.053557 1613 omaha_request_action.cc:272] Request: Apr 20 16:08:01.233059 update_engine[1613]: Apr 20 16:08:01.233059 update_engine[1613]: Apr 20 16:08:01.233059 update_engine[1613]: Apr 20 16:08:01.233059 update_engine[1613]: Apr 20 16:08:01.233059 update_engine[1613]: Apr 20 16:08:01.233059 update_engine[1613]: Apr 20 16:08:01.233059 update_engine[1613]: I20260420 16:08:01.053565 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 16:08:01.233059 update_engine[1613]: I20260420 16:08:01.053637 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 16:08:01.233059 update_engine[1613]: I20260420 16:08:01.093628 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 16:08:01.233059 update_engine[1613]: E20260420 16:08:01.179020 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 16:08:01.262946 locksmithd[1695]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.200001 1613 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.283529 1613 omaha_request_action.cc:617] Omaha request response: Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.283563 1613 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.283571 1613 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.283576 1613 update_attempter.cc:306] Processing Done. Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.283586 1613 update_attempter.cc:310] Error event sent. Apr 20 16:08:01.283749 update_engine[1613]: I20260420 16:08:01.283717 1613 update_check_scheduler.cc:74] Next update check in 41m47s Apr 20 16:08:01.338780 locksmithd[1695]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 20 16:08:01.344421 kubelet[2995]: E0420 16:08:01.344085 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.669s" Apr 20 16:08:01.652377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784458215.mount: Deactivated successfully. Apr 20 16:08:01.664632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1858644563.mount: Deactivated successfully. 
Apr 20 16:08:01.686629 containerd[1642]: time="2026-04-20T16:08:01.684659336Z" level=info msg="Container cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:08:02.748632 systemd[1]: Started cri-containerd-0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613.scope - libcontainer container 0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613. Apr 20 16:08:13.773792 systemd[1]: cri-containerd-d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1.scope: Deactivated successfully. Apr 20 16:08:13.879563 systemd[1]: cri-containerd-d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1.scope: Consumed 24.291s CPU time, 47.5M memory peak. Apr 20 16:08:15.006923 containerd[1642]: time="2026-04-20T16:08:14.999890193Z" level=info msg="received container exit event container_id:\"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" id:\"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" pid:2815 exit_status:1 exited_at:{seconds:1776701294 nanos:594509536}" Apr 20 16:08:16.531728 containerd[1642]: time="2026-04-20T16:08:16.528565410Z" level=error msg="get state for 0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613" error="context deadline exceeded" Apr 20 16:08:16.692757 containerd[1642]: time="2026-04-20T16:08:16.691500990Z" level=warning msg="unknown status" status=0 Apr 20 16:08:16.890140 containerd[1642]: time="2026-04-20T16:08:16.879055481Z" level=info msg="CreateContainer within sandbox \"1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8\" for name:\"kube-proxy\" returns container id \"cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6\"" Apr 20 16:08:16.951238 kubelet[2995]: E0420 16:08:16.950954 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.817s" Apr 20 16:08:17.292796 containerd[1642]: 
time="2026-04-20T16:08:17.291869354Z" level=info msg="StartContainer for \"cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6\"" Apr 20 16:08:20.232831 containerd[1642]: time="2026-04-20T16:08:20.186635316Z" level=error msg="get state for 0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613" error="context deadline exceeded" Apr 20 16:08:20.281816 containerd[1642]: time="2026-04-20T16:08:20.233120650Z" level=warning msg="unknown status" status=0 Apr 20 16:08:20.949423 containerd[1642]: time="2026-04-20T16:08:20.948314843Z" level=info msg="connecting to shim cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6" address="unix:///run/containerd/s/6a84121f7040f079e5433657c5d4f0976b8388eb0d3cbb574acfd42c5bb2c5fb" protocol=ttrpc version=3 Apr 20 16:08:24.587294 containerd[1642]: time="2026-04-20T16:08:24.586327756Z" level=error msg="get state for 0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613" error="context deadline exceeded" Apr 20 16:08:24.964675 containerd[1642]: time="2026-04-20T16:08:24.640089126Z" level=warning msg="unknown status" status=0 Apr 20 16:08:25.174071 containerd[1642]: time="2026-04-20T16:08:25.167978591Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1 Apr 20 16:08:25.245456 containerd[1642]: time="2026-04-20T16:08:25.175893923Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 16:08:25.245456 containerd[1642]: time="2026-04-20T16:08:25.226875463Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 16:08:25.245456 containerd[1642]: time="2026-04-20T16:08:25.234525209Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 20 16:08:25.375658 kubelet[2995]: E0420 16:08:25.331729 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.369s" Apr 20 16:08:25.537639 
containerd[1642]: time="2026-04-20T16:08:25.509554410Z" level=error msg="failed to handle container TaskExit event container_id:\"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" id:\"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" pid:2815 exit_status:1 exited_at:{seconds:1776701294 nanos:594509536}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:08:25.859661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1-rootfs.mount: Deactivated successfully. Apr 20 16:08:26.084363 containerd[1642]: time="2026-04-20T16:08:25.899771259Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 20 16:08:27.260853 containerd[1642]: time="2026-04-20T16:08:27.084083889Z" level=info msg="TaskExit event container_id:\"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" id:\"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" pid:2815 exit_status:1 exited_at:{seconds:1776701294 nanos:594509536}" Apr 20 16:08:28.599151 kubelet[2995]: E0420 16:08:28.599035 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.148s" Apr 20 16:08:28.708695 containerd[1642]: time="2026-04-20T16:08:28.708027772Z" level=error msg="get state for 0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613" error="context deadline exceeded" Apr 20 16:08:28.738814 containerd[1642]: time="2026-04-20T16:08:28.738074410Z" level=warning msg="unknown status" status=0 Apr 20 16:08:28.762789 containerd[1642]: time="2026-04-20T16:08:28.723628027Z" level=error msg="ttrpc: received message on inactive stream" stream=17 Apr 20 16:08:29.059999 containerd[1642]: time="2026-04-20T16:08:29.058662262Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-cvh2r\" uid:\"ab5fcd41-b107-4ccd-91a7-6726f21e7a70\" namespace:\"kube-flannel\" returns 
sandbox id \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\"" Apr 20 16:08:29.172781 systemd[1]: Started cri-containerd-cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6.scope - libcontainer container cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6. Apr 20 16:08:30.061262 kubelet[2995]: E0420 16:08:30.060905 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:08:31.053741 containerd[1642]: time="2026-04-20T16:08:31.052463379Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 20 16:08:32.001367 kubelet[2995]: E0420 16:08:32.000976 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.591s" Apr 20 16:08:33.248745 kubelet[2995]: E0420 16:08:33.247902 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.205s" Apr 20 16:08:34.534667 kubelet[2995]: E0420 16:08:34.532140 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.088s" Apr 20 16:08:34.918705 containerd[1642]: time="2026-04-20T16:08:34.881113872Z" level=error msg="get state for cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6" error="context deadline exceeded" Apr 20 16:08:34.918705 containerd[1642]: time="2026-04-20T16:08:34.894073136Z" level=warning msg="unknown status" status=0 Apr 20 16:08:37.233579 kubelet[2995]: E0420 16:08:37.232141 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.866s" Apr 20 16:08:38.070595 containerd[1642]: time="2026-04-20T16:08:37.970974416Z" level=error msg="get state for cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6" error="context deadline exceeded" Apr 20 
16:08:38.188749 containerd[1642]: time="2026-04-20T16:08:38.076104291Z" level=warning msg="unknown status" status=0 Apr 20 16:08:38.993280 systemd[1734]: Created slice background.slice - User Background Tasks Slice. Apr 20 16:08:39.163013 systemd[1734]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 20 16:08:39.991282 systemd[1734]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Apr 20 16:08:41.569821 containerd[1642]: time="2026-04-20T16:08:41.532819997Z" level=error msg="get state for cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6" error="context deadline exceeded" Apr 20 16:08:41.682116 containerd[1642]: time="2026-04-20T16:08:41.662708376Z" level=warning msg="unknown status" status=0 Apr 20 16:08:41.682116 containerd[1642]: time="2026-04-20T16:08:41.662908572Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 16:08:41.682116 containerd[1642]: time="2026-04-20T16:08:41.667068058Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 16:08:41.753811 containerd[1642]: time="2026-04-20T16:08:41.736369946Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 20 16:08:42.244864 kubelet[2995]: E0420 16:08:42.244832 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.878s" Apr 20 16:08:43.669572 kubelet[2995]: E0420 16:08:43.668024 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:08:43.875986 kubelet[2995]: I0420 16:08:43.861467 2995 scope.go:117] "RemoveContainer" containerID="d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1" Apr 20 16:08:43.958953 kubelet[2995]: E0420 16:08:43.955989 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:08:44.380659 containerd[1642]: time="2026-04-20T16:08:44.359335421Z" level=error msg="get state for cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6" error="context deadline exceeded" Apr 20 16:08:44.536753 containerd[1642]: time="2026-04-20T16:08:44.373024735Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Apr 20 16:08:44.636745 containerd[1642]: time="2026-04-20T16:08:44.459907305Z" level=warning msg="unknown status" status=0 Apr 20 16:08:45.950979 kubelet[2995]: E0420 16:08:45.934051 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.411s" Apr 20 16:08:46.559418 containerd[1642]: time="2026-04-20T16:08:46.550940265Z" level=info msg="StartContainer for \"cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6\" returns successfully" Apr 20 16:08:47.864779 containerd[1642]: time="2026-04-20T16:08:47.863698650Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for container name:\"kube-controller-manager\" attempt:1" Apr 20 16:08:48.953328 kubelet[2995]: E0420 16:08:48.953049 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.866s" Apr 20 16:08:49.967367 containerd[1642]: time="2026-04-20T16:08:49.908005955Z" level=info msg="Container 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:08:49.973114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580072847.mount: Deactivated successfully. 
Apr 20 16:08:50.201826 kubelet[2995]: E0420 16:08:50.201325 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:08:51.966704 kubelet[2995]: E0420 16:08:51.965891 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:08:52.339289 containerd[1642]: time="2026-04-20T16:08:52.330008635Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for name:\"kube-controller-manager\" attempt:1 returns container id \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\"" Apr 20 16:08:53.543004 containerd[1642]: time="2026-04-20T16:08:53.539943140Z" level=info msg="StartContainer for \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\"" Apr 20 16:08:54.863082 containerd[1642]: time="2026-04-20T16:08:54.852053086Z" level=info msg="connecting to shim 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7" address="unix:///run/containerd/s/954064e80f6724cd9b3557ef5e8f14a291671be0b664e8d783df675780d63b2d" protocol=ttrpc version=3 Apr 20 16:08:55.205287 kubelet[2995]: E0420 16:08:55.205095 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.818s" Apr 20 16:08:56.332838 systemd[1]: Started cri-containerd-508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7.scope - libcontainer container 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7. 
Apr 20 16:08:59.299779 kubelet[2995]: E0420 16:08:59.297516 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.908s" Apr 20 16:08:59.783713 containerd[1642]: time="2026-04-20T16:08:59.780874306Z" level=error msg="get state for 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7" error="context deadline exceeded" Apr 20 16:08:59.799965 containerd[1642]: time="2026-04-20T16:08:59.786895723Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 16:08:59.799965 containerd[1642]: time="2026-04-20T16:08:59.786979462Z" level=warning msg="unknown status" status=0 Apr 20 16:09:03.657260 kubelet[2995]: E0420 16:09:03.656346 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.28s" Apr 20 16:09:03.696399 containerd[1642]: time="2026-04-20T16:09:03.664113941Z" level=info msg="StartContainer for \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" returns successfully" Apr 20 16:09:06.062188 kubelet[2995]: E0420 16:09:06.062063 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.405s" Apr 20 16:09:06.952439 kubelet[2995]: E0420 16:09:06.947519 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:09:09.704833 kubelet[2995]: I0420 16:09:09.702092 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6k4c6" podStartSLOduration=95.698995861 podStartE2EDuration="1m35.698995861s" podCreationTimestamp="2026-04-20 16:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 16:08:51.764951327 +0000 UTC m=+91.027096742" watchObservedRunningTime="2026-04-20 
16:09:09.698995861 +0000 UTC m=+108.961141281" Apr 20 16:09:10.237980 kubelet[2995]: E0420 16:09:10.237533 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.695s" Apr 20 16:09:10.307607 kubelet[2995]: E0420 16:09:10.306889 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:09:11.642785 kubelet[2995]: E0420 16:09:11.639711 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:09:12.666289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618310284.mount: Deactivated successfully. Apr 20 16:09:15.666953 containerd[1642]: time="2026-04-20T16:09:15.666562246Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:09:15.677461 containerd[1642]: time="2026-04-20T16:09:15.668368097Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4850109" Apr 20 16:09:16.495645 kubelet[2995]: E0420 16:09:16.489080 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s" Apr 20 16:09:16.792650 containerd[1642]: time="2026-04-20T16:09:16.769140869Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:09:18.538389 containerd[1642]: time="2026-04-20T16:09:18.533880449Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 20 16:09:19.455035 kubelet[2995]: E0420 16:09:19.452699 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.041s" Apr 20 16:09:20.970647 containerd[1642]: time="2026-04-20T16:09:20.846386302Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 49.706002546s" Apr 20 16:09:21.146119 containerd[1642]: time="2026-04-20T16:09:21.089868575Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 20 16:09:25.438045 kubelet[2995]: E0420 16:09:25.398115 2995 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 20 16:09:25.734480 kubelet[2995]: E0420 16:09:25.731347 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.277s" Apr 20 16:09:25.914682 kubelet[2995]: E0420 16:09:25.914346 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:09:26.299979 containerd[1642]: time="2026-04-20T16:09:26.299649821Z" level=info msg="CreateContainer within sandbox \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\" for container name:\"install-cni-plugin\"" Apr 20 16:09:27.958225 containerd[1642]: time="2026-04-20T16:09:27.957428372Z" level=info msg="Container 422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:09:28.089482 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791336693.mount: Deactivated successfully. Apr 20 16:09:29.405948 kubelet[2995]: E0420 16:09:29.402885 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:09:29.591882 kubelet[2995]: E0420 16:09:29.591085 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.036s" Apr 20 16:09:30.581930 containerd[1642]: time="2026-04-20T16:09:30.576936786Z" level=info msg="CreateContainer within sandbox \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\" for name:\"install-cni-plugin\" returns container id \"422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75\"" Apr 20 16:09:31.030263 containerd[1642]: time="2026-04-20T16:09:31.029683660Z" level=info msg="StartContainer for \"422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75\"" Apr 20 16:09:31.416124 containerd[1642]: time="2026-04-20T16:09:31.402409504Z" level=info msg="connecting to shim 422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75" address="unix:///run/containerd/s/0ed820c0dcc55393a306a831ea5ff34ff6397fb5c5c6190f1f8692e983757cf0" protocol=ttrpc version=3 Apr 20 16:09:31.628553 kubelet[2995]: E0420 16:09:31.628392 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.801s" Apr 20 16:09:31.708376 kubelet[2995]: E0420 16:09:31.708295 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:09:32.708119 systemd[1]: Started cri-containerd-422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75.scope - libcontainer container 422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75. 
Apr 20 16:09:35.251793 kubelet[2995]: E0420 16:09:35.251687 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.795s" Apr 20 16:09:35.281358 kubelet[2995]: E0420 16:09:35.270715 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:09:36.470768 kubelet[2995]: E0420 16:09:36.469342 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.022s" Apr 20 16:09:39.396747 systemd[1]: cri-containerd-422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75.scope: Deactivated successfully. Apr 20 16:09:39.514877 systemd[1]: cri-containerd-422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75.scope: Consumed 1.021s CPU time, 4M memory peak, 412K written to disk. Apr 20 16:09:40.151091 containerd[1642]: time="2026-04-20T16:09:40.150497646Z" level=info msg="received container exit event container_id:\"422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75\" id:\"422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75\" pid:3430 exited_at:{seconds:1776701379 nanos:981137822}" Apr 20 16:09:40.380731 kubelet[2995]: E0420 16:09:40.377444 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.963s" Apr 20 16:09:40.744042 kubelet[2995]: E0420 16:09:40.740852 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:09:41.018370 containerd[1642]: time="2026-04-20T16:09:40.996073439Z" level=info msg="StartContainer for \"422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75\" returns successfully" Apr 20 16:09:41.381912 kubelet[2995]: E0420 16:09:41.380019 2995 
kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.001s" Apr 20 16:09:42.282056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75-rootfs.mount: Deactivated successfully. Apr 20 16:09:43.659502 kubelet[2995]: E0420 16:09:43.659154 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:09:44.420987 containerd[1642]: time="2026-04-20T16:09:44.410661969Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 20 16:09:47.071730 kubelet[2995]: E0420 16:09:47.020902 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:09:48.116024 kubelet[2995]: E0420 16:09:48.051128 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.652s" Apr 20 16:09:53.336718 kubelet[2995]: E0420 16:09:53.335454 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.031s" Apr 20 16:09:53.460347 kubelet[2995]: E0420 16:09:53.326067 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:03.765079 kubelet[2995]: E0420 16:10:03.641674 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:04.738063 kubelet[2995]: E0420 16:10:04.737953 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.402s" 
Apr 20 16:10:06.924799 kubelet[2995]: E0420 16:10:06.922031 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.133s" Apr 20 16:10:07.081476 kubelet[2995]: E0420 16:10:07.081080 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:10:07.375676 kubelet[2995]: E0420 16:10:07.374438 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:10:09.167829 kubelet[2995]: E0420 16:10:09.167203 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:10.559368 kubelet[2995]: E0420 16:10:10.550730 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.116s" Apr 20 16:10:14.600285 kubelet[2995]: E0420 16:10:14.598035 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.194s" Apr 20 16:10:14.600285 kubelet[2995]: E0420 16:10:14.598125 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:19.002142 kubelet[2995]: E0420 16:10:18.998445 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.623s" Apr 20 16:10:19.765994 kubelet[2995]: E0420 16:10:19.761290 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:25.184662 kubelet[2995]: 
E0420 16:10:25.172989 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:25.953081 kubelet[2995]: E0420 16:10:25.952912 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.499s" Apr 20 16:10:28.471126 kubelet[2995]: E0420 16:10:28.466243 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:10:30.425821 kubelet[2995]: E0420 16:10:30.404545 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:36.235937 kubelet[2995]: E0420 16:10:36.228893 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:36.717338 kubelet[2995]: E0420 16:10:36.691962 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.272s" Apr 20 16:10:39.610678 kubelet[2995]: E0420 16:10:39.610469 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.135s" Apr 20 16:10:41.582990 kubelet[2995]: E0420 16:10:41.582135 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:43.422423 kubelet[2995]: E0420 16:10:43.422267 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 
16:10:46.620416 kubelet[2995]: E0420 16:10:46.619010 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:51.893981 kubelet[2995]: E0420 16:10:51.879154 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:54.771442 kubelet[2995]: E0420 16:10:54.768080 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s" Apr 20 16:10:57.831757 systemd[1]: Started sshd@7-4101-10.0.0.48:22-10.0.0.1:48212.service - OpenSSH per-connection server daemon (10.0.0.1:48212). Apr 20 16:10:59.062402 kubelet[2995]: E0420 16:10:58.988204 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:10:59.187367 kubelet[2995]: E0420 16:10:59.180334 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.713s" Apr 20 16:11:02.032150 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 48212 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:11:02.843610 sshd-session[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:11:06.769032 systemd-logind[1611]: New session '9' of user 'core' with class 'user' and type 'tty'. Apr 20 16:11:07.500875 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 20 16:11:16.862122 systemd[1]: cri-containerd-508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7.scope: Deactivated successfully. 
Apr 20 16:11:16.990700 systemd[1]: cri-containerd-508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7.scope: Consumed 47.925s CPU time, 47.1M memory peak. Apr 20 16:11:19.764963 kubelet[2995]: I0420 16:11:19.695190 2995 request.go:752] "Waited before sending request" delay="1.030158004s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://10.0.0.48:6443/api/v1/namespaces/kube-system/events" Apr 20 16:11:21.228514 kubelet[2995]: E0420 16:11:20.046925 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:11:21.479119 systemd[1]: cri-containerd-456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273.scope: Deactivated successfully. Apr 20 16:11:21.569886 systemd[1]: cri-containerd-456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273.scope: Consumed 38.891s CPU time, 24M memory peak. Apr 20 16:11:23.140552 kubelet[2995]: E0420 16:11:23.137583 2995 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 16:11:25.321913 containerd[1642]: time="2026-04-20T16:11:25.292776462Z" level=info msg="received container exit event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488}" Apr 20 16:11:27.255831 containerd[1642]: time="2026-04-20T16:11:26.582427132Z" level=info msg="received container exit event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 
nanos:525710017}" Apr 20 16:11:28.547011 containerd[1642]: time="2026-04-20T16:11:27.893704794Z" level=info msg="container event discarded" container=a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb type=CONTAINER_CREATED_EVENT Apr 20 16:11:28.698088 containerd[1642]: time="2026-04-20T16:11:28.690633771Z" level=info msg="container event discarded" container=a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb type=CONTAINER_STARTED_EVENT Apr 20 16:11:29.072988 containerd[1642]: time="2026-04-20T16:11:28.916259465Z" level=info msg="container event discarded" container=bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910 type=CONTAINER_CREATED_EVENT Apr 20 16:11:29.231831 containerd[1642]: time="2026-04-20T16:11:29.228785582Z" level=info msg="container event discarded" container=bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910 type=CONTAINER_STARTED_EVENT Apr 20 16:11:29.316790 containerd[1642]: time="2026-04-20T16:11:29.300669511Z" level=info msg="container event discarded" container=de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12 type=CONTAINER_CREATED_EVENT Apr 20 16:11:32.031012 containerd[1642]: time="2026-04-20T16:11:31.361633649Z" level=info msg="container event discarded" container=de0a6fed9e83721940ec0876e7564d929596bb760b8ee3d26516d2cfbcb58f12 type=CONTAINER_STARTED_EVENT Apr 20 16:11:33.531122 containerd[1642]: time="2026-04-20T16:11:32.597656123Z" level=info msg="container event discarded" container=456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273 type=CONTAINER_CREATED_EVENT Apr 20 16:11:34.356755 kubelet[2995]: E0420 16:11:33.453150 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:11:35.347543 containerd[1642]: time="2026-04-20T16:11:34.072882642Z" level=info msg="container event discarded" 
container=d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1 type=CONTAINER_CREATED_EVENT Apr 20 16:11:36.801647 containerd[1642]: time="2026-04-20T16:11:35.778490776Z" level=info msg="container event discarded" container=bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092 type=CONTAINER_CREATED_EVENT Apr 20 16:11:38.656308 containerd[1642]: time="2026-04-20T16:11:37.071097052Z" level=info msg="container event discarded" container=456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273 type=CONTAINER_STARTED_EVENT Apr 20 16:11:38.860698 containerd[1642]: time="2026-04-20T16:11:38.656314395Z" level=info msg="container event discarded" container=d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1 type=CONTAINER_STARTED_EVENT Apr 20 16:11:38.860698 containerd[1642]: time="2026-04-20T16:11:38.656694670Z" level=info msg="container event discarded" container=bf9fac9b6bd039a6f8a1b255771a448d8f558c263c99d03e5d00573b2a43b092 type=CONTAINER_STARTED_EVENT Apr 20 16:11:38.860698 containerd[1642]: time="2026-04-20T16:11:38.655588645Z" level=error msg="get state for a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb" error="context deadline exceeded" Apr 20 16:11:38.860698 containerd[1642]: time="2026-04-20T16:11:38.656819585Z" level=warning msg="unknown status" status=0 Apr 20 16:11:39.682971 containerd[1642]: time="2026-04-20T16:11:39.419001630Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 20 16:11:43.284656 containerd[1642]: time="2026-04-20T16:11:43.175504069Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 20 16:11:43.630819 containerd[1642]: time="2026-04-20T16:11:43.407812636Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 20 16:11:43.630819 containerd[1642]: time="2026-04-20T16:11:43.472695008Z" level=error msg="failed to handle container TaskExit event 
container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488}" error="failed to stop container: context deadline exceeded" Apr 20 16:11:45.888905 kubelet[2995]: E0420 16:11:45.518687 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:11:46.496115 containerd[1642]: time="2026-04-20T16:11:46.253889725Z" level=info msg="TaskExit event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488}" Apr 20 16:11:48.025537 containerd[1642]: time="2026-04-20T16:11:48.021820155Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 20 16:11:48.234769 containerd[1642]: time="2026-04-20T16:11:48.234512412Z" level=error msg="failed to handle container TaskExit event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 nanos:525710017}" error="failed to stop container: context deadline exceeded" Apr 20 16:11:48.584741 containerd[1642]: time="2026-04-20T16:11:48.578922209Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 20 16:11:49.592820 kubelet[2995]: E0420 16:11:49.592771 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="50.196s" Apr 20 16:11:51.925440 kubelet[2995]: E0420 16:11:51.922416 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 20 16:11:52.499581 sshd[3487]: Connection closed by 10.0.0.1 port 48212 Apr 20 16:11:52.592327 sshd-session[3480]: pam_unix(sshd:session): session closed for user core Apr 20 16:11:53.542571 systemd[1]: sshd@7-4101-10.0.0.48:22-10.0.0.1:48212.service: Deactivated successfully. Apr 20 16:11:53.681739 systemd[1]: sshd@7-4101-10.0.0.48:22-10.0.0.1:48212.service: Consumed 1.089s CPU time, 5.2M memory peak. Apr 20 16:11:53.776807 kubelet[2995]: E0420 16:11:53.060124 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:11:54.000500 systemd[1]: session-9.scope: Deactivated successfully. Apr 20 16:11:54.067627 systemd[1]: session-9.scope: Consumed 27.297s CPU time, 18.1M memory peak. Apr 20 16:11:54.384707 systemd-logind[1611]: Session 9 logged out. Waiting for processes to exit. Apr 20 16:11:54.578077 kubelet[2995]: E0420 16:11:54.497366 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:11:54.752915 systemd-logind[1611]: Removed session 9. 
Apr 20 16:11:56.760013 containerd[1642]: time="2026-04-20T16:11:56.239230844Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 20 16:11:56.953324 containerd[1642]: time="2026-04-20T16:11:56.171619331Z" level=error msg="Failed to handle backOff event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488} for 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:11:56.954398 containerd[1642]: time="2026-04-20T16:11:56.954256753Z" level=info msg="TaskExit event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 nanos:525710017}" Apr 20 16:11:57.107623 containerd[1642]: time="2026-04-20T16:11:56.857353898Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 20 16:11:57.468935 kubelet[2995]: E0420 16:11:57.461496 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:11:59.134551 systemd[1]: Started sshd@8-4102-10.0.0.48:22-10.0.0.1:50700.service - OpenSSH per-connection server daemon (10.0.0.1:50700). 
Apr 20 16:12:07.631639 containerd[1642]: time="2026-04-20T16:12:07.595879686Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 20 16:12:08.094287 containerd[1642]: time="2026-04-20T16:12:07.898797026Z" level=error msg="ttrpc: received message on inactive stream" stream=67 Apr 20 16:12:08.252799 containerd[1642]: time="2026-04-20T16:12:08.081065451Z" level=error msg="Failed to handle backOff event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 nanos:525710017} for 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:12:08.361055 containerd[1642]: time="2026-04-20T16:12:08.270054692Z" level=info msg="TaskExit event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488}" Apr 20 16:12:08.638879 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 50700 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:12:08.882082 kubelet[2995]: E0420 16:12:08.784522 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:08.941798 sshd-session[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:12:09.671438 kubelet[2995]: E0420 16:12:09.671371 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.55s" Apr 20 16:12:10.693784 systemd-logind[1611]: New session '10' of user 'core' with class 'user' and type 'tty'. 
Apr 20 16:12:11.644510 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 20 16:12:12.672463 kubelet[2995]: E0420 16:12:12.671100 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:12:18.161494 kubelet[2995]: E0420 16:12:18.075067 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:19.298919 containerd[1642]: time="2026-04-20T16:12:19.183797682Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 20 16:12:20.396917 containerd[1642]: time="2026-04-20T16:12:20.098923012Z" level=error msg="Failed to handle backOff event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488} for 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:12:20.497601 containerd[1642]: time="2026-04-20T16:12:20.432885553Z" level=info msg="TaskExit event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 nanos:525710017}" Apr 20 16:12:20.497601 containerd[1642]: time="2026-04-20T16:12:20.387890273Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 20 16:12:23.996930 kubelet[2995]: E0420 16:12:23.992062 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.299s" Apr 20 16:12:25.071622 kubelet[2995]: E0420 16:12:25.068637 2995 kubelet.go:3117] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:31.298663 sshd[3524]: Connection closed by 10.0.0.1 port 50700 Apr 20 16:12:31.398386 sshd-session[3517]: pam_unix(sshd:session): session closed for user core Apr 20 16:12:31.601914 systemd[1]: sshd@8-4102-10.0.0.48:22-10.0.0.1:50700.service: Deactivated successfully. Apr 20 16:12:31.637457 systemd[1]: sshd@8-4102-10.0.0.48:22-10.0.0.1:50700.service: Consumed 3.214s CPU time, 4.2M memory peak. Apr 20 16:12:31.956752 systemd[1]: session-10.scope: Deactivated successfully. Apr 20 16:12:32.067778 systemd[1]: session-10.scope: Consumed 12.849s CPU time, 17.9M memory peak. Apr 20 16:12:32.553859 systemd-logind[1611]: Session 10 logged out. Waiting for processes to exit. Apr 20 16:12:32.800532 containerd[1642]: time="2026-04-20T16:12:31.402664333Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 20 16:12:33.152435 systemd-logind[1611]: Removed session 10. 
Apr 20 16:12:33.262085 kubelet[2995]: E0420 16:12:33.021033 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:33.566824 containerd[1642]: time="2026-04-20T16:12:33.560714350Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 20 16:12:33.595365 containerd[1642]: time="2026-04-20T16:12:33.574619531Z" level=error msg="get state for 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273" error="context deadline exceeded" Apr 20 16:12:33.837924 containerd[1642]: time="2026-04-20T16:12:33.772872907Z" level=warning msg="unknown status" status=0 Apr 20 16:12:34.725603 containerd[1642]: time="2026-04-20T16:12:34.724410727Z" level=error msg="failed to delete task" error="context deadline exceeded" id=456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273 Apr 20 16:12:34.787657 containerd[1642]: time="2026-04-20T16:12:34.651763844Z" level=error msg="failed to drain init process 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 20 16:12:34.787657 containerd[1642]: time="2026-04-20T16:12:34.760262033Z" level=error msg="Failed to handle backOff event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 nanos:525710017} for 456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:12:35.185141 containerd[1642]: time="2026-04-20T16:12:35.045988728Z" level=info msg="TaskExit event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" 
id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488}" Apr 20 16:12:35.200069 kubelet[2995]: E0420 16:12:35.179209 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.112s" Apr 20 16:12:35.592778 containerd[1642]: time="2026-04-20T16:12:35.502395105Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 20 16:12:35.634597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273-rootfs.mount: Deactivated successfully. Apr 20 16:12:38.851913 systemd[1]: Started sshd@9-8193-10.0.0.48:22-10.0.0.1:58130.service - OpenSSH per-connection server daemon (10.0.0.1:58130). Apr 20 16:12:39.955049 containerd[1642]: time="2026-04-20T16:12:39.954339679Z" level=info msg="StopContainer for \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" with timeout 30 (s)" Apr 20 16:12:42.383798 containerd[1642]: time="2026-04-20T16:12:42.295837623Z" level=info msg="Stop container \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" with signal terminated" Apr 20 16:12:43.275660 kubelet[2995]: E0420 16:12:43.042831 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:44.965495 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 58130 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:12:44.967557 sshd-session[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:12:45.098210 containerd[1642]: time="2026-04-20T16:12:45.092947192Z" level=error msg="Failed to handle backOff event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" 
id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488} for 508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:12:45.098210 containerd[1642]: time="2026-04-20T16:12:45.094056558Z" level=info msg="TaskExit event container_id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" id:\"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" pid:2802 exit_status:1 exited_at:{seconds:1776701484 nanos:525710017}" Apr 20 16:12:45.156652 containerd[1642]: time="2026-04-20T16:12:45.142426799Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 20 16:12:45.557761 systemd-logind[1611]: New session '11' of user 'core' with class 'user' and type 'tty'. Apr 20 16:12:45.739210 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 20 16:12:45.927637 kubelet[2995]: E0420 16:12:45.917689 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.736s" Apr 20 16:12:49.680895 kubelet[2995]: E0420 16:12:49.676759 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:51.196929 kubelet[2995]: E0420 16:12:51.196715 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.279s" Apr 20 16:12:52.214996 containerd[1642]: time="2026-04-20T16:12:52.206386731Z" level=info msg="container event discarded" container=1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8 type=CONTAINER_CREATED_EVENT Apr 20 16:12:52.272201 kubelet[2995]: E0420 16:12:52.269546 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too 
long" expected="1s" actual="1.073s" Apr 20 16:12:52.377902 containerd[1642]: time="2026-04-20T16:12:52.341423013Z" level=info msg="container event discarded" container=1fca368652511c60d532b9c1060a34d2e04d688dca024e8b28591c326cbd3dc8 type=CONTAINER_STARTED_EVENT Apr 20 16:12:52.728324 containerd[1642]: time="2026-04-20T16:12:52.727001287Z" level=info msg="StopContainer for \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" with timeout 30 (s)" Apr 20 16:12:53.150585 containerd[1642]: time="2026-04-20T16:12:53.004968772Z" level=info msg="Stop container \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" with signal terminated" Apr 20 16:12:53.326787 kubelet[2995]: E0420 16:12:53.324540 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.052s" Apr 20 16:12:54.004664 sshd[3562]: Connection closed by 10.0.0.1 port 58130 Apr 20 16:12:54.133214 sshd-session[3551]: pam_unix(sshd:session): session closed for user core Apr 20 16:12:54.511725 systemd[1]: sshd@9-8193-10.0.0.48:22-10.0.0.1:58130.service: Deactivated successfully. Apr 20 16:12:54.578310 systemd[1]: sshd@9-8193-10.0.0.48:22-10.0.0.1:58130.service: Consumed 2.271s CPU time, 4.3M memory peak. Apr 20 16:12:54.808929 systemd[1]: session-11.scope: Deactivated successfully. Apr 20 16:12:54.812059 systemd[1]: session-11.scope: Consumed 4.274s CPU time, 15.9M memory peak. Apr 20 16:12:54.815243 kubelet[2995]: E0420 16:12:54.812808 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.446s" Apr 20 16:12:54.857344 systemd-logind[1611]: Session 11 logged out. Waiting for processes to exit. Apr 20 16:12:54.862772 systemd-logind[1611]: Removed session 11. 
Apr 20 16:12:55.213900 kubelet[2995]: E0420 16:12:55.172142 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:12:55.918556 containerd[1642]: time="2026-04-20T16:12:55.881151363Z" level=info msg="TaskExit event container_id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" id:\"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" pid:3267 exit_status:1 exited_at:{seconds:1776701478 nanos:258436488}" Apr 20 16:12:56.089029 containerd[1642]: time="2026-04-20T16:12:56.064287420Z" level=info msg="StopContainer for \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" returns successfully" Apr 20 16:12:56.111892 kubelet[2995]: E0420 16:12:56.111731 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:12:56.191715 containerd[1642]: time="2026-04-20T16:12:56.191640324Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for container name:\"kube-scheduler\" attempt:1" Apr 20 16:12:57.404437 containerd[1642]: time="2026-04-20T16:12:57.402424494Z" level=info msg="Container 2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:12:58.657838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7-rootfs.mount: Deactivated successfully. 
Apr 20 16:12:58.664786 containerd[1642]: time="2026-04-20T16:12:58.663709562Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for name:\"kube-scheduler\" attempt:1 returns container id \"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\"" Apr 20 16:12:59.591848 containerd[1642]: time="2026-04-20T16:12:59.316049813Z" level=info msg="StartContainer for \"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\"" Apr 20 16:13:01.665855 systemd[1]: Started sshd@10-4-10.0.0.48:22-10.0.0.1:44010.service - OpenSSH per-connection server daemon (10.0.0.1:44010). Apr 20 16:13:01.847882 kubelet[2995]: E0420 16:13:01.846869 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:02.176936 containerd[1642]: time="2026-04-20T16:13:02.176609516Z" level=info msg="connecting to shim 2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee" address="unix:///run/containerd/s/fea015718c3b780d3f475ca07cc94aee7b32240562ed54fe7ca53764a3c05bf2" protocol=ttrpc version=3 Apr 20 16:13:02.419710 containerd[1642]: time="2026-04-20T16:13:02.419069536Z" level=info msg="StopContainer for \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" returns successfully" Apr 20 16:13:02.447989 kubelet[2995]: E0420 16:13:02.447288 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:02.692865 kubelet[2995]: E0420 16:13:02.692635 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.149s" Apr 20 16:13:02.719777 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 44010 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 
16:13:02.769519 sshd-session[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:13:02.773249 containerd[1642]: time="2026-04-20T16:13:02.769434977Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for container name:\"kube-controller-manager\" attempt:2" Apr 20 16:13:03.271665 systemd-logind[1611]: New session '12' of user 'core' with class 'user' and type 'tty'. Apr 20 16:13:03.692984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622226656.mount: Deactivated successfully. Apr 20 16:13:03.743651 containerd[1642]: time="2026-04-20T16:13:03.742674091Z" level=info msg="Container 6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:13:03.787487 systemd[1]: Started cri-containerd-2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee.scope - libcontainer container 2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee. Apr 20 16:13:04.032768 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 20 16:13:04.565195 containerd[1642]: time="2026-04-20T16:13:04.564909077Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for name:\"kube-controller-manager\" attempt:2 returns container id \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\"" Apr 20 16:13:04.572134 containerd[1642]: time="2026-04-20T16:13:04.566774311Z" level=info msg="StartContainer for \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\"" Apr 20 16:13:04.750693 containerd[1642]: time="2026-04-20T16:13:04.746055545Z" level=info msg="connecting to shim 6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801" address="unix:///run/containerd/s/954064e80f6724cd9b3557ef5e8f14a291671be0b664e8d783df675780d63b2d" protocol=ttrpc version=3 Apr 20 16:13:05.139399 kubelet[2995]: I0420 16:13:05.121587 2995 scope.go:117] "RemoveContainer" containerID="d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1" Apr 20 16:13:06.150338 sshd[3634]: Connection closed by 10.0.0.1 port 44010 Apr 20 16:13:06.187716 sshd-session[3612]: pam_unix(sshd:session): session closed for user core Apr 20 16:13:06.350762 systemd[1]: Started cri-containerd-6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801.scope - libcontainer container 6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801. Apr 20 16:13:06.651950 systemd[1]: sshd@10-4-10.0.0.48:22-10.0.0.1:44010.service: Deactivated successfully. Apr 20 16:13:07.132340 systemd[1]: session-12.scope: Deactivated successfully. Apr 20 16:13:07.172725 containerd[1642]: time="2026-04-20T16:13:07.160244456Z" level=info msg="RemoveContainer for \"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\"" Apr 20 16:13:07.180473 systemd-logind[1611]: Session 12 logged out. Waiting for processes to exit. Apr 20 16:13:07.536051 systemd-logind[1611]: Removed session 12. 
Apr 20 16:13:08.685759 kubelet[2995]: E0420 16:13:08.685530 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:08.781719 kubelet[2995]: E0420 16:13:08.683085 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.126s" Apr 20 16:13:08.821994 kubelet[2995]: E0420 16:13:08.821071 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:10.424416 kubelet[2995]: E0420 16:13:10.422343 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:11.175452 containerd[1642]: time="2026-04-20T16:13:11.174821181Z" level=info msg="StartContainer for \"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" returns successfully" Apr 20 16:13:11.683416 systemd[1]: Started sshd@11-5-10.0.0.48:22-10.0.0.1:60600.service - OpenSSH per-connection server daemon (10.0.0.1:60600). 
Apr 20 16:13:11.864803 containerd[1642]: time="2026-04-20T16:13:11.859152989Z" level=info msg="RemoveContainer for \"d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1\" returns successfully" Apr 20 16:13:13.097875 containerd[1642]: time="2026-04-20T16:13:13.097391506Z" level=info msg="StartContainer for \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\" returns successfully" Apr 20 16:13:13.615749 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 60600 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:13:13.939851 sshd-session[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:13:15.295828 systemd-logind[1611]: New session '13' of user 'core' with class 'user' and type 'tty'. Apr 20 16:13:15.370860 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 20 16:13:16.548792 containerd[1642]: time="2026-04-20T16:13:16.386077206Z" level=info msg="container event discarded" container=cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6 type=CONTAINER_CREATED_EVENT Apr 20 16:13:18.449643 kubelet[2995]: E0420 16:13:18.442698 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:23.154910 kubelet[2995]: E0420 16:13:23.152339 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.584s" Apr 20 16:13:23.431767 sshd[3701]: Connection closed by 10.0.0.1 port 60600 Apr 20 16:13:23.459095 sshd-session[3689]: pam_unix(sshd:session): session closed for user core Apr 20 16:13:23.643111 systemd[1]: sshd@11-5-10.0.0.48:22-10.0.0.1:60600.service: Deactivated successfully. Apr 20 16:13:23.703584 systemd[1]: session-13.scope: Deactivated successfully. Apr 20 16:13:23.728008 systemd[1]: session-13.scope: Consumed 3.373s CPU time, 17.2M memory peak. 
Apr 20 16:13:23.942846 systemd-logind[1611]: Session 13 logged out. Waiting for processes to exit. Apr 20 16:13:24.001903 systemd-logind[1611]: Removed session 13. Apr 20 16:13:24.062668 kubelet[2995]: E0420 16:13:24.062089 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:24.991628 kubelet[2995]: E0420 16:13:24.975808 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:26.274336 kubelet[2995]: E0420 16:13:26.240823 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:26.905029 kubelet[2995]: E0420 16:13:26.901989 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:28.107303 kubelet[2995]: E0420 16:13:27.943995 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:28.491990 kubelet[2995]: E0420 16:13:28.488626 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.101s" Apr 20 16:13:29.281608 containerd[1642]: time="2026-04-20T16:13:29.190098942Z" level=info msg="container event discarded" container=0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613 type=CONTAINER_CREATED_EVENT Apr 20 16:13:29.742890 containerd[1642]: time="2026-04-20T16:13:29.716096031Z" level=info msg="container event discarded" container=0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613 type=CONTAINER_STARTED_EVENT 
Apr 20 16:13:30.118982 systemd[1]: Started sshd@12-4103-10.0.0.48:22-10.0.0.1:35018.service - OpenSSH per-connection server daemon (10.0.0.1:35018). Apr 20 16:13:30.634349 kubelet[2995]: E0420 16:13:30.632597 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:30.634349 kubelet[2995]: E0420 16:13:30.632903 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:32.237132 kubelet[2995]: E0420 16:13:32.236799 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.704s" Apr 20 16:13:33.683821 kubelet[2995]: E0420 16:13:33.679627 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.34s" Apr 20 16:13:34.773750 kubelet[2995]: E0420 16:13:34.648855 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:34.773750 kubelet[2995]: E0420 16:13:34.777047 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:35.323361 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 35018 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:13:35.070737 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:13:35.728424 kubelet[2995]: E0420 16:13:35.464935 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.361s" Apr 20 16:13:36.928124 containerd[1642]: 
time="2026-04-20T16:13:36.901037459Z" level=info msg="container event discarded" container=d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1 type=CONTAINER_STOPPED_EVENT Apr 20 16:13:37.052352 systemd-logind[1611]: New session '14' of user 'core' with class 'user' and type 'tty'. Apr 20 16:13:37.727112 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 20 16:13:38.770357 kubelet[2995]: E0420 16:13:38.654985 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:47.127436 sshd[3756]: Connection closed by 10.0.0.1 port 35018 Apr 20 16:13:47.127873 sshd-session[3736]: pam_unix(sshd:session): session closed for user core Apr 20 16:13:47.431867 systemd[1]: sshd@12-4103-10.0.0.48:22-10.0.0.1:35018.service: Deactivated successfully. Apr 20 16:13:47.525981 systemd[1]: sshd@12-4103-10.0.0.48:22-10.0.0.1:35018.service: Consumed 1.166s CPU time, 4.7M memory peak. Apr 20 16:13:47.834572 systemd[1]: session-14.scope: Deactivated successfully. Apr 20 16:13:47.901875 systemd[1]: session-14.scope: Consumed 3.623s CPU time, 17.6M memory peak. Apr 20 16:13:48.472107 systemd-logind[1611]: Session 14 logged out. Waiting for processes to exit. Apr 20 16:13:48.802776 systemd-logind[1611]: Removed session 14. 
Apr 20 16:13:49.083900 containerd[1642]: time="2026-04-20T16:13:47.189122014Z" level=info msg="container event discarded" container=cdffdba84ba06cb830e103ae945ccbed39828f3caa5d924e4d59c2711614d2b6 type=CONTAINER_STARTED_EVENT Apr 20 16:13:51.388956 kubelet[2995]: E0420 16:13:51.383884 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:51.738503 kubelet[2995]: E0420 16:13:51.736589 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.239s" Apr 20 16:13:52.034960 kubelet[2995]: E0420 16:13:52.003541 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:13:52.119678 containerd[1642]: time="2026-04-20T16:13:52.113836783Z" level=info msg="container event discarded" container=508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7 type=CONTAINER_CREATED_EVENT Apr 20 16:13:52.778948 systemd[1]: Started sshd@13-12289-10.0.0.48:22-10.0.0.1:51124.service - OpenSSH per-connection server daemon (10.0.0.1:51124). 
Apr 20 16:13:53.045883 containerd[1642]: time="2026-04-20T16:13:53.043089860Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:13:53.371071 containerd[1642]: time="2026-04-20T16:13:53.295131049Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29349403" Apr 20 16:13:55.402090 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 51124 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:13:55.639735 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:13:57.026068 systemd-logind[1611]: New session '15' of user 'core' with class 'user' and type 'tty'. Apr 20 16:13:57.333357 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 20 16:13:57.681696 kubelet[2995]: E0420 16:13:57.673365 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:13:57.924653 containerd[1642]: time="2026-04-20T16:13:57.890687249Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:13:58.388831 kubelet[2995]: E0420 16:13:58.379847 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.92s" Apr 20 16:13:59.190439 containerd[1642]: time="2026-04-20T16:13:59.187982547Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 16:13:59.345942 containerd[1642]: time="2026-04-20T16:13:59.341653268Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image 
id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 4m14.860566393s" Apr 20 16:13:59.345942 containerd[1642]: time="2026-04-20T16:13:59.344793810Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 20 16:14:00.617520 kubelet[2995]: E0420 16:14:00.616858 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.235s" Apr 20 16:14:01.128249 containerd[1642]: time="2026-04-20T16:14:01.127466440Z" level=info msg="CreateContainer within sandbox \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\" for container name:\"install-cni\"" Apr 20 16:14:01.144581 sshd[3779]: Connection closed by 10.0.0.1 port 51124 Apr 20 16:14:01.143463 sshd-session[3773]: pam_unix(sshd:session): session closed for user core Apr 20 16:14:01.315027 systemd[1]: sshd@13-12289-10.0.0.48:22-10.0.0.1:51124.service: Deactivated successfully. Apr 20 16:14:01.492157 systemd[1]: session-15.scope: Deactivated successfully. Apr 20 16:14:01.518339 systemd[1]: session-15.scope: Consumed 1.575s CPU time, 16.1M memory peak. Apr 20 16:14:01.563993 systemd-logind[1611]: Session 15 logged out. Waiting for processes to exit. Apr 20 16:14:01.617336 systemd-logind[1611]: Removed session 15. Apr 20 16:14:01.992982 containerd[1642]: time="2026-04-20T16:14:01.935496222Z" level=info msg="Container b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:14:02.284276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077570580.mount: Deactivated successfully. 
Apr 20 16:14:02.347650 containerd[1642]: time="2026-04-20T16:14:02.321259165Z" level=info msg="container event discarded" container=508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7 type=CONTAINER_STARTED_EVENT Apr 20 16:14:03.489757 kubelet[2995]: E0420 16:14:03.339440 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:04.051705 containerd[1642]: time="2026-04-20T16:14:04.048652058Z" level=info msg="CreateContainer within sandbox \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\" for name:\"install-cni\" returns container id \"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\"" Apr 20 16:14:04.934929 containerd[1642]: time="2026-04-20T16:14:04.925082861Z" level=info msg="StartContainer for \"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\"" Apr 20 16:14:08.093972 systemd[1]: Started sshd@14-4104-10.0.0.48:22-10.0.0.1:48332.service - OpenSSH per-connection server daemon (10.0.0.1:48332). 
Apr 20 16:14:08.516646 kubelet[2995]: E0420 16:14:08.175119 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.683s" Apr 20 16:14:09.387463 containerd[1642]: time="2026-04-20T16:14:09.383828353Z" level=info msg="connecting to shim b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" address="unix:///run/containerd/s/0ed820c0dcc55393a306a831ea5ff34ff6397fb5c5c6190f1f8692e983757cf0" protocol=ttrpc version=3 Apr 20 16:14:10.161960 kubelet[2995]: E0420 16:14:10.157831 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:14.768114 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 48332 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:14:15.313312 sshd-session[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:14:17.588557 systemd-logind[1611]: New session '16' of user 'core' with class 'user' and type 'tty'. Apr 20 16:14:17.663110 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 20 16:14:18.619922 kubelet[2995]: E0420 16:14:18.616357 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:18.628651 kubelet[2995]: E0420 16:14:18.628579 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.443s" Apr 20 16:14:18.764992 kubelet[2995]: E0420 16:14:18.764896 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:14:18.972401 systemd[1]: Started cri-containerd-b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b.scope - libcontainer container b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b. Apr 20 16:14:21.992086 containerd[1642]: time="2026-04-20T16:14:21.954675144Z" level=error msg="get state for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="context deadline exceeded" Apr 20 16:14:22.170047 containerd[1642]: time="2026-04-20T16:14:21.998149933Z" level=warning msg="unknown status" status=0 Apr 20 16:14:22.903549 sshd[3810]: Connection closed by 10.0.0.1 port 48332 Apr 20 16:14:23.007959 sshd-session[3798]: pam_unix(sshd:session): session closed for user core Apr 20 16:14:23.723051 systemd[1]: sshd@14-4104-10.0.0.48:22-10.0.0.1:48332.service: Deactivated successfully. Apr 20 16:14:23.757647 systemd[1]: sshd@14-4104-10.0.0.48:22-10.0.0.1:48332.service: Consumed 1.200s CPU time, 4.4M memory peak. Apr 20 16:14:24.592280 systemd[1]: session-16.scope: Deactivated successfully. Apr 20 16:14:24.646276 systemd[1]: session-16.scope: Consumed 2.073s CPU time, 17.6M memory peak. Apr 20 16:14:24.787472 systemd-logind[1611]: Session 16 logged out. Waiting for processes to exit. Apr 20 16:14:24.893593 systemd-logind[1611]: Removed session 16. 
Apr 20 16:14:25.440345 kubelet[2995]: E0420 16:14:25.380854 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:27.181465 containerd[1642]: time="2026-04-20T16:14:27.036099127Z" level=error msg="get state for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="context deadline exceeded" Apr 20 16:14:27.181465 containerd[1642]: time="2026-04-20T16:14:27.178484689Z" level=warning msg="unknown status" status=0 Apr 20 16:14:29.370610 systemd[1]: Started sshd@15-4105-10.0.0.48:22-10.0.0.1:54826.service - OpenSSH per-connection server daemon (10.0.0.1:54826). Apr 20 16:14:30.923681 containerd[1642]: time="2026-04-20T16:14:30.806815254Z" level=info msg="container event discarded" container=422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75 type=CONTAINER_CREATED_EVENT Apr 20 16:14:31.185109 containerd[1642]: time="2026-04-20T16:14:31.183257087Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 16:14:31.185109 containerd[1642]: time="2026-04-20T16:14:31.183896851Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 16:14:31.250014 kubelet[2995]: E0420 16:14:31.247104 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:33.455859 kubelet[2995]: E0420 16:14:33.455762 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.034s" Apr 20 16:14:33.968672 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 54826 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:14:34.401801 sshd-session[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:14:35.717912 
systemd-logind[1611]: New session '17' of user 'core' with class 'user' and type 'tty'. Apr 20 16:14:36.190828 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 20 16:14:37.624444 kubelet[2995]: E0420 16:14:37.587840 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:40.924972 containerd[1642]: time="2026-04-20T16:14:40.767990280Z" level=info msg="container event discarded" container=422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75 type=CONTAINER_STARTED_EVENT Apr 20 16:14:43.241692 containerd[1642]: time="2026-04-20T16:14:43.019236483Z" level=info msg="container event discarded" container=422d0e2bb2346de8cfd758daa0fc84958f8f887807811049b17d86cf8cd02f75 type=CONTAINER_STOPPED_EVENT Apr 20 16:14:45.711737 kubelet[2995]: E0420 16:14:45.684131 2995 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 16:14:47.339871 systemd[1]: cri-containerd-b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b.scope: Deactivated successfully. Apr 20 16:14:47.593305 systemd[1]: cri-containerd-b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b.scope: Consumed 2.313s CPU time, 4.1M memory peak, 4K read from disk. 
Apr 20 16:14:49.720958 kubelet[2995]: E0420 16:14:49.717303 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.988s" Apr 20 16:14:50.011996 containerd[1642]: time="2026-04-20T16:14:49.991024538Z" level=info msg="received container exit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:14:51.039224 systemd[1]: cri-containerd-2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee.scope: Deactivated successfully. Apr 20 16:14:51.063608 systemd[1]: cri-containerd-2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee.scope: Consumed 15.964s CPU time, 20.4M memory peak. Apr 20 16:14:51.165124 containerd[1642]: time="2026-04-20T16:14:51.162154780Z" level=info msg="received container exit event container_id:\"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" id:\"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" pid:3628 exit_status:1 exited_at:{seconds:1776701691 nanos:132767272}" Apr 20 16:14:51.406011 sshd[3849]: Connection closed by 10.0.0.1 port 54826 Apr 20 16:14:51.206078 sshd-session[3842]: pam_unix(sshd:session): session closed for user core Apr 20 16:14:51.602145 systemd[1]: sshd@15-4105-10.0.0.48:22-10.0.0.1:54826.service: Deactivated successfully. Apr 20 16:14:51.847423 systemd[1]: sshd@15-4105-10.0.0.48:22-10.0.0.1:54826.service: Consumed 1.031s CPU time, 4.5M memory peak. Apr 20 16:14:52.500513 systemd[1]: session-17.scope: Deactivated successfully. Apr 20 16:14:52.502615 systemd[1]: session-17.scope: Consumed 5.766s CPU time, 17.4M memory peak. Apr 20 16:14:52.649412 systemd-logind[1611]: Session 17 logged out. Waiting for processes to exit. 
Apr 20 16:14:52.725723 containerd[1642]: time="2026-04-20T16:14:52.647992076Z" level=info msg="StartContainer for \"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" returns successfully" Apr 20 16:14:52.804123 systemd-logind[1611]: Removed session 17. Apr 20 16:14:53.700081 kubelet[2995]: E0420 16:14:53.698939 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.968s" Apr 20 16:14:54.471012 kubelet[2995]: E0420 16:14:54.436750 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:14:54.826505 kubelet[2995]: E0420 16:14:54.822763 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:14:54.830562 kubelet[2995]: E0420 16:14:54.830436 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s" Apr 20 16:14:58.193582 systemd[1]: Started sshd@16-6-10.0.0.48:22-10.0.0.1:48538.service - OpenSSH per-connection server daemon (10.0.0.1:48538). Apr 20 16:15:00.548114 containerd[1642]: time="2026-04-20T16:15:00.434993211Z" level=error msg="failed to delete task" error="context deadline exceeded" id=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b Apr 20 16:15:02.311900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee-rootfs.mount: Deactivated successfully. 
Apr 20 16:15:02.773043 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 48538 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:15:02.969228 containerd[1642]: time="2026-04-20T16:15:02.267978760Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 20 16:15:03.151372 sshd-session[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:15:03.288619 kubelet[2995]: E0420 16:15:03.227110 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:04.674080 containerd[1642]: time="2026-04-20T16:15:03.756877624Z" level=error msg="failed to handle container TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:15:04.946007 containerd[1642]: time="2026-04-20T16:15:02.311885663Z" level=error msg="failed to delete task" error="context deadline exceeded" id=2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee Apr 20 16:15:05.050812 systemd-logind[1611]: New session '18' of user 'core' with class 'user' and type 'tty'. Apr 20 16:15:05.293139 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 20 16:15:05.994011 containerd[1642]: time="2026-04-20T16:15:05.804945149Z" level=error msg="failed to handle container TaskExit event container_id:\"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" id:\"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" pid:3628 exit_status:1 exited_at:{seconds:1776701691 nanos:132767272}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:15:06.528102 containerd[1642]: time="2026-04-20T16:15:06.047146964Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:15:06.985340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b-rootfs.mount: Deactivated successfully. Apr 20 16:15:07.887754 containerd[1642]: time="2026-04-20T16:15:07.445835509Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 20 16:15:09.098579 containerd[1642]: time="2026-04-20T16:15:09.080372055Z" level=error msg="get state for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="context deadline exceeded" Apr 20 16:15:09.452404 containerd[1642]: time="2026-04-20T16:15:09.094134958Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 20 16:15:09.452404 containerd[1642]: time="2026-04-20T16:15:09.117115070Z" level=warning msg="unknown status" status=0 Apr 20 16:15:11.728858 systemd[1]: cri-containerd-6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801.scope: Deactivated successfully. Apr 20 16:15:11.807906 systemd[1]: cri-containerd-6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801.scope: Consumed 36.558s CPU time, 43M memory peak, 8K read from disk. 
Apr 20 16:15:12.917910 containerd[1642]: time="2026-04-20T16:15:12.915853843Z" level=info msg="received container exit event container_id:\"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\" id:\"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\" pid:3665 exit_status:1 exited_at:{seconds:1776701711 nanos:941126450}" Apr 20 16:15:15.079553 kubelet[2995]: E0420 16:15:15.064018 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.675s" Apr 20 16:15:15.295948 kubelet[2995]: E0420 16:15:15.201946 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:16.075531 containerd[1642]: time="2026-04-20T16:15:16.007481343Z" level=error msg="failed to delete task" error="context deadline exceeded" id=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b Apr 20 16:15:16.143449 containerd[1642]: time="2026-04-20T16:15:16.109986961Z" level=error msg="Failed to handle backOff event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982} for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:15:16.497810 containerd[1642]: time="2026-04-20T16:15:16.142602715Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 20 16:15:16.551906 containerd[1642]: time="2026-04-20T16:15:16.394932353Z" level=info msg="TaskExit event container_id:\"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" id:\"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" pid:3628 exit_status:1 exited_at:{seconds:1776701691 
nanos:132767272}" Apr 20 16:15:16.579348 kubelet[2995]: E0420 16:15:16.578785 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.514s" Apr 20 16:15:16.806482 sshd[3889]: Connection closed by 10.0.0.1 port 48538 Apr 20 16:15:16.788079 sshd-session[3881]: pam_unix(sshd:session): session closed for user core Apr 20 16:15:17.764419 systemd[1]: sshd@16-6-10.0.0.48:22-10.0.0.1:48538.service: Deactivated successfully. Apr 20 16:15:17.832403 systemd[1]: sshd@16-6-10.0.0.48:22-10.0.0.1:48538.service: Consumed 1.250s CPU time, 4.1M memory peak. Apr 20 16:15:17.888924 systemd[1]: session-18.scope: Deactivated successfully. Apr 20 16:15:17.913343 kubelet[2995]: E0420 16:15:17.888238 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:17.901139 systemd[1]: session-18.scope: Consumed 5.626s CPU time, 17.7M memory peak. Apr 20 16:15:17.991307 systemd-logind[1611]: Session 18 logged out. Waiting for processes to exit. Apr 20 16:15:18.595891 systemd[1]: Started sshd@17-4106-10.0.0.48:22-10.0.0.1:60700.service - OpenSSH per-connection server daemon (10.0.0.1:60700). Apr 20 16:15:18.769844 systemd-logind[1611]: Removed session 18. Apr 20 16:15:19.041773 kubelet[2995]: E0420 16:15:19.040456 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.422s" Apr 20 16:15:20.625987 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 60700 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:15:20.731571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801-rootfs.mount: Deactivated successfully. 
Apr 20 16:15:20.874200 sshd-session[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:15:22.177835 systemd-logind[1611]: New session '19' of user 'core' with class 'user' and type 'tty'. Apr 20 16:15:22.207060 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 20 16:15:22.249999 kubelet[2995]: E0420 16:15:22.248606 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.304s" Apr 20 16:15:22.822066 kubelet[2995]: E0420 16:15:22.821894 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:24.958308 containerd[1642]: time="2026-04-20T16:15:24.956420585Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:15:26.023978 kubelet[2995]: E0420 16:15:26.023507 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.583s" Apr 20 16:15:28.284801 kubelet[2995]: E0420 16:15:28.281857 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.124s" Apr 20 16:15:28.688875 kubelet[2995]: I0420 16:15:28.687077 2995 scope.go:117] "RemoveContainer" containerID="508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7" Apr 20 16:15:28.933244 kubelet[2995]: I0420 16:15:28.932353 2995 scope.go:117] "RemoveContainer" containerID="6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801" Apr 20 16:15:29.838325 kubelet[2995]: E0420 16:15:29.837640 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:30.699496 kubelet[2995]: E0420 16:15:30.696298 2995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 20 16:15:35.568507 containerd[1642]: time="2026-04-20T16:15:35.538838864Z" level=error msg="get state for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="context deadline exceeded" Apr 20 16:15:35.703901 containerd[1642]: time="2026-04-20T16:15:35.580113879Z" level=error msg="ttrpc: received message on inactive stream" stream=59 Apr 20 16:15:35.703901 containerd[1642]: time="2026-04-20T16:15:35.672701351Z" level=warning msg="unknown status" status=0 Apr 20 16:15:36.456471 containerd[1642]: time="2026-04-20T16:15:36.438864096Z" level=error msg="failed to drain init process b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 20 16:15:36.592350 containerd[1642]: time="2026-04-20T16:15:36.445802332Z" level=error msg="failed to delete task" error="context deadline exceeded" id=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b Apr 20 16:15:36.820890 containerd[1642]: time="2026-04-20T16:15:36.581924904Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 20 16:15:36.820890 containerd[1642]: time="2026-04-20T16:15:36.607706681Z" level=error msg="Failed to handle backOff event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982} for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" 
error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:15:38.024803 containerd[1642]: time="2026-04-20T16:15:38.024103380Z" level=info msg="RemoveContainer for \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\"" Apr 20 16:15:40.209793 containerd[1642]: time="2026-04-20T16:15:40.191026373Z" level=info msg="RemoveContainer for \"508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7\" returns successfully" Apr 20 16:15:40.960553 sshd[3939]: Connection closed by 10.0.0.1 port 60700 Apr 20 16:15:41.116118 sshd-session[3929]: pam_unix(sshd:session): session closed for user core Apr 20 16:15:41.557130 containerd[1642]: time="2026-04-20T16:15:41.549526759Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:15:42.758004 kubelet[2995]: E0420 16:15:42.702694 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.151s" Apr 20 16:15:44.400712 systemd[1]: sshd@17-4106-10.0.0.48:22-10.0.0.1:60700.service: Deactivated successfully. Apr 20 16:15:45.186290 systemd[1]: session-19.scope: Deactivated successfully. Apr 20 16:15:45.292263 systemd[1]: session-19.scope: Consumed 11.367s CPU time, 24.4M memory peak. Apr 20 16:15:45.889831 systemd-logind[1611]: Session 19 logged out. Waiting for processes to exit. Apr 20 16:15:46.635444 systemd[1]: Started sshd@18-7-10.0.0.48:22-10.0.0.1:42920.service - OpenSSH per-connection server daemon (10.0.0.1:42920). 
Apr 20 16:15:46.647570 containerd[1642]: time="2026-04-20T16:15:46.636550404Z" level=error msg="ttrpc: received message on inactive stream" stream=65 Apr 20 16:15:46.694863 containerd[1642]: time="2026-04-20T16:15:46.679896327Z" level=error msg="get state for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="context deadline exceeded" Apr 20 16:15:46.828899 containerd[1642]: time="2026-04-20T16:15:46.799148123Z" level=warning msg="unknown status" status=0 Apr 20 16:15:46.981325 systemd-logind[1611]: Removed session 19. Apr 20 16:15:51.451101 containerd[1642]: time="2026-04-20T16:15:51.429931595Z" level=error msg="failed to delete task" error="context deadline exceeded" id=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b Apr 20 16:15:51.594555 containerd[1642]: time="2026-04-20T16:15:51.568954364Z" level=error msg="Failed to handle backOff event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982} for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:15:51.767024 containerd[1642]: time="2026-04-20T16:15:51.746812440Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 20 16:15:51.825906 kubelet[2995]: E0420 16:15:51.770074 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.299s" Apr 20 16:15:51.944590 kubelet[2995]: I0420 16:15:51.833666 2995 scope.go:117] "RemoveContainer" containerID="456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273" Apr 20 16:15:52.986943 kubelet[2995]: I0420 16:15:52.939597 2995 scope.go:117] "RemoveContainer" containerID="2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee" Apr 20 
16:15:53.984104 kubelet[2995]: E0420 16:15:53.882014 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:54.261792 kubelet[2995]: E0420 16:15:54.241948 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.337s" Apr 20 16:15:55.288133 kubelet[2995]: I0420 16:15:55.282088 2995 scope.go:117] "RemoveContainer" containerID="6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801" Apr 20 16:15:55.720798 kubelet[2995]: E0420 16:15:55.689833 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:15:58.085021 containerd[1642]: time="2026-04-20T16:15:58.049130045Z" level=info msg="RemoveContainer for \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\"" Apr 20 16:16:00.003815 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 42920 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:16:00.360740 sshd-session[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:16:02.391110 containerd[1642]: time="2026-04-20T16:16:02.297850772Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for container name:\"kube-controller-manager\" attempt:3" Apr 20 16:16:02.757014 containerd[1642]: time="2026-04-20T16:16:02.596060861Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for container name:\"kube-scheduler\" attempt:2" Apr 20 16:16:03.345484 systemd-logind[1611]: New session '20' of user 'core' with class 'user' and type 'tty'. Apr 20 16:16:03.904854 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 20 16:16:04.624666 containerd[1642]: time="2026-04-20T16:16:04.080095157Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:16:07.941244 kubelet[2995]: E0420 16:16:07.940529 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.481s" Apr 20 16:16:14.100340 containerd[1642]: time="2026-04-20T16:16:14.098818718Z" level=error msg="Failed to handle backOff event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982} for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:16:14.100340 containerd[1642]: time="2026-04-20T16:16:14.098927303Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 20 16:16:14.780058 containerd[1642]: time="2026-04-20T16:16:14.540976172Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 20 16:16:16.910913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149216095.mount: Deactivated successfully. Apr 20 16:16:19.782227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384925124.mount: Deactivated successfully. 
Apr 20 16:16:21.751735 containerd[1642]: time="2026-04-20T16:16:21.593996340Z" level=info msg="RemoveContainer for \"456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273\" returns successfully" Apr 20 16:16:22.880069 containerd[1642]: time="2026-04-20T16:16:22.876384875Z" level=info msg="Container d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:16:22.878413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762052649.mount: Deactivated successfully. Apr 20 16:16:25.184200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59628361.mount: Deactivated successfully. Apr 20 16:16:26.983058 containerd[1642]: time="2026-04-20T16:16:25.636833390Z" level=info msg="Container f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:16:31.755350 containerd[1642]: time="2026-04-20T16:16:31.753922809Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:16:36.995439 containerd[1642]: time="2026-04-20T16:16:36.994543864Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for name:\"kube-controller-manager\" attempt:3 returns container id \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\"" Apr 20 16:16:37.769494 sshd[3969]: Connection closed by 10.0.0.1 port 42920 Apr 20 16:16:37.838608 sshd-session[3961]: pam_unix(sshd:session): session closed for user core Apr 20 16:16:38.052433 kubelet[2995]: E0420 16:16:38.047931 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.107s" Apr 20 16:16:38.298710 systemd[1]: sshd@18-7-10.0.0.48:22-10.0.0.1:42920.service: Deactivated 
successfully. Apr 20 16:16:38.346691 systemd[1]: sshd@18-7-10.0.0.48:22-10.0.0.1:42920.service: Consumed 4.866s CPU time, 4.4M memory peak. Apr 20 16:16:38.634310 systemd[1]: session-20.scope: Deactivated successfully. Apr 20 16:16:38.661321 systemd[1]: session-20.scope: Consumed 22.363s CPU time, 18.1M memory peak. Apr 20 16:16:38.705582 containerd[1642]: time="2026-04-20T16:16:38.662969200Z" level=info msg="StartContainer for \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\"" Apr 20 16:16:39.127070 systemd-logind[1611]: Session 20 logged out. Waiting for processes to exit. Apr 20 16:16:39.176968 containerd[1642]: time="2026-04-20T16:16:39.135716178Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for name:\"kube-scheduler\" attempt:2 returns container id \"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\"" Apr 20 16:16:39.237930 containerd[1642]: time="2026-04-20T16:16:39.194825749Z" level=info msg="connecting to shim d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203" address="unix:///run/containerd/s/954064e80f6724cd9b3557ef5e8f14a291671be0b664e8d783df675780d63b2d" protocol=ttrpc version=3 Apr 20 16:16:39.237930 containerd[1642]: time="2026-04-20T16:16:39.197389478Z" level=info msg="StartContainer for \"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\"" Apr 20 16:16:39.180665 systemd-logind[1611]: Removed session 20. 
Apr 20 16:16:39.476818 kubelet[2995]: E0420 16:16:39.387589 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.258s" Apr 20 16:16:40.356963 containerd[1642]: time="2026-04-20T16:16:40.268750046Z" level=info msg="connecting to shim f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e" address="unix:///run/containerd/s/fea015718c3b780d3f475ca07cc94aee7b32240562ed54fe7ca53764a3c05bf2" protocol=ttrpc version=3 Apr 20 16:16:40.404621 kubelet[2995]: E0420 16:16:40.358722 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:16:40.404621 kubelet[2995]: E0420 16:16:40.358734 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:16:40.796123 kubelet[2995]: E0420 16:16:40.785706 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:16:41.924352 containerd[1642]: time="2026-04-20T16:16:41.923241473Z" level=error msg="Failed to handle backOff event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982} for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:16:42.336636 containerd[1642]: time="2026-04-20T16:16:42.293927785Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 20 16:16:42.505270 containerd[1642]: time="2026-04-20T16:16:42.488438229Z" level=error msg="ttrpc: received message on inactive 
stream" stream=91 Apr 20 16:16:45.628053 systemd[1]: Started sshd@19-4107-10.0.0.48:22-10.0.0.1:37708.service - OpenSSH per-connection server daemon (10.0.0.1:37708). Apr 20 16:16:49.514984 kubelet[2995]: E0420 16:16:49.465489 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.626s" Apr 20 16:16:52.896397 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 37708 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:16:53.449006 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:16:54.761492 systemd-logind[1611]: New session '21' of user 'core' with class 'user' and type 'tty'. Apr 20 16:16:54.983720 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 20 16:16:57.732525 systemd[1]: Started cri-containerd-d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203.scope - libcontainer container d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203. Apr 20 16:17:00.199757 systemd[1]: Started cri-containerd-f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e.scope - libcontainer container f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e. Apr 20 16:17:01.750148 kubelet[2995]: E0420 16:17:01.747102 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.234s" Apr 20 16:17:14.071765 sshd[4030]: Connection closed by 10.0.0.1 port 37708 Apr 20 16:17:14.582906 containerd[1642]: time="2026-04-20T16:17:13.733659340Z" level=error msg="get state for d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203" error="context deadline exceeded" Apr 20 16:17:14.130722 sshd-session[4014]: pam_unix(sshd:session): session closed for user core Apr 20 16:17:14.963275 systemd[1]: sshd@19-4107-10.0.0.48:22-10.0.0.1:37708.service: Deactivated successfully. 
Apr 20 16:17:15.094984 systemd[1]: sshd@19-4107-10.0.0.48:22-10.0.0.1:37708.service: Consumed 2.944s CPU time, 4.1M memory peak. Apr 20 16:17:15.573741 systemd[1]: session-21.scope: Deactivated successfully. Apr 20 16:17:15.679838 systemd[1]: session-21.scope: Consumed 11.651s CPU time, 18.3M memory peak. Apr 20 16:17:15.935002 containerd[1642]: time="2026-04-20T16:17:14.095055181Z" level=warning msg="unknown status" status=0 Apr 20 16:17:16.298746 containerd[1642]: time="2026-04-20T16:17:16.281863351Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}" Apr 20 16:17:16.361714 systemd-logind[1611]: Session 21 logged out. Waiting for processes to exit. Apr 20 16:17:17.157318 systemd-logind[1611]: Removed session 21. Apr 20 16:17:22.293809 systemd[1]: Started sshd@20-4108-10.0.0.48:22-10.0.0.1:34810.service - OpenSSH per-connection server daemon (10.0.0.1:34810). 
Apr 20 16:17:25.651528 containerd[1642]: time="2026-04-20T16:17:25.642032542Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 16:17:26.339613 kubelet[2995]: E0420 16:17:26.339441 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="24.565s" Apr 20 16:17:26.496040 containerd[1642]: time="2026-04-20T16:17:26.495819602Z" level=error msg="Failed to handle backOff event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982} for b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:17:26.589400 containerd[1642]: time="2026-04-20T16:17:26.576674716Z" level=error msg="ttrpc: received message on inactive stream" stream=97 Apr 20 16:17:26.685253 containerd[1642]: time="2026-04-20T16:17:26.589657854Z" level=error msg="ttrpc: received message on inactive stream" stream=99 Apr 20 16:17:27.942667 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 34810 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:17:28.043539 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:17:28.367788 kubelet[2995]: E0420 16:17:28.300855 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.952s" Apr 20 16:17:28.873547 systemd-logind[1611]: New session '22' of user 'core' with class 'user' and type 'tty'. Apr 20 16:17:29.078624 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 20 16:17:31.301810 containerd[1642]: time="2026-04-20T16:17:31.294535041Z" level=error msg="get state for f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e" error="context deadline exceeded" Apr 20 16:17:31.301810 containerd[1642]: time="2026-04-20T16:17:31.295002279Z" level=warning msg="unknown status" status=0 Apr 20 16:17:31.412987 kubelet[2995]: E0420 16:17:31.370652 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.008s" Apr 20 16:17:31.444038 containerd[1642]: time="2026-04-20T16:17:31.440407389Z" level=info msg="StartContainer for \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" returns successfully" Apr 20 16:17:32.373199 kubelet[2995]: E0420 16:17:32.373040 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.002s" Apr 20 16:17:32.744448 containerd[1642]: time="2026-04-20T16:17:32.709957404Z" level=error msg="ttrpc: received message on inactive stream" stream=15 Apr 20 16:17:32.822221 kubelet[2995]: E0420 16:17:32.821296 2995 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice/cri-containerd-f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e.scope\": RecentStats: unable to find data in memory cache]" Apr 20 16:17:32.874522 containerd[1642]: time="2026-04-20T16:17:32.873468727Z" level=info msg="StartContainer for \"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" returns successfully" Apr 20 16:17:33.562272 sshd[4082]: Connection closed by 10.0.0.1 port 34810 Apr 20 16:17:33.573248 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Apr 20 16:17:33.680735 systemd[1]: sshd@20-4108-10.0.0.48:22-10.0.0.1:34810.service: Deactivated successfully. 
Apr 20 16:17:33.718990 systemd[1]: sshd@20-4108-10.0.0.48:22-10.0.0.1:34810.service: Consumed 1.960s CPU time, 4.3M memory peak. Apr 20 16:17:33.773842 systemd[1]: session-22.scope: Deactivated successfully. Apr 20 16:17:33.776958 systemd[1]: session-22.scope: Consumed 2.971s CPU time, 17.8M memory peak. Apr 20 16:17:33.801046 systemd-logind[1611]: Session 22 logged out. Waiting for processes to exit. Apr 20 16:17:33.878007 systemd-logind[1611]: Removed session 22. Apr 20 16:17:34.297005 kubelet[2995]: E0420 16:17:34.286335 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:34.437544 kubelet[2995]: E0420 16:17:34.400199 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:36.903018 kubelet[2995]: E0420 16:17:36.781977 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:38.193898 kubelet[2995]: E0420 16:17:37.882998 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:39.668049 systemd[1]: Started sshd@21-4109-10.0.0.48:22-10.0.0.1:34290.service - OpenSSH per-connection server daemon (10.0.0.1:34290). Apr 20 16:17:42.685020 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 34290 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:17:43.575921 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:17:44.481580 systemd-logind[1611]: New session '23' of user 'core' with class 'user' and type 'tty'. 
Apr 20 16:17:44.549434 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 20 16:17:45.657701 kubelet[2995]: E0420 16:17:45.657524 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.273s" Apr 20 16:17:46.417469 kubelet[2995]: E0420 16:17:46.416027 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:48.023863 kubelet[2995]: E0420 16:17:48.021280 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.346s" Apr 20 16:17:50.935029 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 20 16:17:51.383007 kubelet[2995]: E0420 16:17:51.369078 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.345s" Apr 20 16:17:51.583861 sshd[4121]: Connection closed by 10.0.0.1 port 34290 Apr 20 16:17:51.583431 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Apr 20 16:17:51.840876 kubelet[2995]: E0420 16:17:51.634920 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:51.778293 systemd[1]: sshd@21-4109-10.0.0.48:22-10.0.0.1:34290.service: Deactivated successfully. Apr 20 16:17:51.780250 systemd[1]: sshd@21-4109-10.0.0.48:22-10.0.0.1:34290.service: Consumed 1.655s CPU time, 4.4M memory peak. Apr 20 16:17:51.898566 systemd[1]: session-23.scope: Deactivated successfully. Apr 20 16:17:51.915133 systemd[1]: session-23.scope: Consumed 4.402s CPU time, 15.8M memory peak. 
Apr 20 16:17:51.961462 kubelet[2995]: E0420 16:17:51.918908 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:51.961462 kubelet[2995]: E0420 16:17:51.926430 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:52.099890 systemd-logind[1611]: Session 23 logged out. Waiting for processes to exit. Apr 20 16:17:52.283896 systemd-logind[1611]: Removed session 23. Apr 20 16:17:52.685731 systemd-tmpfiles[4132]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 16:17:52.685777 systemd-tmpfiles[4132]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 16:17:52.692551 systemd-tmpfiles[4132]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 16:17:52.716753 systemd-tmpfiles[4132]: ACLs are not supported, ignoring. Apr 20 16:17:52.716830 systemd-tmpfiles[4132]: ACLs are not supported, ignoring. Apr 20 16:17:52.742402 kubelet[2995]: E0420 16:17:52.724717 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.211s" Apr 20 16:17:52.763679 systemd-tmpfiles[4132]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 16:17:52.763696 systemd-tmpfiles[4132]: Skipping /boot Apr 20 16:17:52.935875 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
Apr 20 16:17:52.963924 kubelet[2995]: E0420 16:17:52.930658 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:52.979102 kubelet[2995]: E0420 16:17:52.975326 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:52.977585 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 20 16:17:53.994274 kubelet[2995]: E0420 16:17:53.993253 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:17:55.872747 containerd[1642]: time="2026-04-20T16:17:55.871468171Z" level=info msg="container event discarded" container=456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273 type=CONTAINER_STOPPED_EVENT Apr 20 16:17:56.694080 systemd[1]: Started sshd@22-8194-10.0.0.48:22-10.0.0.1:58290.service - OpenSSH per-connection server daemon (10.0.0.1:58290). Apr 20 16:17:57.189776 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 58290 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:17:57.196770 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:17:57.327114 systemd-logind[1611]: New session '24' of user 'core' with class 'user' and type 'tty'. Apr 20 16:17:57.487280 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 20 16:17:58.336849 containerd[1642]: time="2026-04-20T16:17:58.331273272Z" level=info msg="container event discarded" container=2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee type=CONTAINER_CREATED_EVENT
Apr 20 16:17:58.869209 sshd[4144]: Connection closed by 10.0.0.1 port 58290
Apr 20 16:17:58.877767 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Apr 20 16:17:58.924943 systemd[1]: sshd@22-8194-10.0.0.48:22-10.0.0.1:58290.service: Deactivated successfully.
Apr 20 16:17:59.113716 systemd[1]: session-24.scope: Deactivated successfully.
Apr 20 16:17:59.121346 systemd[1]: session-24.scope: Consumed 1.178s CPU time, 16M memory peak.
Apr 20 16:17:59.243116 systemd-logind[1611]: Session 24 logged out. Waiting for processes to exit.
Apr 20 16:17:59.276034 systemd-logind[1611]: Removed session 24.
Apr 20 16:17:59.425619 kubelet[2995]: E0420 16:17:59.424143 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:18:02.487798 containerd[1642]: time="2026-04-20T16:18:02.477028022Z" level=info msg="container event discarded" container=508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7 type=CONTAINER_STOPPED_EVENT
Apr 20 16:18:04.265239 systemd[1]: Started sshd@23-4110-10.0.0.48:22-10.0.0.1:58296.service - OpenSSH per-connection server daemon (10.0.0.1:58296).
Apr 20 16:18:04.462069 kubelet[2995]: E0420 16:18:04.460539 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s"
Apr 20 16:18:04.684927 containerd[1642]: time="2026-04-20T16:18:04.575114883Z" level=info msg="container event discarded" container=6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801 type=CONTAINER_CREATED_EVENT
Apr 20 16:18:05.990724 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 58296 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:18:06.190918 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:18:07.156425 systemd-logind[1611]: New session '25' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:18:07.311718 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 20 16:18:08.077008 kubelet[2995]: E0420 16:18:08.074673 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.687s"
Apr 20 16:18:08.247521 kubelet[2995]: E0420 16:18:08.247123 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:18:10.436595 containerd[1642]: time="2026-04-20T16:18:10.417450247Z" level=info msg="container event discarded" container=2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee type=CONTAINER_STARTED_EVENT
Apr 20 16:18:11.144695 kubelet[2995]: E0420 16:18:11.136874 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.746s"
Apr 20 16:18:11.886102 containerd[1642]: time="2026-04-20T16:18:11.877203473Z" level=info msg="container event discarded" container=d17bb97a3f3282b85a6720c6e98b5c169be8dca35dce64f8b767b8712f2815f1 type=CONTAINER_DELETED_EVENT
Apr 20 16:18:12.818262 kubelet[2995]: E0420 16:18:12.817287 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.403s"
Apr 20 16:18:13.311270 containerd[1642]: time="2026-04-20T16:18:13.073789920Z" level=info msg="container event discarded" container=6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801 type=CONTAINER_STARTED_EVENT
Apr 20 16:18:13.517759 sshd[4166]: Connection closed by 10.0.0.1 port 58296
Apr 20 16:18:13.512797 sshd-session[4161]: pam_unix(sshd:session): session closed for user core
Apr 20 16:18:13.689419 systemd[1]: sshd@23-4110-10.0.0.48:22-10.0.0.1:58296.service: Deactivated successfully.
Apr 20 16:18:13.998566 systemd[1]: session-25.scope: Deactivated successfully.
Apr 20 16:18:14.045973 systemd[1]: session-25.scope: Consumed 3.005s CPU time, 19M memory peak.
Apr 20 16:18:14.129538 systemd-logind[1611]: Session 25 logged out. Waiting for processes to exit.
Apr 20 16:18:14.204509 systemd-logind[1611]: Removed session 25.
Apr 20 16:18:14.599836 kubelet[2995]: E0420 16:18:14.596816 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.225s"
Apr 20 16:18:16.724133 kubelet[2995]: E0420 16:18:16.723381 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.331s"
Apr 20 16:18:19.263356 systemd[1]: Started sshd@24-4111-10.0.0.48:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934).
Apr 20 16:18:21.205271 kubelet[2995]: E0420 16:18:21.201384 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.8s"
Apr 20 16:18:21.914882 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:18:21.983450 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:18:25.036363 systemd-logind[1611]: New session '26' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:18:25.790966 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 20 16:18:26.103859 kubelet[2995]: E0420 16:18:26.100040 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.648s"
Apr 20 16:18:29.323993 sshd[4195]: Connection closed by 10.0.0.1 port 44934
Apr 20 16:18:29.361931 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Apr 20 16:18:29.497780 systemd[1]: sshd@24-4111-10.0.0.48:22-10.0.0.1:44934.service: Deactivated successfully.
Apr 20 16:18:29.500127 systemd[1]: sshd@24-4111-10.0.0.48:22-10.0.0.1:44934.service: Consumed 1.001s CPU time, 4.2M memory peak.
Apr 20 16:18:29.582952 systemd[1]: session-26.scope: Deactivated successfully.
Apr 20 16:18:29.588908 systemd[1]: session-26.scope: Consumed 1.872s CPU time, 17.5M memory peak.
Apr 20 16:18:29.664531 systemd-logind[1611]: Session 26 logged out. Waiting for processes to exit.
Apr 20 16:18:29.671957 systemd-logind[1611]: Removed session 26.
Apr 20 16:18:30.984201 containerd[1642]: time="2026-04-20T16:18:30.981807049Z" level=info msg="TaskExit event container_id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" id:\"b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b\" pid:3826 exited_at:{seconds:1776701687 nanos:950907982}"
Apr 20 16:18:32.455052 kubelet[2995]: E0420 16:18:32.454144 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:18:33.160545 containerd[1642]: time="2026-04-20T16:18:33.158908659Z" level=info msg="CreateContainer within sandbox \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\" for container name:\"kube-flannel\""
Apr 20 16:18:34.114795 containerd[1642]: time="2026-04-20T16:18:34.114449901Z" level=info msg="Container 4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692: CDI devices from CRI Config.CDIDevices: []"
Apr 20 16:18:34.117387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828238759.mount: Deactivated successfully.
Apr 20 16:18:34.241698 containerd[1642]: time="2026-04-20T16:18:34.238897711Z" level=info msg="CreateContainer within sandbox \"0841463db47c0e7c9c6d85cfb6642e85afb7fb990d35c75714898c948ba39613\" for name:\"kube-flannel\" returns container id \"4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692\""
Apr 20 16:18:34.262897 containerd[1642]: time="2026-04-20T16:18:34.260522979Z" level=info msg="StartContainer for \"4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692\""
Apr 20 16:18:34.427691 containerd[1642]: time="2026-04-20T16:18:34.418122829Z" level=info msg="connecting to shim 4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692" address="unix:///run/containerd/s/0ed820c0dcc55393a306a831ea5ff34ff6397fb5c5c6190f1f8692e983757cf0" protocol=ttrpc version=3
Apr 20 16:18:34.879706 systemd[1]: Started cri-containerd-4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692.scope - libcontainer container 4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692.
Apr 20 16:18:35.073849 systemd[1]: Started sshd@25-4112-10.0.0.48:22-10.0.0.1:57022.service - OpenSSH per-connection server daemon (10.0.0.1:57022).
Apr 20 16:18:36.431969 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 57022 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:18:36.521974 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:18:36.967601 systemd-logind[1611]: New session '27' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:18:37.138278 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 20 16:18:38.333992 containerd[1642]: time="2026-04-20T16:18:38.333616646Z" level=info msg="StartContainer for \"4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692\" returns successfully"
Apr 20 16:18:40.618001 kubelet[2995]: E0420 16:18:40.617597 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.102s"
Apr 20 16:18:40.920538 sshd[4261]: Connection closed by 10.0.0.1 port 57022
Apr 20 16:18:40.939756 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Apr 20 16:18:41.026911 systemd[1]: sshd@25-4112-10.0.0.48:22-10.0.0.1:57022.service: Deactivated successfully.
Apr 20 16:18:41.099510 systemd[1]: session-27.scope: Deactivated successfully.
Apr 20 16:18:41.101221 systemd[1]: session-27.scope: Consumed 1.623s CPU time, 17.7M memory peak.
Apr 20 16:18:41.105048 systemd-logind[1611]: Session 27 logged out. Waiting for processes to exit.
Apr 20 16:18:41.125149 systemd-logind[1611]: Removed session 27.
Apr 20 16:18:42.039801 kubelet[2995]: E0420 16:18:42.035413 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:18:43.892670 systemd-networkd[1429]: flannel.1: Link UP
Apr 20 16:18:43.918528 systemd-networkd[1429]: flannel.1: Gained carrier
Apr 20 16:18:45.727308 systemd-networkd[1429]: flannel.1: Gained IPv6LL
Apr 20 16:18:46.438623 systemd[1]: Started sshd@26-12290-10.0.0.48:22-10.0.0.1:59998.service - OpenSSH per-connection server daemon (10.0.0.1:59998).
Apr 20 16:18:47.578848 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 59998 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:18:47.816147 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:18:48.528026 systemd-logind[1611]: New session '28' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:18:48.797818 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 20 16:18:48.924268 kubelet[2995]: E0420 16:18:48.924089 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.443s"
Apr 20 16:18:52.154321 sshd[4343]: Connection closed by 10.0.0.1 port 59998
Apr 20 16:18:52.159485 sshd-session[4337]: pam_unix(sshd:session): session closed for user core
Apr 20 16:18:52.408274 systemd[1]: sshd@26-12290-10.0.0.48:22-10.0.0.1:59998.service: Deactivated successfully.
Apr 20 16:18:52.837644 systemd[1]: session-28.scope: Deactivated successfully.
Apr 20 16:18:52.878489 systemd[1]: session-28.scope: Consumed 1.174s CPU time, 15.8M memory peak.
Apr 20 16:18:53.094590 systemd-logind[1611]: Session 28 logged out. Waiting for processes to exit.
Apr 20 16:18:53.367776 systemd-logind[1611]: Removed session 28.
Apr 20 16:18:57.719402 systemd[1]: Started sshd@27-8-10.0.0.48:22-10.0.0.1:51648.service - OpenSSH per-connection server daemon (10.0.0.1:51648).
Apr 20 16:18:58.912884 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 51648 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:18:58.928524 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:18:58.969141 systemd-logind[1611]: New session '29' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:18:58.996780 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 20 16:18:59.370420 kubelet[2995]: E0420 16:18:59.370040 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:19:00.236786 sshd[4389]: Connection closed by 10.0.0.1 port 51648
Apr 20 16:19:00.238007 sshd-session[4383]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:00.419049 systemd[1]: sshd@27-8-10.0.0.48:22-10.0.0.1:51648.service: Deactivated successfully.
Apr 20 16:19:00.640838 systemd[1]: session-29.scope: Deactivated successfully.
Apr 20 16:19:00.738720 systemd-logind[1611]: Session 29 logged out. Waiting for processes to exit.
Apr 20 16:19:00.772386 systemd-logind[1611]: Removed session 29.
Apr 20 16:19:03.387408 containerd[1642]: time="2026-04-20T16:19:03.385687434Z" level=info msg="container event discarded" container=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b type=CONTAINER_CREATED_EVENT
Apr 20 16:19:05.646650 systemd[1]: Started sshd@28-4113-10.0.0.48:22-10.0.0.1:40876.service - OpenSSH per-connection server daemon (10.0.0.1:40876).
Apr 20 16:19:06.478442 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 40876 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:06.485098 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:06.588775 systemd-logind[1611]: New session '30' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:06.599665 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 20 16:19:07.405508 sshd[4443]: Connection closed by 10.0.0.1 port 40876
Apr 20 16:19:07.410677 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:07.491367 systemd[1]: sshd@28-4113-10.0.0.48:22-10.0.0.1:40876.service: Deactivated successfully.
Apr 20 16:19:07.510855 systemd[1]: session-30.scope: Deactivated successfully.
Apr 20 16:19:07.552421 systemd-logind[1611]: Session 30 logged out. Waiting for processes to exit.
Apr 20 16:19:07.612769 systemd-logind[1611]: Removed session 30.
Apr 20 16:19:10.416618 kubelet[2995]: E0420 16:19:10.415319 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:19:12.483809 systemd[1]: Started sshd@29-4114-10.0.0.48:22-10.0.0.1:40878.service - OpenSSH per-connection server daemon (10.0.0.1:40878).
Apr 20 16:19:13.805493 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 40878 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:13.843378 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:14.279436 systemd-logind[1611]: New session '31' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:14.347989 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 20 16:19:15.412151 sshd[4487]: Connection closed by 10.0.0.1 port 40878
Apr 20 16:19:15.437976 sshd-session[4483]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:15.599890 systemd[1]: sshd@29-4114-10.0.0.48:22-10.0.0.1:40878.service: Deactivated successfully.
Apr 20 16:19:15.714431 systemd[1]: session-31.scope: Deactivated successfully.
Apr 20 16:19:15.787548 systemd-logind[1611]: Session 31 logged out. Waiting for processes to exit.
Apr 20 16:19:15.885433 systemd-logind[1611]: Removed session 31.
Apr 20 16:19:16.384461 kubelet[2995]: E0420 16:19:16.382844 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:19:20.376838 kubelet[2995]: E0420 16:19:20.370681 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:19:20.567419 systemd[1]: Started sshd@30-9-10.0.0.48:22-10.0.0.1:39928.service - OpenSSH per-connection server daemon (10.0.0.1:39928).
Apr 20 16:19:20.952510 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 39928 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:20.957824 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:21.016712 systemd-logind[1611]: New session '32' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:21.047618 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 20 16:19:23.058600 sshd[4525]: Connection closed by 10.0.0.1 port 39928
Apr 20 16:19:23.081391 sshd-session[4521]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:23.696791 systemd[1]: sshd@30-9-10.0.0.48:22-10.0.0.1:39928.service: Deactivated successfully.
Apr 20 16:19:24.121672 systemd[1]: session-32.scope: Deactivated successfully.
Apr 20 16:19:24.144873 systemd[1]: session-32.scope: Consumed 1.228s CPU time, 17.8M memory peak.
Apr 20 16:19:24.217449 systemd-logind[1611]: Session 32 logged out. Waiting for processes to exit.
Apr 20 16:19:24.243766 systemd[1]: Started sshd@31-4115-10.0.0.48:22-10.0.0.1:39938.service - OpenSSH per-connection server daemon (10.0.0.1:39938).
Apr 20 16:19:24.265060 systemd-logind[1611]: Removed session 32.
Apr 20 16:19:25.250067 sshd[4559]: Accepted publickey for core from 10.0.0.1 port 39938 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:25.382737 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:25.827737 systemd-logind[1611]: New session '33' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:25.937863 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 20 16:19:29.565398 sshd[4565]: Connection closed by 10.0.0.1 port 39938
Apr 20 16:19:29.569077 sshd-session[4559]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:29.590955 systemd[1]: sshd@31-4115-10.0.0.48:22-10.0.0.1:39938.service: Deactivated successfully.
Apr 20 16:19:29.596001 systemd[1]: session-33.scope: Deactivated successfully.
Apr 20 16:19:29.609426 systemd[1]: session-33.scope: Consumed 2.378s CPU time, 31.7M memory peak.
Apr 20 16:19:29.655051 systemd-logind[1611]: Session 33 logged out. Waiting for processes to exit.
Apr 20 16:19:29.683755 systemd[1]: Started sshd@32-4116-10.0.0.48:22-10.0.0.1:53344.service - OpenSSH per-connection server daemon (10.0.0.1:53344).
Apr 20 16:19:29.761703 systemd-logind[1611]: Removed session 33.
Apr 20 16:19:29.973051 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:29.978729 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:29.994478 systemd-logind[1611]: New session '34' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:30.001659 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 20 16:19:43.412533 sshd[4602]: Connection closed by 10.0.0.1 port 53344
Apr 20 16:19:43.461811 sshd-session[4598]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:43.863063 systemd[1]: sshd@32-4116-10.0.0.48:22-10.0.0.1:53344.service: Deactivated successfully.
Apr 20 16:19:43.954051 systemd[1]: session-34.scope: Deactivated successfully.
Apr 20 16:19:43.955858 systemd[1]: session-34.scope: Consumed 6.546s CPU time, 36.8M memory peak.
Apr 20 16:19:44.026992 systemd-logind[1611]: Session 34 logged out. Waiting for processes to exit.
Apr 20 16:19:44.085572 systemd[1]: Started sshd@33-4117-10.0.0.48:22-10.0.0.1:33392.service - OpenSSH per-connection server daemon (10.0.0.1:33392).
Apr 20 16:19:44.128365 systemd-logind[1611]: Removed session 34.
Apr 20 16:19:44.758879 sshd[4665]: Accepted publickey for core from 10.0.0.1 port 33392 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:44.764137 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:44.919429 systemd-logind[1611]: New session '35' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:44.970283 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 20 16:19:51.234402 containerd[1642]: time="2026-04-20T16:19:51.187807308Z" level=info msg="container event discarded" container=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b type=CONTAINER_STARTED_EVENT
Apr 20 16:19:51.361094 sshd[4669]: Connection closed by 10.0.0.1 port 33392
Apr 20 16:19:51.368053 sshd-session[4665]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:51.506961 systemd[1]: Started sshd@34-4118-10.0.0.48:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656).
Apr 20 16:19:51.517235 systemd[1]: sshd@33-4117-10.0.0.48:22-10.0.0.1:33392.service: Deactivated successfully.
Apr 20 16:19:51.541651 systemd[1]: session-35.scope: Deactivated successfully.
Apr 20 16:19:51.542526 systemd[1]: session-35.scope: Consumed 3.024s CPU time, 31M memory peak.
Apr 20 16:19:51.555934 systemd-logind[1611]: Session 35 logged out. Waiting for processes to exit.
Apr 20 16:19:51.564094 systemd-logind[1611]: Removed session 35.
Apr 20 16:19:52.942120 sshd[4704]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:19:52.943744 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:19:53.387459 systemd-logind[1611]: New session '36' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:19:53.470769 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 20 16:19:59.242859 sshd[4725]: Connection closed by 10.0.0.1 port 40656
Apr 20 16:19:59.257885 sshd-session[4704]: pam_unix(sshd:session): session closed for user core
Apr 20 16:19:59.478714 systemd[1]: sshd@34-4118-10.0.0.48:22-10.0.0.1:40656.service: Deactivated successfully.
Apr 20 16:19:59.626932 systemd[1]: session-36.scope: Deactivated successfully.
Apr 20 16:19:59.636041 systemd[1]: session-36.scope: Consumed 2.183s CPU time, 16.8M memory peak.
Apr 20 16:19:59.642908 systemd-logind[1611]: Session 36 logged out. Waiting for processes to exit.
Apr 20 16:19:59.709487 systemd-logind[1611]: Removed session 36.
Apr 20 16:19:59.881801 kubelet[2995]: E0420 16:19:59.877257 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:20:02.444519 kubelet[2995]: E0420 16:20:02.443920 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:20:04.734729 systemd[1]: Started sshd@35-12291-10.0.0.48:22-10.0.0.1:39122.service - OpenSSH per-connection server daemon (10.0.0.1:39122).
Apr 20 16:20:06.537102 sshd[4768]: Accepted publickey for core from 10.0.0.1 port 39122 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:20:06.581700 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:20:06.679556 systemd-logind[1611]: New session '37' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:20:06.693673 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 20 16:20:11.818958 sshd[4786]: Connection closed by 10.0.0.1 port 39122
Apr 20 16:20:11.916361 sshd-session[4768]: pam_unix(sshd:session): session closed for user core
Apr 20 16:20:12.127096 systemd[1]: sshd@35-12291-10.0.0.48:22-10.0.0.1:39122.service: Deactivated successfully.
Apr 20 16:20:12.186040 systemd[1]: session-37.scope: Deactivated successfully.
Apr 20 16:20:12.187583 systemd[1]: session-37.scope: Consumed 1.981s CPU time, 16M memory peak.
Apr 20 16:20:12.291042 systemd-logind[1611]: Session 37 logged out. Waiting for processes to exit.
Apr 20 16:20:12.562370 kubelet[2995]: E0420 16:20:12.555800 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.149s"
Apr 20 16:20:12.677482 systemd-logind[1611]: Removed session 37.
Apr 20 16:20:17.372047 systemd[1]: Started sshd@36-12292-10.0.0.48:22-10.0.0.1:58938.service - OpenSSH per-connection server daemon (10.0.0.1:58938).
Apr 20 16:20:18.793541 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 58938 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:20:18.902079 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:20:19.023143 systemd-logind[1611]: New session '38' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:20:19.215494 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 20 16:20:21.443089 sshd[4829]: Connection closed by 10.0.0.1 port 58938
Apr 20 16:20:21.497654 sshd-session[4825]: pam_unix(sshd:session): session closed for user core
Apr 20 16:20:21.566012 systemd[1]: sshd@36-12292-10.0.0.48:22-10.0.0.1:58938.service: Deactivated successfully.
Apr 20 16:20:21.660758 systemd[1]: session-38.scope: Deactivated successfully.
Apr 20 16:20:21.696822 systemd[1]: session-38.scope: Consumed 1.099s CPU time, 18.4M memory peak.
Apr 20 16:20:21.906020 systemd-logind[1611]: Session 38 logged out. Waiting for processes to exit.
Apr 20 16:20:22.046696 systemd-logind[1611]: Removed session 38.
Apr 20 16:20:22.590640 containerd[1642]: time="2026-04-20T16:20:22.582999545Z" level=info msg="container event discarded" container=6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801 type=CONTAINER_STOPPED_EVENT
Apr 20 16:20:24.985492 containerd[1642]: time="2026-04-20T16:20:24.984347301Z" level=info msg="container event discarded" container=2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee type=CONTAINER_STOPPED_EVENT
Apr 20 16:20:27.308725 systemd[1]: Started sshd@37-4119-10.0.0.48:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366).
Apr 20 16:20:27.473860 kubelet[2995]: E0420 16:20:27.472975 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:20:28.610971 sshd[4874]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:20:28.839891 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:20:29.003786 systemd-logind[1611]: New session '39' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:20:29.040410 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 20 16:20:31.400546 kubelet[2995]: E0420 16:20:31.400111 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:20:32.157452 sshd[4890]: Connection closed by 10.0.0.1 port 54366
Apr 20 16:20:32.155329 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Apr 20 16:20:32.169500 systemd[1]: sshd@37-4119-10.0.0.48:22-10.0.0.1:54366.service: Deactivated successfully.
Apr 20 16:20:32.201799 systemd[1]: session-39.scope: Deactivated successfully.
Apr 20 16:20:32.203690 systemd[1]: session-39.scope: Consumed 1.348s CPU time, 16.2M memory peak.
Apr 20 16:20:32.241839 systemd-logind[1611]: Session 39 logged out. Waiting for processes to exit.
Apr 20 16:20:32.374735 systemd-logind[1611]: Removed session 39.
Apr 20 16:20:37.354669 systemd[1]: Started sshd@38-10-10.0.0.48:22-10.0.0.1:33770.service - OpenSSH per-connection server daemon (10.0.0.1:33770).
Apr 20 16:20:38.816955 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 33770 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:20:38.847953 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:20:38.994950 systemd-logind[1611]: New session '40' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:20:39.221972 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 20 16:20:40.293995 containerd[1642]: time="2026-04-20T16:20:40.276679083Z" level=info msg="container event discarded" container=508156ca7c4de162cfc24ca5317f4ca402e68de276766ea0bf5e1c50d94323a7 type=CONTAINER_DELETED_EVENT
Apr 20 16:20:41.752390 sshd[4949]: Connection closed by 10.0.0.1 port 33770
Apr 20 16:20:41.755784 sshd-session[4931]: pam_unix(sshd:session): session closed for user core
Apr 20 16:20:41.891402 systemd[1]: sshd@38-10-10.0.0.48:22-10.0.0.1:33770.service: Deactivated successfully.
Apr 20 16:20:41.940967 systemd[1]: session-40.scope: Deactivated successfully.
Apr 20 16:20:41.946705 systemd[1]: session-40.scope: Consumed 1.360s CPU time, 17.7M memory peak.
Apr 20 16:20:41.960755 systemd-logind[1611]: Session 40 logged out. Waiting for processes to exit.
Apr 20 16:20:41.963410 systemd-logind[1611]: Removed session 40.
Apr 20 16:20:43.520852 kubelet[2995]: E0420 16:20:43.520092 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:20:46.745679 kubelet[2995]: E0420 16:20:46.742325 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.348s"
Apr 20 16:20:46.835697 systemd[1]: Started sshd@39-8195-10.0.0.48:22-10.0.0.1:51688.service - OpenSSH per-connection server daemon (10.0.0.1:51688).
Apr 20 16:20:49.422461 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 51688 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:20:49.770902 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:20:50.105087 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 20 16:20:50.108353 systemd-logind[1611]: New session '41' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:20:54.566843 sshd[4993]: Connection closed by 10.0.0.1 port 51688
Apr 20 16:20:54.609268 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
Apr 20 16:20:54.875297 systemd[1]: sshd@39-8195-10.0.0.48:22-10.0.0.1:51688.service: Deactivated successfully.
Apr 20 16:20:55.033363 systemd[1]: session-41.scope: Deactivated successfully.
Apr 20 16:20:55.034022 systemd[1]: session-41.scope: Consumed 2.117s CPU time, 17M memory peak.
Apr 20 16:20:55.051128 systemd-logind[1611]: Session 41 logged out. Waiting for processes to exit.
Apr 20 16:20:55.158998 systemd-logind[1611]: Removed session 41.
Apr 20 16:20:58.845492 kubelet[2995]: E0420 16:20:58.837814 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.013s"
Apr 20 16:21:00.224599 systemd[1]: Started sshd@40-8196-10.0.0.48:22-10.0.0.1:56572.service - OpenSSH per-connection server daemon (10.0.0.1:56572).
Apr 20 16:21:02.546112 kubelet[2995]: E0420 16:21:02.542787 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.088s"
Apr 20 16:21:04.010129 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 56572 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:21:04.114469 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:21:05.591399 systemd-logind[1611]: New session '42' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:21:05.759319 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 20 16:21:07.120247 kubelet[2995]: E0420 16:21:07.109260 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.655s"
Apr 20 16:21:09.364655 kubelet[2995]: E0420 16:21:09.347260 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:21:12.270836 sshd[5053]: Connection closed by 10.0.0.1 port 56572
Apr 20 16:21:12.277410 sshd-session[5030]: pam_unix(sshd:session): session closed for user core
Apr 20 16:21:12.766980 systemd[1]: sshd@40-8196-10.0.0.48:22-10.0.0.1:56572.service: Deactivated successfully.
Apr 20 16:21:13.181288 systemd[1]: sshd@40-8196-10.0.0.48:22-10.0.0.1:56572.service: Consumed 1.099s CPU time, 4.1M memory peak.
Apr 20 16:21:13.341669 systemd[1]: session-42.scope: Deactivated successfully.
Apr 20 16:21:13.491780 systemd[1]: session-42.scope: Consumed 2.965s CPU time, 17.9M memory peak. Apr 20 16:21:13.720783 systemd-logind[1611]: Session 42 logged out. Waiting for processes to exit. Apr 20 16:21:13.929946 systemd-logind[1611]: Removed session 42. Apr 20 16:21:18.979958 systemd[1]: Started sshd@41-4120-10.0.0.48:22-10.0.0.1:47742.service - OpenSSH per-connection server daemon (10.0.0.1:47742). Apr 20 16:21:21.957263 containerd[1642]: time="2026-04-20T16:21:21.678639091Z" level=info msg="container event discarded" container=456d4bb384e753507dd13b372b2df359997e43521644fd88e72216de163d6273 type=CONTAINER_DELETED_EVENT Apr 20 16:21:33.056553 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 47742 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:21:33.971462 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:21:37.180641 containerd[1642]: time="2026-04-20T16:21:36.958058429Z" level=info msg="container event discarded" container=d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203 type=CONTAINER_CREATED_EVENT Apr 20 16:21:38.270364 containerd[1642]: time="2026-04-20T16:21:38.077036932Z" level=info msg="container event discarded" container=f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e type=CONTAINER_CREATED_EVENT Apr 20 16:21:38.393963 systemd-logind[1611]: New session '43' of user 'core' with class 'user' and type 'tty'. Apr 20 16:21:38.426320 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 20 16:21:38.705152 systemd[1]: cri-containerd-d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203.scope: Deactivated successfully. Apr 20 16:21:38.823892 systemd[1]: cri-containerd-d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203.scope: Consumed 1min 20.171s CPU time, 53.4M memory peak. 
Apr 20 16:21:40.412294 containerd[1642]: time="2026-04-20T16:21:40.411270130Z" level=info msg="received container exit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" Apr 20 16:21:40.626531 systemd[1]: cri-containerd-f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e.scope: Deactivated successfully. Apr 20 16:21:40.798611 systemd[1]: cri-containerd-f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e.scope: Consumed 37.744s CPU time, 25.2M memory peak. Apr 20 16:21:41.136112 containerd[1642]: time="2026-04-20T16:21:41.135809103Z" level=info msg="received container exit event container_id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" pid:4051 exit_status:1 exited_at:{seconds:1776702100 nanos:959600079}" Apr 20 16:21:41.208241 kubelet[2995]: E0420 16:21:41.181376 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.58s" Apr 20 16:21:43.300240 kubelet[2995]: E0420 16:21:43.299563 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.474s" Apr 20 16:21:43.304242 kubelet[2995]: E0420 16:21:43.302830 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:21:43.304242 kubelet[2995]: E0420 16:21:43.303297 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:21:43.305098 kubelet[2995]: E0420 16:21:43.304971 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:21:43.594559 kubelet[2995]: I0420 16:21:43.573696 2995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cvh2r" podStartSLOduration=513.50427595 podStartE2EDuration="14m3.510585372s" podCreationTimestamp="2026-04-20 16:07:40 +0000 UTC" firstStartedPulling="2026-04-20 16:08:30.583888663 +0000 UTC m=+69.846034070" lastFinishedPulling="2026-04-20 16:14:00.590198072 +0000 UTC m=+399.852343492" observedRunningTime="2026-04-20 16:18:42.644198845 +0000 UTC m=+681.906344261" watchObservedRunningTime="2026-04-20 16:21:43.510585372 +0000 UTC m=+862.772730788" Apr 20 16:21:51.357031 containerd[1642]: time="2026-04-20T16:21:50.869538098Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203 Apr 20 16:21:52.530568 containerd[1642]: time="2026-04-20T16:21:51.847497885Z" level=error msg="failed to delete task" error="context deadline exceeded" id=f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e Apr 20 16:21:53.008507 containerd[1642]: time="2026-04-20T16:21:52.585095083Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 20 16:21:52.530976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203-rootfs.mount: Deactivated successfully. 
Apr 20 16:21:53.575995 containerd[1642]: time="2026-04-20T16:21:52.783951527Z" level=error msg="failed to handle container TaskExit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:21:54.464946 containerd[1642]: time="2026-04-20T16:21:54.433590601Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 20 16:21:54.882902 containerd[1642]: time="2026-04-20T16:21:54.859805942Z" level=info msg="TaskExit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" Apr 20 16:21:55.195258 containerd[1642]: time="2026-04-20T16:21:54.937995947Z" level=error msg="failed to handle container TaskExit event container_id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" pid:4051 exit_status:1 exited_at:{seconds:1776702100 nanos:959600079}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:21:54.933924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e-rootfs.mount: Deactivated successfully. 
Apr 20 16:22:01.351838 kubelet[2995]: E0420 16:22:01.349101 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.936s" Apr 20 16:22:01.897628 sshd[5118]: Connection closed by 10.0.0.1 port 47742 Apr 20 16:22:01.947435 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Apr 20 16:22:02.536029 systemd[1]: sshd@41-4120-10.0.0.48:22-10.0.0.1:47742.service: Deactivated successfully. Apr 20 16:22:02.576110 systemd[1]: sshd@41-4120-10.0.0.48:22-10.0.0.1:47742.service: Consumed 4.968s CPU time, 4.2M memory peak. Apr 20 16:22:02.679880 systemd[1]: session-43.scope: Deactivated successfully. Apr 20 16:22:02.680461 systemd[1]: session-43.scope: Consumed 12.285s CPU time, 15.8M memory peak. Apr 20 16:22:03.089512 systemd-logind[1611]: Session 43 logged out. Waiting for processes to exit. Apr 20 16:22:03.297568 kubelet[2995]: E0420 16:22:03.294433 2995 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 16:22:03.568083 systemd-logind[1611]: Removed session 43. 
Apr 20 16:22:04.652909 containerd[1642]: time="2026-04-20T16:22:04.651760373Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 20 16:22:04.767979 containerd[1642]: time="2026-04-20T16:22:04.656749636Z" level=error msg="Failed to handle backOff event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214} for d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:22:04.767979 containerd[1642]: time="2026-04-20T16:22:04.689564630Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 20 16:22:04.795530 containerd[1642]: time="2026-04-20T16:22:04.779422714Z" level=info msg="TaskExit event container_id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" pid:4051 exit_status:1 exited_at:{seconds:1776702100 nanos:959600079}" Apr 20 16:22:05.530756 kubelet[2995]: E0420 16:22:05.529466 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:07.627001 kubelet[2995]: E0420 16:22:07.570130 2995 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 20 16:22:11.720042 systemd[1]: Started sshd@42-8197-10.0.0.48:22-10.0.0.1:42660.service - OpenSSH per-connection server daemon (10.0.0.1:42660). 
Apr 20 16:22:15.363692 containerd[1642]: time="2026-04-20T16:22:15.326921699Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 20 16:22:15.422438 containerd[1642]: time="2026-04-20T16:22:15.380064514Z" level=error msg="Failed to handle backOff event container_id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" pid:4051 exit_status:1 exited_at:{seconds:1776702100 nanos:959600079} for f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:22:15.636806 containerd[1642]: time="2026-04-20T16:22:15.620391340Z" level=info msg="TaskExit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" Apr 20 16:22:18.688147 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 42660 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:22:18.990979 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:22:20.051419 kubelet[2995]: E0420 16:22:20.047777 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.564s" Apr 20 16:22:21.887619 systemd-logind[1611]: New session '44' of user 'core' with class 'user' and type 'tty'. Apr 20 16:22:22.695927 systemd[1]: Started session-44.scope - Session 44 of User core. 
Apr 20 16:22:25.466639 containerd[1642]: time="2026-04-20T16:22:25.465709937Z" level=error msg="Failed to handle backOff event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214} for d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 16:22:25.532867 containerd[1642]: time="2026-04-20T16:22:25.532745849Z" level=info msg="TaskExit event container_id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" id:\"f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e\" pid:4051 exit_status:1 exited_at:{seconds:1776702100 nanos:959600079}" Apr 20 16:22:25.539244 containerd[1642]: time="2026-04-20T16:22:25.538552682Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 20 16:22:25.539244 containerd[1642]: time="2026-04-20T16:22:25.538593872Z" level=error msg="ttrpc: received message on inactive stream" stream=91 Apr 20 16:22:25.633268 kubelet[2995]: E0420 16:22:25.633077 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.937s" Apr 20 16:22:25.652358 kubelet[2995]: E0420 16:22:25.651933 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:25.659533 kubelet[2995]: E0420 16:22:25.654088 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:27.641698 sshd[5231]: Connection closed by 10.0.0.1 port 42660 Apr 20 16:22:27.643472 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Apr 20 
16:22:27.703952 systemd[1]: sshd@42-8197-10.0.0.48:22-10.0.0.1:42660.service: Deactivated successfully. Apr 20 16:22:27.706814 systemd[1]: sshd@42-8197-10.0.0.48:22-10.0.0.1:42660.service: Consumed 2.716s CPU time, 4.1M memory peak. Apr 20 16:22:27.753593 systemd[1]: session-44.scope: Deactivated successfully. Apr 20 16:22:27.755266 systemd[1]: session-44.scope: Consumed 2.923s CPU time, 18M memory peak. Apr 20 16:22:27.786040 systemd-logind[1611]: Session 44 logged out. Waiting for processes to exit. Apr 20 16:22:27.805455 systemd-logind[1611]: Removed session 44. Apr 20 16:22:28.468852 containerd[1642]: time="2026-04-20T16:22:28.457327548Z" level=info msg="container event discarded" container=d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203 type=CONTAINER_STARTED_EVENT Apr 20 16:22:29.065856 kubelet[2995]: I0420 16:22:29.059467 2995 scope.go:117] "RemoveContainer" containerID="2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee" Apr 20 16:22:29.415881 kubelet[2995]: I0420 16:22:29.324773 2995 scope.go:117] "RemoveContainer" containerID="f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e" Apr 20 16:22:29.660670 kubelet[2995]: E0420 16:22:29.457939 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:30.058694 containerd[1642]: time="2026-04-20T16:22:30.057894869Z" level=info msg="TaskExit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" Apr 20 16:22:33.007155 containerd[1642]: time="2026-04-20T16:22:32.971929010Z" level=info msg="container event discarded" container=f987633b97f74095524e604507867d747a6eb15bcc29b8d4ed774c4bc22f113e type=CONTAINER_STARTED_EVENT Apr 20 16:22:33.520876 systemd[1]: Started 
sshd@43-4121-10.0.0.48:22-10.0.0.1:36446.service - OpenSSH per-connection server daemon (10.0.0.1:36446). Apr 20 16:22:33.556851 containerd[1642]: time="2026-04-20T16:22:33.554087550Z" level=info msg="RemoveContainer for \"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\"" Apr 20 16:22:34.068793 kubelet[2995]: E0420 16:22:34.066966 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.427s" Apr 20 16:22:34.543689 containerd[1642]: time="2026-04-20T16:22:34.506716835Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for container name:\"kube-scheduler\" attempt:3" Apr 20 16:22:35.378018 containerd[1642]: time="2026-04-20T16:22:35.376072547Z" level=info msg="RemoveContainer for \"2cd2d8ccad6a0144eb176f6d4306ced991d40e9d9a29f2a4ef85b3fd4d429eee\" returns successfully" Apr 20 16:22:40.033889 containerd[1642]: time="2026-04-20T16:22:39.983987569Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203 Apr 20 16:22:40.247094 containerd[1642]: time="2026-04-20T16:22:40.235769777Z" level=error msg="Failed to handle backOff event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214} for d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:22:40.554868 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 36446 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:22:40.775846 containerd[1642]: time="2026-04-20T16:22:40.761089933Z" level=error msg="ttrpc: received message on inactive stream" stream=107 Apr 
20 16:22:40.778970 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:22:42.292651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098556195.mount: Deactivated successfully. Apr 20 16:22:42.568808 containerd[1642]: time="2026-04-20T16:22:42.437016608Z" level=info msg="Container 0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:22:42.952295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146133920.mount: Deactivated successfully. Apr 20 16:22:43.239491 systemd-logind[1611]: New session '45' of user 'core' with class 'user' and type 'tty'. Apr 20 16:22:43.680422 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 20 16:22:43.948067 containerd[1642]: time="2026-04-20T16:22:43.942899225Z" level=info msg="StopContainer for \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" with timeout 30 (s)" Apr 20 16:22:44.106866 containerd[1642]: time="2026-04-20T16:22:44.103000654Z" level=info msg="Stop container \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" with signal terminated" Apr 20 16:22:44.185909 containerd[1642]: time="2026-04-20T16:22:44.185596282Z" level=info msg="CreateContainer within sandbox \"bcb1213fb762cb01d7cad99cfe50564de5d80901cf15182f86eb69aab9596910\" for name:\"kube-scheduler\" attempt:3 returns container id \"0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a\"" Apr 20 16:22:44.234323 containerd[1642]: time="2026-04-20T16:22:44.233300872Z" level=info msg="StartContainer for \"0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a\"" Apr 20 16:22:44.245806 containerd[1642]: time="2026-04-20T16:22:44.245743437Z" level=info msg="connecting to shim 0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a" address="unix:///run/containerd/s/fea015718c3b780d3f475ca07cc94aee7b32240562ed54fe7ca53764a3c05bf2" protocol=ttrpc 
version=3 Apr 20 16:22:44.261920 kubelet[2995]: E0420 16:22:44.258954 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.191s" Apr 20 16:22:44.789795 systemd[1]: Started cri-containerd-0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a.scope - libcontainer container 0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a. Apr 20 16:22:45.385443 sshd[5330]: Connection closed by 10.0.0.1 port 36446 Apr 20 16:22:45.400099 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Apr 20 16:22:45.437258 systemd[1]: sshd@43-4121-10.0.0.48:22-10.0.0.1:36446.service: Deactivated successfully. Apr 20 16:22:45.438377 systemd[1]: sshd@43-4121-10.0.0.48:22-10.0.0.1:36446.service: Consumed 3.066s CPU time, 4.3M memory peak. Apr 20 16:22:45.440821 systemd[1]: session-45.scope: Deactivated successfully. Apr 20 16:22:45.441753 systemd[1]: session-45.scope: Consumed 1.188s CPU time, 18.3M memory peak. Apr 20 16:22:45.444990 systemd-logind[1611]: Session 45 logged out. Waiting for processes to exit. Apr 20 16:22:45.486412 systemd-logind[1611]: Removed session 45. 
Apr 20 16:22:45.995389 containerd[1642]: time="2026-04-20T16:22:45.995082662Z" level=info msg="StartContainer for \"0aced24249ae034a4e7411d69de3afedf9f39fcb6c9311f29cc1d75162cf722a\" returns successfully" Apr 20 16:22:46.847151 kubelet[2995]: E0420 16:22:46.845773 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:48.144805 kubelet[2995]: E0420 16:22:48.140901 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:48.945494 containerd[1642]: time="2026-04-20T16:22:48.944828754Z" level=info msg="TaskExit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" Apr 20 16:22:49.414077 kubelet[2995]: E0420 16:22:49.403562 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:51.298689 kubelet[2995]: E0420 16:22:51.292654 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:22:52.080776 systemd[1]: Started sshd@44-4122-10.0.0.48:22-10.0.0.1:55198.service - OpenSSH per-connection server daemon (10.0.0.1:55198). 
Apr 20 16:22:58.999480 containerd[1642]: time="2026-04-20T16:22:58.983879353Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203 Apr 20 16:22:59.036683 containerd[1642]: time="2026-04-20T16:22:59.000977480Z" level=error msg="Failed to handle backOff event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214} for d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 16:22:59.287068 containerd[1642]: time="2026-04-20T16:22:59.220118730Z" level=error msg="ttrpc: received message on inactive stream" stream=127 Apr 20 16:22:59.602835 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 55198 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:22:59.686774 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:23:00.488863 kubelet[2995]: E0420 16:23:00.488740 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.12s" Apr 20 16:23:00.540632 systemd-logind[1611]: New session '46' of user 'core' with class 'user' and type 'tty'. Apr 20 16:23:00.669947 systemd[1]: Started session-46.scope - Session 46 of User core. 
Apr 20 16:23:01.238855 kubelet[2995]: E0420 16:23:01.214120 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:23:02.544014 kubelet[2995]: E0420 16:23:02.543933 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.136s" Apr 20 16:23:02.865951 kubelet[2995]: E0420 16:23:02.862464 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:23:03.389068 kubelet[2995]: E0420 16:23:03.382828 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:23:03.441734 sshd[5433]: Connection closed by 10.0.0.1 port 55198 Apr 20 16:23:03.469925 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Apr 20 16:23:03.616142 systemd[1]: sshd@44-4122-10.0.0.48:22-10.0.0.1:55198.service: Deactivated successfully. Apr 20 16:23:03.628131 systemd[1]: sshd@44-4122-10.0.0.48:22-10.0.0.1:55198.service: Consumed 2.868s CPU time, 4.1M memory peak. Apr 20 16:23:03.659500 systemd[1]: session-46.scope: Deactivated successfully. Apr 20 16:23:03.659878 systemd[1]: session-46.scope: Consumed 1.815s CPU time, 17.7M memory peak. Apr 20 16:23:03.806715 systemd-logind[1611]: Session 46 logged out. Waiting for processes to exit. Apr 20 16:23:04.000838 systemd-logind[1611]: Removed session 46. 
Apr 20 16:23:04.486733 kubelet[2995]: E0420 16:23:04.481902 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:23:10.507084 systemd[1]: Started sshd@45-12293-10.0.0.48:22-10.0.0.1:57532.service - OpenSSH per-connection server daemon (10.0.0.1:57532). Apr 20 16:23:13.000028 kubelet[2995]: E0420 16:23:12.997672 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.59s" Apr 20 16:23:14.552988 containerd[1642]: time="2026-04-20T16:23:14.546924631Z" level=info msg="Kill container \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\"" Apr 20 16:23:15.277758 kubelet[2995]: E0420 16:23:15.276793 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.251s" Apr 20 16:23:16.305758 containerd[1642]: time="2026-04-20T16:23:16.168766400Z" level=info msg="TaskExit event container_id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" id:\"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" pid:4042 exit_status:1 exited_at:{seconds:1776702099 nanos:270297214}" Apr 20 16:23:16.386549 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 57532 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:23:16.559369 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:23:19.000990 systemd-logind[1611]: New session '47' of user 'core' with class 'user' and type 'tty'. Apr 20 16:23:19.547103 systemd[1]: Started session-47.scope - Session 47 of User core. 
Apr 20 16:23:27.217618 kubelet[2995]: E0420 16:23:27.216932 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.937s" Apr 20 16:23:27.257865 kubelet[2995]: E0420 16:23:27.219722 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:23:27.283528 containerd[1642]: time="2026-04-20T16:23:27.223739425Z" level=info msg="StopContainer for \"d6ce1c97595a028ca86fd46a278e031e5563d277a0c1061f4394f7594a2eb203\" returns successfully" Apr 20 16:23:27.329092 kubelet[2995]: E0420 16:23:27.265656 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 16:23:27.376045 sshd[5502]: Connection closed by 10.0.0.1 port 57532 Apr 20 16:23:27.401565 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Apr 20 16:23:27.699013 systemd[1]: sshd@45-12293-10.0.0.48:22-10.0.0.1:57532.service: Deactivated successfully. Apr 20 16:23:27.762829 systemd[1]: sshd@45-12293-10.0.0.48:22-10.0.0.1:57532.service: Consumed 2.052s CPU time, 4.3M memory peak. Apr 20 16:23:28.064975 systemd[1]: session-47.scope: Deactivated successfully. Apr 20 16:23:28.097326 systemd[1]: session-47.scope: Consumed 4.767s CPU time, 17.7M memory peak. Apr 20 16:23:28.225830 systemd-logind[1611]: Session 47 logged out. Waiting for processes to exit. Apr 20 16:23:28.294781 containerd[1642]: time="2026-04-20T16:23:28.288925278Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for container name:\"kube-controller-manager\" attempt:4" Apr 20 16:23:28.375862 systemd-logind[1611]: Removed session 47. 
Apr 20 16:23:29.067842 kubelet[2995]: I0420 16:23:29.067086 2995 scope.go:117] "RemoveContainer" containerID="6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801" Apr 20 16:23:29.106058 containerd[1642]: time="2026-04-20T16:23:29.102654675Z" level=info msg="Container 10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5: CDI devices from CRI Config.CDIDevices: []" Apr 20 16:23:30.192980 containerd[1642]: time="2026-04-20T16:23:30.192625689Z" level=info msg="RemoveContainer for \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\"" Apr 20 16:23:30.671282 kubelet[2995]: E0420 16:23:30.629219 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.262s" Apr 20 16:23:30.756220 containerd[1642]: time="2026-04-20T16:23:30.671372061Z" level=info msg="CreateContainer within sandbox \"a4f8e106b2c659c398fec8bc48b50dd31c841c50583cc946254869d16aa74adb\" for name:\"kube-controller-manager\" attempt:4 returns container id \"10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5\"" Apr 20 16:23:30.971120 containerd[1642]: time="2026-04-20T16:23:30.957144642Z" level=info msg="StartContainer for \"10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5\"" Apr 20 16:23:31.811408 containerd[1642]: time="2026-04-20T16:23:31.778097213Z" level=info msg="container event discarded" container=b82a4cc1f3c560519077acef144c0785a7e58f1ad8314d93d1b941024092e78b type=CONTAINER_STOPPED_EVENT Apr 20 16:23:32.445700 containerd[1642]: time="2026-04-20T16:23:32.442058620Z" level=info msg="connecting to shim 10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5" address="unix:///run/containerd/s/954064e80f6724cd9b3557ef5e8f14a291671be0b664e8d783df675780d63b2d" protocol=ttrpc version=3 Apr 20 16:23:32.889112 containerd[1642]: time="2026-04-20T16:23:32.880132325Z" level=info msg="RemoveContainer for \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\" 
returns successfully" Apr 20 16:23:33.000386 containerd[1642]: time="2026-04-20T16:23:32.880273628Z" level=error msg="ContainerStatus for \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\": not found" Apr 20 16:23:33.149587 kubelet[2995]: E0420 16:23:33.092756 2995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801\": not found" containerID="6ecf2e6532aba84094a3abed497a20b6cf9a99af4368a143b9ea2427a3970801" Apr 20 16:23:33.166494 systemd[1]: Started sshd@46-4123-10.0.0.48:22-10.0.0.1:60128.service - OpenSSH per-connection server daemon (10.0.0.1:60128). Apr 20 16:23:33.272409 kubelet[2995]: E0420 16:23:33.268971 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.889s" Apr 20 16:23:33.785896 systemd[1]: Started cri-containerd-10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5.scope - libcontainer container 10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5. Apr 20 16:23:33.861761 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 60128 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 16:23:33.971897 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 16:23:34.230775 containerd[1642]: time="2026-04-20T16:23:34.229436401Z" level=info msg="container event discarded" container=4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692 type=CONTAINER_CREATED_EVENT Apr 20 16:23:34.268213 systemd-logind[1611]: New session '48' of user 'core' with class 'user' and type 'tty'. Apr 20 16:23:34.356810 systemd[1]: Started session-48.scope - Session 48 of User core. 
Apr 20 16:23:36.261782 containerd[1642]: time="2026-04-20T16:23:36.261553122Z" level=info msg="StartContainer for \"10f88e2bee7140ba19c08c40ef8d5112a0203c44216d608b7120be2c1bfcebc5\" returns successfully"
Apr 20 16:23:37.556778 sshd[5604]: Connection closed by 10.0.0.1 port 60128
Apr 20 16:23:37.583394 sshd-session[5566]: pam_unix(sshd:session): session closed for user core
Apr 20 16:23:37.632732 containerd[1642]: time="2026-04-20T16:23:37.632026530Z" level=info msg="container event discarded" container=4bcd4b2bae3758d816ccb55244ee3017bd70e7c45b8b64115214cf2a3ea68692 type=CONTAINER_STARTED_EVENT
Apr 20 16:23:37.671830 systemd[1]: sshd@46-10.0.0.48:22-10.0.0.1:60128.service: Deactivated successfully.
Apr 20 16:23:37.811335 systemd[1]: session-48.scope: Deactivated successfully.
Apr 20 16:23:37.820685 systemd[1]: session-48.scope: Consumed 2.438s CPU time, 17.9M memory peak.
Apr 20 16:23:37.835674 systemd-logind[1611]: Session 48 logged out. Waiting for processes to exit.
Apr 20 16:23:37.926443 systemd-logind[1611]: Removed session 48.
Apr 20 16:23:38.363707 kubelet[2995]: E0420 16:23:38.362667 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:23:39.387112 kubelet[2995]: E0420 16:23:39.386376 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:23:40.879937 kubelet[2995]: E0420 16:23:40.879685 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:23:42.673921 systemd[1]: Started sshd@47-10.0.0.48:22-10.0.0.1:52726.service - OpenSSH per-connection server daemon (10.0.0.1:52726).
Apr 20 16:23:44.519512 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 52726 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:23:44.690055 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:23:45.660550 systemd-logind[1611]: New session '49' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:23:45.894874 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 20 16:23:47.062017 kubelet[2995]: E0420 16:23:47.060150 2995 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.653s"
Apr 20 16:23:48.250669 sshd[5672]: Connection closed by 10.0.0.1 port 52726
Apr 20 16:23:48.281038 sshd-session[5651]: pam_unix(sshd:session): session closed for user core
Apr 20 16:23:48.466821 systemd[1]: sshd@47-10.0.0.48:22-10.0.0.1:52726.service: Deactivated successfully.
Apr 20 16:23:48.571936 systemd[1]: session-49.scope: Deactivated successfully.
Apr 20 16:23:48.577940 systemd[1]: session-49.scope: Consumed 1.490s CPU time, 19.2M memory peak.
Apr 20 16:23:48.722550 systemd-logind[1611]: Session 49 logged out. Waiting for processes to exit.
Apr 20 16:23:48.737402 systemd-logind[1611]: Removed session 49.
Apr 20 16:23:51.057707 kubelet[2995]: E0420 16:23:51.056921 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:23:53.528390 systemd[1]: Started sshd@48-10.0.0.48:22-10.0.0.1:54956.service - OpenSSH per-connection server daemon (10.0.0.1:54956).
Apr 20 16:23:53.768515 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 54956 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:23:53.778102 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:23:53.994278 systemd-logind[1611]: New session '50' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:23:54.064856 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 20 16:23:55.305788 sshd[5720]: Connection closed by 10.0.0.1 port 54956
Apr 20 16:23:55.307375 sshd-session[5714]: pam_unix(sshd:session): session closed for user core
Apr 20 16:23:55.318843 systemd[1]: sshd@48-10.0.0.48:22-10.0.0.1:54956.service: Deactivated successfully.
Apr 20 16:23:55.323095 systemd[1]: session-50.scope: Deactivated successfully.
Apr 20 16:23:55.326548 systemd[1]: session-50.scope: Consumed 1.126s CPU time, 17.7M memory peak.
Apr 20 16:23:55.331002 systemd-logind[1611]: Session 50 logged out. Waiting for processes to exit.
Apr 20 16:23:55.332258 systemd-logind[1611]: Removed session 50.
Apr 20 16:24:00.440775 systemd[1]: Started sshd@49-10.0.0.48:22-10.0.0.1:40954.service - OpenSSH per-connection server daemon (10.0.0.1:40954).
Apr 20 16:24:00.628630 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 40954 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:00.645033 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:00.726585 systemd-logind[1611]: New session '51' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:00.755574 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 20 16:24:02.156277 sshd[5760]: Connection closed by 10.0.0.1 port 40954
Apr 20 16:24:02.175673 sshd-session[5753]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:02.237602 systemd[1]: sshd@49-10.0.0.48:22-10.0.0.1:40954.service: Deactivated successfully.
Apr 20 16:24:02.246890 systemd[1]: session-51.scope: Deactivated successfully.
Apr 20 16:24:02.258927 systemd[1]: session-51.scope: Consumed 1.045s CPU time, 17.8M memory peak.
Apr 20 16:24:02.430105 systemd-logind[1611]: Session 51 logged out. Waiting for processes to exit.
Apr 20 16:24:02.499611 systemd-logind[1611]: Removed session 51.
Apr 20 16:24:07.401878 systemd[1]: Started sshd@50-10.0.0.48:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150).
Apr 20 16:24:08.361662 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:08.374495 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:08.581427 systemd-logind[1611]: New session '52' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:08.671569 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 20 16:24:10.901563 sshd[5807]: Connection closed by 10.0.0.1 port 39150
Apr 20 16:24:10.904743 sshd-session[5803]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:10.960644 systemd[1]: sshd@50-10.0.0.48:22-10.0.0.1:39150.service: Deactivated successfully.
Apr 20 16:24:11.193031 systemd[1]: session-52.scope: Deactivated successfully.
Apr 20 16:24:11.216572 systemd[1]: session-52.scope: Consumed 1.774s CPU time, 19.5M memory peak.
Apr 20 16:24:11.232058 systemd-logind[1611]: Session 52 logged out. Waiting for processes to exit.
Apr 20 16:24:11.239034 systemd-logind[1611]: Removed session 52.
Apr 20 16:24:15.420089 kubelet[2995]: E0420 16:24:15.419575 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:24:15.986543 systemd[1]: Started sshd@51-10.0.0.48:22-10.0.0.1:50612.service - OpenSSH per-connection server daemon (10.0.0.1:50612).
Apr 20 16:24:16.646707 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 50612 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:16.654444 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:16.750383 systemd-logind[1611]: New session '53' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:16.786879 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 20 16:24:17.513977 sshd[5859]: Connection closed by 10.0.0.1 port 50612
Apr 20 16:24:17.515488 sshd-session[5853]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:17.537822 systemd[1]: sshd@51-10.0.0.48:22-10.0.0.1:50612.service: Deactivated successfully.
Apr 20 16:24:17.544959 systemd[1]: session-53.scope: Deactivated successfully.
Apr 20 16:24:17.546975 systemd-logind[1611]: Session 53 logged out. Waiting for processes to exit.
Apr 20 16:24:17.561478 systemd-logind[1611]: Removed session 53.
Apr 20 16:24:22.440803 kubelet[2995]: E0420 16:24:22.426586 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:24:23.020503 systemd[1]: Started sshd@52-10.0.0.48:22-10.0.0.1:50624.service - OpenSSH per-connection server daemon (10.0.0.1:50624).
Apr 20 16:24:23.874282 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 50624 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:23.897015 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:23.973680 systemd-logind[1611]: New session '54' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:24.002466 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 20 16:24:25.571336 sshd[5904]: Connection closed by 10.0.0.1 port 50624
Apr 20 16:24:25.578786 sshd-session[5900]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:25.597395 systemd[1]: sshd@52-10.0.0.48:22-10.0.0.1:50624.service: Deactivated successfully.
Apr 20 16:24:25.630555 systemd[1]: session-54.scope: Deactivated successfully.
Apr 20 16:24:25.631116 systemd[1]: session-54.scope: Consumed 1.010s CPU time, 17.5M memory peak.
Apr 20 16:24:25.659746 systemd-logind[1611]: Session 54 logged out. Waiting for processes to exit.
Apr 20 16:24:25.665743 systemd-logind[1611]: Removed session 54.
Apr 20 16:24:30.691922 systemd[1]: Started sshd@53-10.0.0.48:22-10.0.0.1:37920.service - OpenSSH per-connection server daemon (10.0.0.1:37920).
Apr 20 16:24:31.198585 sshd[5939]: Accepted publickey for core from 10.0.0.1 port 37920 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:31.203299 sshd-session[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:31.299074 systemd-logind[1611]: New session '55' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:31.323477 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 20 16:24:31.368869 kubelet[2995]: E0420 16:24:31.368799 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:24:32.177879 sshd[5945]: Connection closed by 10.0.0.1 port 37920
Apr 20 16:24:32.190838 sshd-session[5939]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:32.266005 systemd[1]: sshd@53-10.0.0.48:22-10.0.0.1:37920.service: Deactivated successfully.
Apr 20 16:24:32.272106 systemd[1]: session-55.scope: Deactivated successfully.
Apr 20 16:24:32.320333 systemd-logind[1611]: Session 55 logged out. Waiting for processes to exit.
Apr 20 16:24:32.322838 systemd-logind[1611]: Removed session 55.
Apr 20 16:24:37.389285 systemd[1]: Started sshd@54-10.0.0.48:22-10.0.0.1:54890.service - OpenSSH per-connection server daemon (10.0.0.1:54890).
Apr 20 16:24:37.886940 sshd[5981]: Accepted publickey for core from 10.0.0.1 port 54890 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:37.888597 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:38.010981 systemd-logind[1611]: New session '56' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:38.031636 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 20 16:24:39.164398 sshd[6005]: Connection closed by 10.0.0.1 port 54890
Apr 20 16:24:39.167275 sshd-session[5981]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:39.206752 systemd[1]: sshd@54-10.0.0.48:22-10.0.0.1:54890.service: Deactivated successfully.
Apr 20 16:24:39.247958 systemd[1]: session-56.scope: Deactivated successfully.
Apr 20 16:24:39.293646 systemd-logind[1611]: Session 56 logged out. Waiting for processes to exit.
Apr 20 16:24:39.375630 systemd-logind[1611]: Removed session 56.
Apr 20 16:24:41.404721 kubelet[2995]: E0420 16:24:41.404418 2995 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 16:24:44.268659 systemd[1]: Started sshd@55-10.0.0.48:22-10.0.0.1:54900.service - OpenSSH per-connection server daemon (10.0.0.1:54900).
Apr 20 16:24:44.898640 sshd[6039]: Accepted publickey for core from 10.0.0.1 port 54900 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:44.935641 sshd-session[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:45.196030 systemd-logind[1611]: New session '57' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:45.329417 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 20 16:24:46.238848 sshd[6043]: Connection closed by 10.0.0.1 port 54900
Apr 20 16:24:46.244779 sshd-session[6039]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:46.288082 systemd[1]: sshd@55-10.0.0.48:22-10.0.0.1:54900.service: Deactivated successfully.
Apr 20 16:24:46.310578 systemd[1]: session-57.scope: Deactivated successfully.
Apr 20 16:24:46.313369 systemd-logind[1611]: Session 57 logged out. Waiting for processes to exit.
Apr 20 16:24:46.317619 systemd-logind[1611]: Removed session 57.
Apr 20 16:24:51.543753 systemd[1]: Started sshd@56-10.0.0.48:22-10.0.0.1:58692.service - OpenSSH per-connection server daemon (10.0.0.1:58692).
Apr 20 16:24:52.399511 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 58692 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:24:52.436713 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:24:52.659288 systemd-logind[1611]: New session '58' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:24:52.675374 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 20 16:24:54.285366 sshd[6080]: Connection closed by 10.0.0.1 port 58692
Apr 20 16:24:54.306800 sshd-session[6076]: pam_unix(sshd:session): session closed for user core
Apr 20 16:24:54.361110 systemd[1]: sshd@56-10.0.0.48:22-10.0.0.1:58692.service: Deactivated successfully.
Apr 20 16:24:54.436918 systemd[1]: session-58.scope: Deactivated successfully.
Apr 20 16:24:54.439441 systemd[1]: session-58.scope: Consumed 1.101s CPU time, 18M memory peak.
Apr 20 16:24:54.460966 systemd-logind[1611]: Session 58 logged out. Waiting for processes to exit.
Apr 20 16:24:54.624531 systemd-logind[1611]: Removed session 58.
Apr 20 16:24:59.535297 systemd[1]: Started sshd@57-10.0.0.48:22-10.0.0.1:50340.service - OpenSSH per-connection server daemon (10.0.0.1:50340).
Apr 20 16:25:00.709863 sshd[6134]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:25:00.778103 sshd-session[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:25:00.905122 systemd-logind[1611]: New session '59' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:25:01.005769 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 20 16:25:02.882860 sshd[6140]: Connection closed by 10.0.0.1 port 50340
Apr 20 16:25:02.923689 sshd-session[6134]: pam_unix(sshd:session): session closed for user core
Apr 20 16:25:03.085827 systemd[1]: sshd@57-10.0.0.48:22-10.0.0.1:50340.service: Deactivated successfully.
Apr 20 16:25:03.110008 systemd[1]: session-59.scope: Deactivated successfully.
Apr 20 16:25:03.111040 systemd[1]: session-59.scope: Consumed 1.156s CPU time, 16M memory peak.
Apr 20 16:25:03.160402 systemd-logind[1611]: Session 59 logged out. Waiting for processes to exit.
Apr 20 16:25:03.204354 systemd-logind[1611]: Removed session 59.
Apr 20 16:25:08.143544 systemd[1]: Started sshd@58-10.0.0.48:22-10.0.0.1:58162.service - OpenSSH per-connection server daemon (10.0.0.1:58162).
Apr 20 16:25:08.923562 sshd[6175]: Accepted publickey for core from 10.0.0.1 port 58162 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 16:25:08.946956 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 16:25:09.002498 systemd-logind[1611]: New session '60' of user 'core' with class 'user' and type 'tty'.
Apr 20 16:25:09.029339 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 20 16:25:09.636098 sshd[6180]: Connection closed by 10.0.0.1 port 58162
Apr 20 16:25:09.638912 sshd-session[6175]: pam_unix(sshd:session): session closed for user core
Apr 20 16:25:09.691022 systemd[1]: sshd@58-10.0.0.48:22-10.0.0.1:58162.service: Deactivated successfully.
Apr 20 16:25:09.773261 systemd[1]: session-60.scope: Deactivated successfully.
Apr 20 16:25:09.823403 systemd-logind[1611]: Session 60 logged out. Waiting for processes to exit.
Apr 20 16:25:09.871715 systemd-logind[1611]: Removed session 60.