Apr 28 00:47:51.327337 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:13:07 -00 2026
Apr 28 00:47:51.327393 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f23531cb6330205ea1df0485b9a03deeb8b8f7eb9c40767cd8b5a2bc5be69458
Apr 28 00:47:51.327408 kernel: BIOS-provided physical RAM map:
Apr 28 00:47:51.327448 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 28 00:47:51.327458 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 28 00:47:51.327465 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 28 00:47:51.327489 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 28 00:47:51.327529 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 28 00:47:51.327537 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 28 00:47:51.327547 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 28 00:47:51.327557 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 28 00:47:51.329016 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 28 00:47:51.331738 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 28 00:47:51.333248 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 28 00:47:51.333309 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 28 00:47:51.333320 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 28 00:47:51.333333 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 28 00:47:51.333345 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 28 00:47:51.333358 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 28 00:47:51.333369 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 28 00:47:51.333377 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 28 00:47:51.333385 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 28 00:47:51.333393 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 28 00:47:51.333400 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 00:47:51.333440 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 28 00:47:51.333449 kernel: NX (Execute Disable) protection: active
Apr 28 00:47:51.333460 kernel: APIC: Static calls initialized
Apr 28 00:47:51.333470 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 28 00:47:51.333481 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 28 00:47:51.333491 kernel: extended physical RAM map:
Apr 28 00:47:51.333501 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 28 00:47:51.333509 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 28 00:47:51.333520 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 28 00:47:51.333531 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 28 00:47:51.333541 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 28 00:47:51.333549 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 28 00:47:51.333557 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 28 00:47:51.333564 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 28 00:47:51.333575 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 28 00:47:51.333586 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 28 00:47:51.333600 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 28 00:47:51.334579 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 28 00:47:51.334630 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 28 00:47:51.334640 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 28 00:47:51.334656 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 28 00:47:51.334665 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 28 00:47:51.334675 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 28 00:47:51.334686 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 28 00:47:51.334695 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 28 00:47:51.334704 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 28 00:47:51.334713 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 28 00:47:51.334722 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 28 00:47:51.334731 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 28 00:47:51.334743 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 28 00:47:51.334753 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 28 00:47:51.334762 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 28 00:47:51.334770 kernel: efi: EFI v2.7 by EDK II
Apr 28 00:47:51.334818 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 28 00:47:51.334828 kernel: random: crng init done
Apr 28 00:47:51.334838 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 28 00:47:51.334847 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 28 00:47:51.334856 kernel: secureboot: Secure boot disabled
Apr 28 00:47:51.334868 kernel: SMBIOS 2.8 present.
Apr 28 00:47:51.334877 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 28 00:47:51.334885 kernel: DMI: Memory slots populated: 1/1
Apr 28 00:47:51.334897 kernel: Hypervisor detected: KVM
Apr 28 00:47:51.334906 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 28 00:47:51.334915 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 28 00:47:51.334924 kernel: kvm-clock: using sched offset of 10806536735 cycles
Apr 28 00:47:51.334935 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 28 00:47:51.334945 kernel: tsc: Detected 2793.438 MHz processor
Apr 28 00:47:51.334954 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 28 00:47:51.334967 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 28 00:47:51.334978 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 28 00:47:51.334990 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 28 00:47:51.335003 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 28 00:47:51.335013 kernel: Using GB pages for direct mapping
Apr 28 00:47:51.335022 kernel: ACPI: Early table checksum verification disabled
Apr 28 00:47:51.335031 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 28 00:47:51.335040 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 28 00:47:51.335049 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:47:51.335064 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:47:51.335077 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 28 00:47:51.335087 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:47:51.335098 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:47:51.335109 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:47:51.335119 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:47:51.335130 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 28 00:47:51.335146 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 28 00:47:51.335157 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 28 00:47:51.335168 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 28 00:47:51.335177 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 28 00:47:51.335186 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 28 00:47:51.335195 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 28 00:47:51.335205 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 28 00:47:51.335216 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 28 00:47:51.335225 kernel: No NUMA configuration found
Apr 28 00:47:51.335234 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 28 00:47:51.335243 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 28 00:47:51.335253 kernel: Zone ranges:
Apr 28 00:47:51.335263 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 28 00:47:51.335273 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 28 00:47:51.335284 kernel: Normal empty
Apr 28 00:47:51.335293 kernel: Device empty
Apr 28 00:47:51.336314 kernel: Movable zone start for each node
Apr 28 00:47:51.336555 kernel: Early memory node ranges
Apr 28 00:47:51.336569 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 28 00:47:51.336579 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 28 00:47:51.336589 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 28 00:47:51.336599 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 28 00:47:51.336618 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 28 00:47:51.336627 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 28 00:47:51.336636 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 28 00:47:51.336644 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 28 00:47:51.336653 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 28 00:47:51.336663 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 00:47:51.337389 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 28 00:47:51.337572 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 28 00:47:51.337590 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 28 00:47:51.337603 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 28 00:47:51.337613 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 28 00:47:51.337624 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 28 00:47:51.337634 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 28 00:47:51.337644 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 28 00:47:51.337654 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 28 00:47:51.337664 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 28 00:47:51.337677 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 28 00:47:51.337687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 28 00:47:51.337697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 28 00:47:51.337708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 28 00:47:51.337720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 28 00:47:51.337730 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 28 00:47:51.337741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 28 00:47:51.337751 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 28 00:47:51.337762 kernel: TSC deadline timer available
Apr 28 00:47:51.337773 kernel: CPU topo: Max. logical packages: 1
Apr 28 00:47:51.337783 kernel: CPU topo: Max. logical dies: 1
Apr 28 00:47:51.337812 kernel: CPU topo: Max. dies per package: 1
Apr 28 00:47:51.337822 kernel: CPU topo: Max. threads per core: 1
Apr 28 00:47:51.337832 kernel: CPU topo: Num. cores per package: 4
Apr 28 00:47:51.337842 kernel: CPU topo: Num. threads per package: 4
Apr 28 00:47:51.337852 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 28 00:47:51.337863 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 28 00:47:51.337874 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 28 00:47:51.337884 kernel: kvm-guest: setup PV sched yield
Apr 28 00:47:51.337897 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 28 00:47:51.337906 kernel: Booting paravirtualized kernel on KVM
Apr 28 00:47:51.337917 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 28 00:47:51.337927 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 28 00:47:51.337937 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 28 00:47:51.339014 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 28 00:47:51.339067 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 28 00:47:51.339154 kernel: kvm-guest: PV spinlocks enabled
Apr 28 00:47:51.339165 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 28 00:47:51.339179 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f23531cb6330205ea1df0485b9a03deeb8b8f7eb9c40767cd8b5a2bc5be69458
Apr 28 00:47:51.339190 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 00:47:51.339201 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 00:47:51.339212 kernel: Fallback order for Node 0: 0
Apr 28 00:47:51.339225 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 28 00:47:51.339236 kernel: Policy zone: DMA32
Apr 28 00:47:51.339245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 00:47:51.339255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 28 00:47:51.339265 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 28 00:47:51.339275 kernel: ftrace: allocated 158 pages with 5 groups
Apr 28 00:47:51.339286 kernel: Dynamic Preempt: voluntary
Apr 28 00:47:51.339298 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 00:47:51.339309 kernel: rcu: RCU event tracing is enabled.
Apr 28 00:47:51.339320 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 28 00:47:51.339329 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 00:47:51.339339 kernel: Rude variant of Tasks RCU enabled.
Apr 28 00:47:51.339349 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 00:47:51.339360 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 00:47:51.339368 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 28 00:47:51.339381 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:47:51.359825 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:47:51.360162 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 28 00:47:51.360175 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 28 00:47:51.360186 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 00:47:51.360195 kernel: Console: colour dummy device 80x25
Apr 28 00:47:51.360312 kernel: printk: legacy console [ttyS0] enabled
Apr 28 00:47:51.362026 kernel: ACPI: Core revision 20240827
Apr 28 00:47:51.362040 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 28 00:47:51.362052 kernel: APIC: Switch to symmetric I/O mode setup
Apr 28 00:47:51.362063 kernel: x2apic enabled
Apr 28 00:47:51.362073 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 28 00:47:51.362084 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 28 00:47:51.362095 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 28 00:47:51.362154 kernel: kvm-guest: setup PV IPIs
Apr 28 00:47:51.362164 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 28 00:47:51.362176 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 00:47:51.362187 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 28 00:47:51.362198 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 28 00:47:51.362207 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 28 00:47:51.362218 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 28 00:47:51.362230 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 28 00:47:51.362240 kernel: Spectre V2 : Mitigation: Retpolines
Apr 28 00:47:51.362251 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 28 00:47:51.363302 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 28 00:47:51.363356 kernel: RETBleed: Vulnerable
Apr 28 00:47:51.363368 kernel: Speculative Store Bypass: Vulnerable
Apr 28 00:47:51.363378 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 28 00:47:51.365467 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 28 00:47:51.365482 kernel: active return thunk: its_return_thunk
Apr 28 00:47:51.365494 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 28 00:47:51.365506 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 28 00:47:51.365517 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 28 00:47:51.365528 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 28 00:47:51.365539 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 28 00:47:51.365555 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 28 00:47:51.365566 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 28 00:47:51.365576 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 28 00:47:51.365586 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 28 00:47:51.365597 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 28 00:47:51.365607 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 28 00:47:51.365618 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 28 00:47:51.365631 kernel: Freeing SMP alternatives memory: 32K
Apr 28 00:47:51.365642 kernel: pid_max: default: 32768 minimum: 301
Apr 28 00:47:51.365652 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 28 00:47:51.365663 kernel: landlock: Up and running.
Apr 28 00:47:51.365673 kernel: SELinux: Initializing.
Apr 28 00:47:51.365684 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:47:51.365695 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:47:51.365708 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 28 00:47:51.365717 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 28 00:47:51.365727 kernel: signal: max sigframe size: 3632
Apr 28 00:47:51.365737 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 00:47:51.365749 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 00:47:51.365760 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 28 00:47:51.365770 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 28 00:47:51.365781 kernel: smp: Bringing up secondary CPUs ...
Apr 28 00:47:51.365809 kernel: smpboot: x86: Booting SMP configuration:
Apr 28 00:47:51.365820 kernel: .... node #0, CPUs: #1 #2 #3
Apr 28 00:47:51.365830 kernel: smp: Brought up 1 node, 4 CPUs
Apr 28 00:47:51.365854 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 28 00:47:51.365867 kernel: Memory: 2399272K/2565800K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 160636K reserved, 0K cma-reserved)
Apr 28 00:47:51.365877 kernel: devtmpfs: initialized
Apr 28 00:47:51.365889 kernel: x86/mm: Memory block size: 128MB
Apr 28 00:47:51.365899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 28 00:47:51.365909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 28 00:47:51.365919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 28 00:47:51.365929 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 28 00:47:51.365939 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 28 00:47:51.365949 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 28 00:47:51.365961 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 00:47:51.365971 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 28 00:47:51.365981 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 00:47:51.365991 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 00:47:51.366001 kernel: audit: initializing netlink subsys (disabled)
Apr 28 00:47:51.366011 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 00:47:51.366021 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 28 00:47:51.366032 kernel: audit: type=2000 audit(1777337240.563:1): state=initialized audit_enabled=0 res=1
Apr 28 00:47:51.366042 kernel: cpuidle: using governor menu
Apr 28 00:47:51.366051 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 28 00:47:51.366061 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 00:47:51.366070 kernel: dca service started, version 1.12.1
Apr 28 00:47:51.366080 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 28 00:47:51.366089 kernel: PCI: Using configuration type 1 for base access
Apr 28 00:47:51.366099 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 28 00:47:51.366111 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 00:47:51.366120 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 00:47:51.366130 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 00:47:51.366140 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 00:47:51.366149 kernel: ACPI: Added _OSI(Module Device)
Apr 28 00:47:51.366159 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 00:47:51.366169 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 00:47:51.366181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 00:47:51.366190 kernel: ACPI: Interpreter enabled
Apr 28 00:47:51.366199 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 28 00:47:51.366209 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 28 00:47:51.366218 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 28 00:47:51.366228 kernel: PCI: Using E820 reservations for host bridge windows
Apr 28 00:47:51.366237 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 28 00:47:51.366249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 00:47:51.372243 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 00:47:51.373914 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 28 00:47:51.374094 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 28 00:47:51.374110 kernel: PCI host bridge to bus 0000:00
Apr 28 00:47:51.378301 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 28 00:47:51.387574 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 28 00:47:51.387816 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 28 00:47:51.387957 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 28 00:47:51.388091 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 28 00:47:51.389337 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 28 00:47:51.389549 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 00:47:51.389719 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 28 00:47:51.413765 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 28 00:47:51.781945 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 28 00:47:51.782568 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 28 00:47:51.782746 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 28 00:47:51.784177 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 28 00:47:51.786903 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 28 00:47:51.791890 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 28 00:47:51.792068 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 28 00:47:51.796959 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 28 00:47:51.805180 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 28 00:47:51.809563 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 28 00:47:51.809747 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 28 00:47:51.813772 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 28 00:47:51.814778 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 28 00:47:51.814995 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 28 00:47:51.815144 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 28 00:47:51.815290 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 28 00:47:51.816695 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 28 00:47:51.817698 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 28 00:47:51.817831 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 28 00:47:51.818963 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 28 00:47:51.819107 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 28 00:47:51.819254 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 28 00:47:51.820329 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 28 00:47:51.820643 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 28 00:47:51.820661 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 28 00:47:51.820696 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 28 00:47:51.820707 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 28 00:47:51.820718 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 28 00:47:51.820729 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 28 00:47:51.820739 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 28 00:47:51.820750 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 28 00:47:51.820760 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 28 00:47:51.820785 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 28 00:47:51.820811 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 28 00:47:51.820822 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 28 00:47:51.820832 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 28 00:47:51.820842 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 28 00:47:51.820853 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 28 00:47:51.820863 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 28 00:47:51.820887 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 28 00:47:51.820898 kernel: iommu: Default domain type: Translated
Apr 28 00:47:51.820908 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 28 00:47:51.820919 kernel: efivars: Registered efivars operations
Apr 28 00:47:51.820930 kernel: PCI: Using ACPI for IRQ routing
Apr 28 00:47:51.820940 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 28 00:47:51.820951 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 28 00:47:51.820964 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 28 00:47:51.820974 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 28 00:47:51.820983 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 28 00:47:51.820992 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 28 00:47:51.821002 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 28 00:47:51.821012 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 28 00:47:51.821022 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 28 00:47:51.821200 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 28 00:47:51.821298 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 28 00:47:51.822114 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 28 00:47:51.822127 kernel: vgaarb: loaded
Apr 28 00:47:51.822134 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 28 00:47:51.822140 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 28 00:47:51.822147 kernel: clocksource: Switched to clocksource kvm-clock
Apr 28 00:47:51.822153 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 00:47:51.822160 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 00:47:51.822181 kernel: pnp: PnP ACPI init
Apr 28 00:47:51.822308 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 28 00:47:51.822318 kernel: pnp: PnP ACPI: found 6 devices
Apr 28 00:47:51.822325 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 28 00:47:51.822400 kernel: NET: Registered PF_INET protocol family
Apr 28 00:47:51.822451 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 00:47:51.822462 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 00:47:51.822485 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 00:47:51.822497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 00:47:51.822508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 28 00:47:51.822519 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 28 00:47:51.822540 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 00:47:51.822551 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 28 00:47:51.822562 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 28 00:47:51.822585 kernel: NET: Registered PF_XDP protocol family
Apr 28 00:47:51.822771 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 28 00:47:51.824572 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 28 00:47:51.827471 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 28 00:47:51.827626 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 28 00:47:51.827758 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 28 00:47:51.838327 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 28 00:47:51.838550 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 28 00:47:51.838690 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 28 00:47:51.838705 kernel: PCI: CLS 0 bytes, default 64
Apr 28 00:47:51.838732 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 28 00:47:51.838743 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 28 00:47:51.838754 kernel: Initialise system trusted keyrings
Apr 28 00:47:51.838782 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 28 00:47:51.838810 kernel: Key type asymmetric registered
Apr 28 00:47:51.838821 kernel: Asymmetric key parser 'x509' registered
Apr 28 00:47:51.838831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 28 00:47:51.838841 kernel: io scheduler mq-deadline registered
Apr 28 00:47:51.838864 kernel: io scheduler kyber registered
Apr 28 00:47:51.838874 kernel: io scheduler bfq registered
Apr 28 00:47:51.838893 kernel: hrtimer: interrupt took 26679747 ns
Apr 28 00:47:51.838904 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 28 00:47:51.838915 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 28 00:47:51.838926 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 28 00:47:51.838936 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 28 00:47:51.839606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 28 00:47:51.839618 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 28 00:47:51.839630 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 28 00:47:51.839641 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 28 00:47:51.839652 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 28 00:47:51.840008 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 28 00:47:51.840049 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 28 00:47:51.840182 kernel: rtc_cmos 00:04: registered as rtc0
Apr 28 00:47:51.840311 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T00:47:28 UTC (1777337248)
Apr 28 00:47:51.840475 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 28 00:47:51.840489 kernel: intel_pstate: CPU model not supported
Apr 28 00:47:51.840500 kernel: efifb: probing for efifb
Apr 28 00:47:51.840511 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 28 00:47:51.840539 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 28 00:47:51.840550 kernel: efifb: scrolling: redraw
Apr 28 00:47:51.840560 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 28 00:47:51.840572 kernel: Console: switching to colour frame buffer device 160x50
Apr 28 00:47:51.840582 kernel: fb0: EFI VGA frame buffer device
Apr 28 00:47:51.840605 kernel: pstore: Using crash dump compression: deflate
Apr 28 00:47:51.840616 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 28 00:47:51.840639 kernel: NET: Registered PF_INET6 protocol family
Apr 28 00:47:51.840648 kernel: Segment Routing with IPv6
Apr 28 00:47:51.840658 kernel: In-situ OAM (IOAM) with IPv6
Apr 28 00:47:51.840669 kernel: NET: Registered PF_PACKET protocol family
Apr 28 00:47:51.840679 kernel: Key type dns_resolver registered
Apr 28 00:47:51.840689 kernel: IPI shorthand broadcast: enabled
Apr 28 00:47:51.840699 kernel: sched_clock: Marking stable (7538021094, 903628989)->(9664001510, -1222351427)
Apr 28 00:47:51.840722 kernel: registered taskstats version 1
Apr 28 00:47:51.840733 kernel: Loading compiled-in X.509 certificates
Apr 28 00:47:51.840744 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing
key for 6.12.81-flatcar: d347ed0a99522a2efcf66a259b61bb14bbbefd0c' Apr 28 00:47:51.840755 kernel: Demotion targets for Node 0: null Apr 28 00:47:51.843211 kernel: Key type .fscrypt registered Apr 28 00:47:51.843228 kernel: Key type fscrypt-provisioning registered Apr 28 00:47:51.843239 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 00:47:51.843250 kernel: ima: Allocated hash algorithm: sha1 Apr 28 00:47:51.843349 kernel: ima: No architecture policies found Apr 28 00:47:51.843361 kernel: clk: Disabling unused clocks Apr 28 00:47:51.843370 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 28 00:47:51.843381 kernel: Write protecting the kernel read-only data: 47104k Apr 28 00:47:51.843390 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 28 00:47:51.843401 kernel: Run /init as init process Apr 28 00:47:51.843452 kernel: with arguments: Apr 28 00:47:51.843478 kernel: /init Apr 28 00:47:51.843489 kernel: with environment: Apr 28 00:47:51.843500 kernel: HOME=/ Apr 28 00:47:51.843521 kernel: TERM=linux Apr 28 00:47:51.843532 kernel: SCSI subsystem initialized Apr 28 00:47:51.843544 kernel: libata version 3.00 loaded. 
Apr 28 00:47:51.843741 kernel: ahci 0000:00:1f.2: version 3.0
Apr 28 00:47:51.843780 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 28 00:47:51.843989 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 28 00:47:51.844156 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 28 00:47:51.844379 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 28 00:47:51.844818 kernel: scsi host0: ahci
Apr 28 00:47:51.845035 kernel: scsi host1: ahci
Apr 28 00:47:51.845222 kernel: scsi host2: ahci
Apr 28 00:47:51.848693 kernel: scsi host3: ahci
Apr 28 00:47:51.848920 kernel: scsi host4: ahci
Apr 28 00:47:51.850001 kernel: scsi host5: ahci
Apr 28 00:47:51.850025 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Apr 28 00:47:51.850937 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Apr 28 00:47:51.850951 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Apr 28 00:47:51.850963 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Apr 28 00:47:51.850974 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Apr 28 00:47:51.850986 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Apr 28 00:47:51.850998 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 28 00:47:51.851026 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 28 00:47:51.851055 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 28 00:47:51.851067 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 28 00:47:51.851079 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 28 00:47:51.851092 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 28 00:47:51.851104 kernel: ata3.00: LPM support broken, forcing max_power
Apr 28 00:47:51.851117 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 28 00:47:51.851130 kernel: ata3.00: applying bridge limits
Apr 28 00:47:51.851143 kernel: ata3.00: LPM support broken, forcing max_power
Apr 28 00:47:51.851153 kernel: ata3.00: configured for UDMA/100
Apr 28 00:47:51.851521 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 28 00:47:51.851689 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 28 00:47:51.851856 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Apr 28 00:47:51.851873 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 28 00:47:51.851906 kernel: GPT:16515071 != 27000831
Apr 28 00:47:51.851916 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 28 00:47:51.851926 kernel: GPT:16515071 != 27000831
Apr 28 00:47:51.851937 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 28 00:47:51.852119 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 28 00:47:51.852135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 00:47:51.852145 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 28 00:47:51.852306 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 28 00:47:51.852318 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 28 00:47:51.852325 kernel: device-mapper: uevent: version 1.0.3
Apr 28 00:47:51.852332 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 28 00:47:51.852339 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Apr 28 00:47:51.852346 kernel: raid6: avx512x4 gen() 16949 MB/s
Apr 28 00:47:51.852352 kernel: raid6: avx512x2 gen() 18492 MB/s
Apr 28 00:47:51.852384 kernel: raid6: avx512x1 gen() 11436 MB/s
Apr 28 00:47:51.852390 kernel: raid6: avx2x4 gen() 4006 MB/s
Apr 28 00:47:51.852397 kernel: raid6: avx2x2 gen() 1478 MB/s
Apr 28 00:47:51.852404 kernel: raid6: avx2x1 gen() 9265 MB/s
Apr 28 00:47:51.852443 kernel: raid6: using algorithm avx512x2 gen() 18492 MB/s
Apr 28 00:47:51.852450 kernel: raid6: .... xor() 10361 MB/s, rmw enabled
Apr 28 00:47:51.852456 kernel: raid6: using avx512x2 recovery algorithm
Apr 28 00:47:51.852473 kernel: xor: automatically using best checksumming function avx
Apr 28 00:47:51.852479 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 28 00:47:51.852486 kernel: BTRFS: device fsid ceb5d4c4-0ad9-4dbe-97f4-74392863c761 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (182)
Apr 28 00:47:51.852493 kernel: BTRFS info (device dm-0): first mount of filesystem ceb5d4c4-0ad9-4dbe-97f4-74392863c761
Apr 28 00:47:51.852500 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 28 00:47:51.852506 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 28 00:47:51.852513 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 28 00:47:51.852536 kernel: loop: module loaded
Apr 28 00:47:51.852546 kernel: loop0: detected capacity change from 0 to 106960
Apr 28 00:47:51.852557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 00:47:51.852570 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored
Apr 28 00:47:51.852597 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored
Apr 28 00:47:51.852608 systemd[1]: Successfully made /usr/ read-only.
Apr 28 00:47:51.852633 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 28 00:47:51.852644 systemd[1]: Detected virtualization kvm.
Apr 28 00:47:51.852654 systemd[1]: Detected architecture x86-64.
Apr 28 00:47:51.852666 systemd[1]: Running in initrd.
Apr 28 00:47:51.852676 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 28 00:47:51.852687 systemd[1]: No hostname configured, using default hostname.
Apr 28 00:47:51.853534 systemd[1]: Hostname set to .
Apr 28 00:47:51.853554 systemd[1]: Queued start job for default target initrd.target.
Apr 28 00:47:51.853568 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Apr 28 00:47:51.853583 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:47:51.853597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:47:51.853646 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 00:47:51.853663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:47:51.853675 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 00:47:51.853687 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 00:47:51.853699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:47:51.853711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:47:51.853722 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 28 00:47:51.853733 systemd[1]: Reached target paths.target - Path Units.
Apr 28 00:47:51.853747 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 00:47:51.853758 systemd[1]: Reached target swap.target - Swaps.
Apr 28 00:47:51.853770 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 00:47:51.853781 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:47:51.853811 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:47:51.853823 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 28 00:47:51.853834 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 00:47:51.853861 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 28 00:47:51.853871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:47:51.853882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:47:51.853894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:47:51.853905 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 00:47:51.853917 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 28 00:47:51.853942 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 00:47:51.853952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 00:47:51.853963 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 00:47:51.853973 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 28 00:47:51.853994 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 00:47:51.854017 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 00:47:51.854029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 00:47:51.854041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:47:51.854052 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 00:47:51.854064 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:47:51.854098 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 00:47:51.854109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 00:47:51.854189 systemd-journald[321]: Collecting audit messages is enabled.
Apr 28 00:47:51.854217 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 00:47:51.854231 kernel: Bridge firewalling registered
Apr 28 00:47:51.854243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:47:51.854256 systemd-journald[321]: Journal started
Apr 28 00:47:51.854278 systemd-journald[321]: Runtime Journal (/run/log/journal/c7857b6b37174cc8bc34ad7b260d3221) is 6M, max 48M, 42M free.
Apr 28 00:47:51.843567 systemd-modules-load[324]: Inserted module 'br_netfilter'
Apr 28 00:47:51.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:51.875246 kernel: audit: type=1130 audit(1777337271.863:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:51.880608 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 00:47:51.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:51.902130 kernel: audit: type=1130 audit(1777337271.891:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:51.922666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:47:51.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:51.968962 kernel: audit: type=1130 audit(1777337271.929:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.032491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:47:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.083286 kernel: audit: type=1130 audit(1777337272.072:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.534401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 00:47:52.626126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 00:47:52.655699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 00:47:52.670950 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 00:47:52.733316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:47:52.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.792001 systemd-tmpfiles[343]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 28 00:47:52.799774 kernel: audit: type=1130 audit(1777337272.737:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.839478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:47:52.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.871051 kernel: audit: type=1130 audit(1777337272.862:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.886670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:47:52.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.939259 kernel: audit: type=1130 audit(1777337272.913:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:52.965000 audit: BPF prog-id=5 op=LOAD
Apr 28 00:47:52.974350 kernel: audit: type=1334 audit(1777337272.965:9): prog-id=5 op=LOAD
Apr 28 00:47:52.977102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 00:47:53.006519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:47:53.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:53.036287 kernel: audit: type=1130 audit(1777337273.020:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:53.088981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 00:47:53.383446 dracut-cmdline[361]: dracut-109
Apr 28 00:47:53.437302 dracut-cmdline[361]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f23531cb6330205ea1df0485b9a03deeb8b8f7eb9c40767cd8b5a2bc5be69458
Apr 28 00:47:53.998784 systemd-resolved[360]: Positive Trust Anchors:
Apr 28 00:47:54.000459 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 00:47:54.000506 systemd-resolved[360]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 28 00:47:54.001316 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 00:47:54.419952 systemd-resolved[360]: Defaulting to hostname 'linux'.
Apr 28 00:47:54.466801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 00:47:54.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:54.500537 kernel: audit: type=1130 audit(1777337274.473:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:47:54.497588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:47:58.656031 kernel: Loading iSCSI transport class v2.0-870.
Apr 28 00:47:59.150360 kernel: iscsi: registered transport (tcp)
Apr 28 00:47:59.535377 kernel: iscsi: registered transport (qla4xxx)
Apr 28 00:47:59.540570 kernel: QLogic iSCSI HBA Driver
Apr 28 00:48:01.211214 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 28 00:48:01.892314 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 28 00:48:01.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:01.966910 kernel: audit: type=1130 audit(1777337281.954:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:01.971679 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 00:48:07.020062 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 28 00:48:07.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:07.131534 kernel: audit: type=1130 audit(1777337287.113:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:07.385212 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 28 00:48:07.430446 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 28 00:48:07.903067 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 00:48:07.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:07.922000 audit: BPF prog-id=6 op=LOAD
Apr 28 00:48:07.927000 audit: BPF prog-id=7 op=LOAD
Apr 28 00:48:07.930585 kernel: audit: type=1130 audit(1777337287.907:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:07.930614 kernel: audit: type=1334 audit(1777337287.922:15): prog-id=6 op=LOAD
Apr 28 00:48:07.930627 kernel: audit: type=1334 audit(1777337287.927:16): prog-id=7 op=LOAD
Apr 28 00:48:07.939830 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:48:08.354018 systemd-udevd[583]: Using default interface naming scheme 'v258'.
Apr 28 00:48:09.007257 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:48:09.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:09.036521 kernel: audit: type=1130 audit(1777337289.024:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:09.109933 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 28 00:48:09.372822 dracut-pre-trigger[660]: rd.md=0: removing MD RAID activation
Apr 28 00:48:09.650032 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 00:48:09.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:09.676387 kernel: audit: type=1130 audit(1777337289.649:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:09.680000 audit: BPF prog-id=8 op=LOAD
Apr 28 00:48:09.701199 kernel: audit: type=1334 audit(1777337289.680:19): prog-id=8 op=LOAD
Apr 28 00:48:09.732178 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 00:48:09.937649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 00:48:09.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:09.981055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 00:48:09.989540 kernel: audit: type=1130 audit(1777337289.978:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:10.409814 systemd-networkd[729]: lo: Link UP
Apr 28 00:48:10.409840 systemd-networkd[729]: lo: Gained carrier
Apr 28 00:48:10.423143 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 00:48:10.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:10.469066 systemd[1]: Reached target network.target - Network.
Apr 28 00:48:10.493850 kernel: audit: type=1130 audit(1777337290.439:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:18.540703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:48:18.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:18.620281 kernel: audit: type=1130 audit(1777337298.592:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:18.624588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 28 00:48:19.496350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 28 00:48:19.673895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 28 00:48:19.725210 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 28 00:48:19.967439 kernel: cryptd: max_cpu_qlen set to 1000
Apr 28 00:48:20.034521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 00:48:20.087406 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 28 00:48:20.132946 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 28 00:48:20.168135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:48:20.178617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:48:20.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.186331 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:48:20.212200 kernel: audit: type=1131 audit(1777337300.185:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.212231 kernel: AES CTR mode by8 optimization enabled
Apr 28 00:48:20.214005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:48:20.257314 disk-uuid[782]: Primary Header is updated.
Apr 28 00:48:20.257314 disk-uuid[782]: Secondary Entries is updated.
Apr 28 00:48:20.257314 disk-uuid[782]: Secondary Header is updated.
Apr 28 00:48:20.371876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:48:20.372076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:48:20.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.410842 kernel: audit: type=1130 audit(1777337300.385:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.410870 kernel: audit: type=1131 audit(1777337300.385:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.421171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:48:20.438797 systemd-networkd[729]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 28 00:48:20.438809 systemd-networkd[729]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 00:48:20.496555 systemd-networkd[729]: eth0: Link UP
Apr 28 00:48:20.497800 systemd-networkd[729]: eth0: Gained carrier
Apr 28 00:48:20.497828 systemd-networkd[729]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 28 00:48:20.550547 systemd-networkd[729]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 00:48:20.584220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:48:20.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:20.596705 kernel: audit: type=1130 audit(1777337300.587:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:21.274265 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 28 00:48:21.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:21.320327 kernel: audit: type=1130 audit(1777337301.292:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:21.467900 disk-uuid[806]: Warning: The kernel is still using the old partition table.
Apr 28 00:48:21.467900 disk-uuid[806]: The new table will be used at the next reboot or after you
Apr 28 00:48:21.467900 disk-uuid[806]: run partprobe(8) or kpartx(8)
Apr 28 00:48:21.467900 disk-uuid[806]: The operation has completed successfully.
Apr 28 00:48:21.773892 systemd-networkd[729]: eth0: Gained IPv6LL
Apr 28 00:48:21.785833 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 28 00:48:21.790402 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 28 00:48:21.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:21.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:21.905824 kernel: audit: type=1130 audit(1777337301.839:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:21.906058 kernel: audit: type=1131 audit(1777337301.894:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:48:22.318849 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 00:48:22.387896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:48:22.530043 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:48:22.582331 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:48:22.697825 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 00:48:23.169684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:48:23.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:23.216072 kernel: audit: type=1130 audit(1777337303.207:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:23.267929 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898) Apr 28 00:48:23.284583 kernel: BTRFS info (device vda6): first mount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:48:23.284962 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:48:23.369380 kernel: BTRFS info (device vda6): turning on async discard Apr 28 00:48:23.369738 kernel: BTRFS info (device vda6): enabling free space tree Apr 28 00:48:23.510667 kernel: BTRFS info (device vda6): last unmount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:48:23.568582 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:48:23.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:23.592600 kernel: audit: type=1130 audit(1777337303.580:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:48:23.975384 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 28 00:48:28.273188 ignition[919]: Ignition 2.24.0 Apr 28 00:48:28.273215 ignition[919]: Stage: fetch-offline Apr 28 00:48:28.273303 ignition[919]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:48:28.273311 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:48:28.279967 ignition[919]: parsed url from cmdline: "" Apr 28 00:48:28.279977 ignition[919]: no config URL provided Apr 28 00:48:28.283895 ignition[919]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:48:28.287206 ignition[919]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:48:28.287493 ignition[919]: op(1): [started] loading QEMU firmware config module Apr 28 00:48:28.287499 ignition[919]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 00:48:28.773762 ignition[919]: op(1): [finished] loading QEMU firmware config module Apr 28 00:48:28.916770 ignition[919]: parsing config with SHA512: f5cd36abe7e81eecd1f344b7a1dfee30d1816b4b1cd257df993e0e9b8ff10f611976366f33fd190c8951a10993273175c39d7dc02fec0aa0d8c82bb842beadd7 Apr 28 00:48:29.277604 unknown[919]: fetched base config from "system" Apr 28 00:48:29.277625 unknown[919]: fetched user config from "qemu" Apr 28 00:48:29.314777 ignition[919]: fetch-offline: fetch-offline passed Apr 28 00:48:29.324094 ignition[919]: Ignition finished successfully Apr 28 00:48:29.438743 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:48:29.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:29.465253 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 28 00:48:29.500083 kernel: audit: type=1130 audit(1777337309.462:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:29.619531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 00:48:33.319509 ignition[929]: Ignition 2.24.0 Apr 28 00:48:33.319545 ignition[929]: Stage: kargs Apr 28 00:48:33.320015 ignition[929]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:48:33.320022 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:48:33.353943 ignition[929]: kargs: kargs passed Apr 28 00:48:33.354143 ignition[929]: Ignition finished successfully Apr 28 00:48:33.478095 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 00:48:33.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:33.538848 kernel: audit: type=1130 audit(1777337313.503:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:33.758884 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 28 00:48:36.440369 ignition[937]: Ignition 2.24.0 Apr 28 00:48:36.486867 ignition[937]: Stage: disks Apr 28 00:48:36.538171 ignition[937]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:48:36.574371 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:48:36.659940 ignition[937]: disks: disks passed Apr 28 00:48:36.668786 ignition[937]: Ignition finished successfully Apr 28 00:48:36.695008 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Apr 28 00:48:36.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:36.769253 kernel: audit: type=1130 audit(1777337316.732:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:36.807887 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 00:48:36.840831 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:48:36.868360 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:48:36.912701 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:48:37.003614 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:48:37.317307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 00:48:40.007360 systemd-fsck[948]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 28 00:48:40.191400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 00:48:40.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:40.233843 kernel: audit: type=1130 audit(1777337320.210:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:48:40.344572 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 00:48:43.302022 kernel: EXT4-fs (vda9): mounted filesystem f2ab3bab-5f4f-4f13-9e1d-ae27d704ff83 r/w with ordered data mode. Quota mode: none. 
Apr 28 00:48:43.489275 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 28 00:48:43.794024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 00:48:44.290371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:48:44.410797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 00:48:44.423855 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 28 00:48:44.424013 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 00:48:44.424055 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:48:44.565654 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (958) Apr 28 00:48:44.598035 kernel: BTRFS info (device vda6): first mount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:48:44.598382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:48:44.703055 kernel: BTRFS info (device vda6): turning on async discard Apr 28 00:48:44.704060 kernel: BTRFS info (device vda6): enabling free space tree Apr 28 00:48:45.237706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 00:48:45.378052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 00:48:45.818012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 28 00:49:02.561240 kernel: loop1: detected capacity change from 0 to 43472 Apr 28 00:49:02.623031 kernel: loop1: p1 p2 p3 Apr 28 00:49:03.308304 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:49:03.314345 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:49:03.315142 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:49:03.318181 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:49:03.318479 systemd-confext[1048]: device-mapper: reload ioctl on bd01924efa64fd6fbc49c41573ab9db4b6e97144b422d98aceb773101478822c-verity (253:1) failed: Invalid argument Apr 28 00:49:03.505622 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:49:07.465236 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 28 00:49:07.898262 kernel: loop2: detected capacity change from 0 to 43472 Apr 28 00:49:07.917104 kernel: loop2: p1 p2 p3 Apr 28 00:49:08.363799 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:49:08.364067 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:49:08.364083 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:49:08.368629 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:49:08.366924 (sd-merge)[1062]: device-mapper: reload ioctl on bd01924efa64fd6fbc49c41573ab9db4b6e97144b422d98aceb773101478822c-verity (253:1) failed: Invalid argument Apr 28 00:49:08.381659 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:49:11.097989 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 28 00:49:11.173945 (sd-merge)[1062]: Using extensions '00-flatcar-default.raw'. Apr 28 00:49:11.319730 (sd-merge)[1062]: Merged extensions into '/sysroot/etc'. 
Apr 28 00:49:11.673819 initrd-setup-root[1070]: /etc 00-flatcar-default Tue 2026-04-28 00:47:52 UTC Apr 28 00:49:11.788247 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 00:49:11.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:49:11.848993 kernel: audit: type=1130 audit(1777337351.801:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:49:11.888016 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 00:49:11.995092 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 28 00:49:12.237785 kernel: BTRFS info (device vda6): last unmount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:49:12.301285 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 28 00:49:12.713579 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 00:49:12.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:49:12.772123 kernel: audit: type=1130 audit(1777337352.728:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:49:13.779310 ignition[1079]: INFO : Ignition 2.24.0 Apr 28 00:49:13.779310 ignition[1079]: INFO : Stage: mount Apr 28 00:49:13.784002 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:49:13.784002 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:49:13.916093 ignition[1079]: INFO : mount: mount passed Apr 28 00:49:13.934055 ignition[1079]: INFO : Ignition finished successfully Apr 28 00:49:14.054060 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 00:49:14.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:49:14.119095 kernel: audit: type=1130 audit(1777337354.095:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:49:14.460546 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 00:49:16.027090 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:49:16.536377 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1092) Apr 28 00:49:16.574906 kernel: BTRFS info (device vda6): first mount of filesystem 91af0ae0-8636-4662-9335-0ea2677cb45d Apr 28 00:49:16.575066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:49:16.656227 kernel: BTRFS info (device vda6): turning on async discard Apr 28 00:49:16.657220 kernel: BTRFS info (device vda6): enabling free space tree Apr 28 00:49:16.977280 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 00:49:19.782267 ignition[1109]: INFO : Ignition 2.24.0 Apr 28 00:49:19.782267 ignition[1109]: INFO : Stage: files Apr 28 00:49:19.804503 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:49:19.804503 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:49:19.824562 ignition[1109]: DEBUG : files: compiled without relabeling support, skipping Apr 28 00:49:19.901150 ignition[1109]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 00:49:19.901150 ignition[1109]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 00:49:19.953114 ignition[1109]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 00:49:19.991261 ignition[1109]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 00:49:20.070680 unknown[1109]: wrote ssh authorized keys file for user: core Apr 28 00:49:20.107181 ignition[1109]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 00:49:20.107181 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:49:20.107181 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 00:49:21.519454 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 28 00:49:26.008370 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:49:26.094541 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 28 00:49:26.094541 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 28 00:49:26.094541 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:49:26.129151 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:49:26.129151 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:49:26.153731 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:49:26.153731 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:49:26.198575 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:49:26.198575 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:49:26.219563 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:49:26.219563 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:49:26.219563 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:49:26.219563 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:49:26.333897 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 28 00:49:27.687273 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 28 00:50:00.512097 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 28 00:50:00.512097 ignition[1109]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 28 00:50:00.568545 ignition[1109]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:50:00.583362 ignition[1109]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:50:00.601979 ignition[1109]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 28 00:50:00.601979 ignition[1109]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 28 00:50:00.601979 ignition[1109]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:50:00.601979 ignition[1109]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:50:00.601979 ignition[1109]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 28 00:50:00.601979 ignition[1109]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 00:50:02.791260 ignition[1109]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:50:03.524815 ignition[1109]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:50:03.532345 ignition[1109]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 28 00:50:03.532345 ignition[1109]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 28 00:50:03.532345 ignition[1109]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 00:50:03.556931 ignition[1109]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:50:03.556931 ignition[1109]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:50:03.556931 ignition[1109]: INFO : files: files passed Apr 28 00:50:03.671932 ignition[1109]: INFO : Ignition finished successfully Apr 28 00:50:03.881251 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 28 00:50:03.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:03.901786 kernel: audit: type=1130 audit(1777337403.888:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:04.102739 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 28 00:50:04.236858 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 28 00:50:04.372219 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 28 00:50:04.372440 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 28 00:50:04.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:50:04.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:04.405030 kernel: audit: type=1130 audit(1777337404.372:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:04.405086 kernel: audit: type=1131 audit(1777337404.375:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:04.810198 initrd-setup-root-after-ignition[1142]: grep: /sysroot/oem/oem-release: No such file or directory Apr 28 00:50:04.992700 initrd-setup-root-after-ignition[1144]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:50:04.992700 initrd-setup-root-after-ignition[1144]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:50:05.101729 initrd-setup-root-after-ignition[1148]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:50:05.699816 kernel: loop3: detected capacity change from 0 to 43472 Apr 28 00:50:05.765588 kernel: loop3: p1 p2 p3 Apr 28 00:50:06.288395 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:06.289310 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:50:06.289330 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:50:06.310979 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:50:06.311382 systemd-confext[1150]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument Apr 28 00:50:06.365726 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 
00:50:09.728057 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 28 00:50:10.327043 kernel: loop4: detected capacity change from 0 to 43472 Apr 28 00:50:10.424399 kernel: loop4: p1 p2 p3 Apr 28 00:50:10.495918 kernel: loop4: p1 p2 p3 Apr 28 00:50:11.150297 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:11.178480 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:50:11.178498 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:50:11.182731 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:50:11.183123 (sd-merge)[1163]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Apr 28 00:50:11.217829 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:13.495286 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 28 00:50:13.717391 (sd-merge)[1163]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. 
Apr 28 00:50:13.875220 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Apr 28 00:50:14.006978 kernel: loop4: detected capacity change from 0 to 178200 Apr 28 00:50:14.038182 kernel: loop4: p1 p2 p3 Apr 28 00:50:15.236784 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:15.236989 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:50:15.238817 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:50:15.239713 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:50:15.261299 systemd-sysext[1171]: device-mapper: reload ioctl on b14ca717c93af6dcf45970900eba2c84b1df1635b4cfb0353a4efa1194de37b1-verity (253:2) failed: Invalid argument Apr 28 00:50:15.357060 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:19.436204 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 28 00:50:20.809576 kernel: loop5: detected capacity change from 0 to 219192 Apr 28 00:50:21.590225 kernel: loop6: detected capacity change from 0 to 378016 Apr 28 00:50:21.667602 kernel: loop6: p1 p2 p3 Apr 28 00:50:22.279118 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:22.279461 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:50:22.295380 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:50:22.295667 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:50:22.296264 systemd-sysext[1171]: device-mapper: reload ioctl on 7872a58ca41eede16f5f9c4d58208200d7d53a6d6326a9fbd8291496d1250167-verity (253:2) failed: Invalid argument Apr 28 00:50:22.375367 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:26.232839 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 28 00:50:27.086056 kernel: loop7: detected capacity change from 0 to 178200 Apr 28 00:50:27.118394 kernel: loop7: p1 p2 p3 Apr 28 00:50:28.223100 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:28.223389 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:50:28.227319 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:50:28.227595 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:50:28.229393 (sd-merge)[1188]: device-mapper: reload ioctl on b14ca717c93af6dcf45970900eba2c84b1df1635b4cfb0353a4efa1194de37b1-verity (253:2) failed: Invalid argument Apr 28 00:50:28.307174 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:30.045316 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 28 00:50:30.466160 kernel: loop1: detected capacity change from 0 to 219192 Apr 28 00:50:30.861682 kernel: loop3: detected capacity change from 0 to 378016 Apr 28 00:50:30.879710 kernel: loop3: p1 p2 p3 Apr 28 00:50:31.607988 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:31.608285 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:50:31.608304 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:50:31.610400 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:50:31.613118 (sd-merge)[1188]: device-mapper: reload ioctl on 7872a58ca41eede16f5f9c4d58208200d7d53a6d6326a9fbd8291496d1250167-verity (253:3) failed: Invalid argument Apr 28 00:50:31.650127 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:50:34.489253 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Apr 28 00:50:34.684353 (sd-merge)[1188]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.34.4-x86-64.raw'. 
Apr 28 00:50:35.286407 (sd-merge)[1188]: Merged extensions into '/sysroot/usr'. Apr 28 00:50:35.695196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:50:35.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:35.727612 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 28 00:50:35.763616 kernel: audit: type=1130 audit(1777337435.715:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:35.999093 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 28 00:50:38.267817 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 28 00:50:38.399261 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 28 00:50:38.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:38.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:38.451263 kernel: audit: type=1130 audit(1777337438.406:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:38.408594 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies. 
Apr 28 00:50:38.499112 kernel: audit: type=1131 audit(1777337438.407:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:38.436675 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 00:50:38.466698 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 00:50:38.564187 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 00:50:38.912926 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 00:50:44.616019 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:50:44.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:44.741082 kernel: audit: type=1130 audit(1777337444.710:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:45.212124 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 00:50:50.006698 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 28 00:50:50.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:50.023128 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 28 00:50:50.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:50:50.085597 kernel: audit: type=1130 audit(1777337450.027:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:50.085733 kernel: audit: type=1131 audit(1777337450.027:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:50.362326 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:50:50.406119 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:50:50.538723 systemd[1]: Stopped target timers.target - Timer Units. Apr 28 00:50:50.658592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 28 00:50:50.663851 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:50:50.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:50.776377 systemd[1]: dracut-pre-pivot.service: Consumed 1.877s CPU time. Apr 28 00:50:50.789335 kernel: audit: type=1131 audit(1777337450.739:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:50.806037 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 00:50:50.896065 systemd[1]: Stopped target basic.target - Basic System. Apr 28 00:50:50.952996 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 00:50:51.059907 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:50:51.116789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 00:50:51.174951 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 28 00:50:51.217328 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 28 00:50:51.299213 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:50:51.334263 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 28 00:50:51.361169 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 28 00:50:51.370919 systemd[1]: Stopped target swap.target - Swaps. Apr 28 00:50:51.411864 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 28 00:50:51.419190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:50:51.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:51.491154 kernel: audit: type=1131 audit(1777337451.470:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:51.491183 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:50:51.512237 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:50:51.611062 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 28 00:50:51.730206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:50:51.773066 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 28 00:50:51.773626 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 28 00:50:51.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:50:51.802678 kernel: audit: type=1131 audit(1777337451.791:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.034209 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 28 00:50:52.040232 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:50:52.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.076325 systemd[1]: ignition-fetch-offline.service: Consumed 2.533s CPU time. Apr 28 00:50:52.085408 systemd[1]: Stopped target paths.target - Path Units. Apr 28 00:50:52.114858 kernel: audit: type=1131 audit(1777337452.071:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.160149 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 28 00:50:52.207608 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:50:52.250013 systemd[1]: Stopped target slices.target - Slice Units. Apr 28 00:50:52.270557 systemd[1]: Stopped target sockets.target - Socket Units. Apr 28 00:50:52.278018 systemd[1]: iscsid.socket: Deactivated successfully. Apr 28 00:50:52.278595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:50:52.279651 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 28 00:50:52.279709 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:50:52.296507 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Apr 28 00:50:52.296771 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 28 00:50:52.316807 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 28 00:50:52.360291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:50:52.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.490825 kernel: audit: type=1131 audit(1777337452.479:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.498266 systemd[1]: initrd-setup-root-after-ignition.service: Consumed 4.323s CPU time. Apr 28 00:50:52.516130 systemd[1]: ignition-files.service: Deactivated successfully. Apr 28 00:50:52.587439 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 28 00:50:52.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.671354 kernel: audit: type=1131 audit(1777337452.658:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:52.671475 systemd[1]: ignition-files.service: Consumed 30.095s CPU time. Apr 28 00:50:52.838964 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 28 00:50:53.061361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 28 00:50:53.079925 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 28 00:50:53.080324 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:50:53.099628 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Apr 28 00:50:53.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:53.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:53.138656 kernel: audit: type=1131 audit(1777337453.099:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:53.099807 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:50:53.197405 kernel: audit: type=1131 audit(1777337453.105:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:53.110297 systemd[1]: systemd-udev-trigger.service: Consumed 4.123s CPU time. Apr 28 00:50:53.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:53.133833 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 28 00:50:53.190247 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 28 00:50:53.674638 ignition[1218]: INFO : Ignition 2.24.0 Apr 28 00:50:53.674638 ignition[1218]: INFO : Stage: umount Apr 28 00:50:53.674638 ignition[1218]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:50:53.674638 ignition[1218]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:50:53.697232 ignition[1218]: INFO : umount: umount passed Apr 28 00:50:53.697232 ignition[1218]: INFO : Ignition finished successfully Apr 28 00:50:53.697700 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 28 00:50:53.708395 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 28 00:50:53.731847 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 28 00:50:53.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:53.874029 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 28 00:50:53.880256 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 28 00:50:53.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:54.227798 systemd[1]: Stopped target network.target - Network. Apr 28 00:50:54.299569 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 28 00:50:54.325622 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 28 00:50:54.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:54.419874 systemd[1]: ignition-disks.service: Consumed 1.455s CPU time. Apr 28 00:50:54.460792 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Apr 28 00:50:54.469211 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 28 00:50:54.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:54.475212 systemd[1]: ignition-kargs.service: Consumed 1.894s CPU time. Apr 28 00:50:54.476058 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 28 00:50:54.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:54.479524 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 28 00:50:54.521519 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 28 00:50:54.528588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 28 00:50:54.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:54.620989 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 28 00:50:54.632687 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 28 00:50:54.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:54.735816 systemd[1]: initrd-setup-root.service: Consumed 7.744s CPU time. Apr 28 00:50:54.753363 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 28 00:50:54.783160 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 28 00:50:55.042394 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 28 00:50:55.198397 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 28 00:50:55.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:55.276173 kernel: kauditd_printk_skb: 8 callbacks suppressed Apr 28 00:50:55.276224 kernel: audit: type=1131 audit(1777337455.274:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:55.886511 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 28 00:50:56.129270 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 28 00:50:56.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:56.281132 kernel: audit: type=1131 audit(1777337456.211:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:56.660000 audit: BPF prog-id=8 op=UNLOAD Apr 28 00:50:56.665000 audit: BPF prog-id=5 op=UNLOAD Apr 28 00:50:56.666327 kernel: audit: type=1334 audit(1777337456.660:66): prog-id=8 op=UNLOAD Apr 28 00:50:56.666356 kernel: audit: type=1334 audit(1777337456.665:67): prog-id=5 op=UNLOAD Apr 28 00:50:56.688107 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 28 00:50:56.786038 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 28 00:50:56.823862 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:50:56.981535 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 28 00:50:56.988858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 28 00:50:56.989067 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:50:57.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.015259 systemd[1]: parse-ip-for-networkd.service: Consumed 1.219s CPU time. Apr 28 00:50:57.024032 kernel: audit: type=1131 audit(1777337457.010:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.017508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 00:50:57.017675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:50:57.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.038303 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 28 00:50:57.040886 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 28 00:50:57.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.093805 kernel: audit: type=1131 audit(1777337457.037:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:50:57.093896 kernel: audit: type=1131 audit(1777337457.081:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.100803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:50:57.682019 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 28 00:50:57.682941 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:50:57.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.773920 systemd[1]: systemd-udevd.service: Consumed 15.392s CPU time. Apr 28 00:50:57.780021 kernel: audit: type=1131 audit(1777337457.769:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.829307 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 28 00:50:57.900191 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 28 00:50:57.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.943876 kernel: audit: type=1131 audit(1777337457.934:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.965254 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 28 00:50:57.968165 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:50:57.969439 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 28 00:50:57.969533 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:50:57.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.020300 kernel: audit: type=1131 audit(1777337457.985:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.986384 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 28 00:50:58.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:57.986492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 28 00:50:58.034651 systemd[1]: dracut-cmdline.service: Consumed 5.186s CPU time. Apr 28 00:50:58.042361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:50:58.042703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:50:58.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.097057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 28 00:50:58.108134 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 28 00:50:58.119288 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. 
Apr 28 00:50:58.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.138639 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 28 00:50:58.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.138758 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:50:58.175699 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 28 00:50:58.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.175864 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:50:58.187202 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 28 00:50:58.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.187324 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:50:58.204089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:50:58.204287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:50:58.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 28 00:50:58.296271 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 28 00:50:58.309910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 28 00:50:58.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:50:58.335333 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 28 00:50:58.401920 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 28 00:50:58.567695 systemd[1]: Switching root. Apr 28 00:50:58.787876 systemd-journald[321]: Received SIGTERM from PID 1 (systemd). Apr 28 00:50:58.788203 systemd-journald[321]: Journal stopped Apr 28 00:51:03.363963 kernel: SELinux: policy capability network_peer_controls=1 Apr 28 00:51:03.364188 kernel: SELinux: policy capability open_perms=1 Apr 28 00:51:03.364207 kernel: SELinux: policy capability extended_socket_class=1 Apr 28 00:51:03.364222 kernel: SELinux: policy capability always_check_network=0 Apr 28 00:51:03.364237 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 28 00:51:03.364269 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 28 00:51:03.364284 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 28 00:51:03.364303 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 28 00:51:03.364322 kernel: SELinux: policy capability userspace_initial_context=0 Apr 28 00:51:03.364342 systemd[1]: Successfully loaded SELinux policy in 243.440ms. Apr 28 00:51:03.364370 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 181.118ms.
Apr 28 00:51:03.364386 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 28 00:51:03.364467 systemd[1]: Detected virtualization kvm. Apr 28 00:51:03.364484 systemd[1]: Detected architecture x86-64. Apr 28 00:51:03.364499 systemd[1]: Detected first boot. Apr 28 00:51:03.364518 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 28 00:51:03.364533 kernel: kauditd_printk_skb: 11 callbacks suppressed Apr 28 00:51:03.364549 kernel: audit: type=1334 audit(1777337460.567:85): prog-id=9 op=LOAD Apr 28 00:51:03.364565 kernel: audit: type=1334 audit(1777337460.567:86): prog-id=9 op=UNLOAD Apr 28 00:51:03.367146 zram_generator::config[1264]: No configuration found. Apr 28 00:51:03.367210 kernel: Guest personality initialized and is inactive Apr 28 00:51:03.367226 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 28 00:51:03.367244 kernel: Initialized host personality Apr 28 00:51:03.367258 kernel: NET: Registered PF_VSOCK protocol family Apr 28 00:51:03.367273 systemd-ssh-generator[1260]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 28 00:51:03.368855 (sd-exec-[1245]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 28 00:51:03.368914 systemd[1]: Applying preset policy. Apr 28 00:51:03.368933 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. Apr 28 00:51:03.368949 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'. 
Apr 28 00:51:03.368963 systemd[1]: Populated /etc with preset unit settings. Apr 28 00:51:03.368979 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 28 00:51:03.369915 kernel: audit: type=1334 audit(1777337462.280:87): prog-id=10 op=LOAD Apr 28 00:51:03.371069 kernel: audit: type=1334 audit(1777337462.280:88): prog-id=2 op=UNLOAD Apr 28 00:51:03.371092 kernel: audit: type=1334 audit(1777337462.280:89): prog-id=11 op=LOAD Apr 28 00:51:03.371107 kernel: audit: type=1334 audit(1777337462.280:90): prog-id=12 op=LOAD Apr 28 00:51:03.371122 kernel: audit: type=1334 audit(1777337462.280:91): prog-id=3 op=UNLOAD Apr 28 00:51:03.371136 kernel: audit: type=1334 audit(1777337462.280:92): prog-id=4 op=UNLOAD Apr 28 00:51:03.371150 kernel: audit: type=1334 audit(1777337462.282:93): prog-id=13 op=LOAD Apr 28 00:51:03.372175 kernel: audit: type=1334 audit(1777337462.283:94): prog-id=10 op=UNLOAD Apr 28 00:51:03.372302 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 28 00:51:03.372320 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 28 00:51:03.372335 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 28 00:51:03.372350 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 28 00:51:03.372364 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 28 00:51:03.372402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 28 00:51:03.372451 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 28 00:51:03.372468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 28 00:51:03.372483 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 28 00:51:03.372498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Apr 28 00:51:03.372513 systemd[1]: Created slice user.slice - User and Session Slice. Apr 28 00:51:03.372530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:51:03.372564 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:51:03.372585 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 28 00:51:03.372601 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 28 00:51:03.372614 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 28 00:51:03.372631 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:51:03.372646 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 28 00:51:03.372662 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:51:03.373834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:51:03.373979 systemd[1]: Reached target imports.target - Image Downloads. Apr 28 00:51:03.373995 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 28 00:51:03.374929 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 28 00:51:03.375986 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 28 00:51:03.376180 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 00:51:03.376197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:51:03.376236 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:51:03.376253 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. 
Apr 28 00:51:03.376269 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 28 00:51:03.376286 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:51:03.376301 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:51:03.376315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 00:51:03.376330 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 28 00:51:03.376363 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 00:51:03.376380 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 28 00:51:03.376396 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Apr 28 00:51:03.376445 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 28 00:51:03.376463 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 28 00:51:03.376480 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Apr 28 00:51:03.376497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:51:03.378375 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 28 00:51:03.379666 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 28 00:51:03.379723 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Apr 28 00:51:03.379763 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Apr 28 00:51:03.379779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:51:03.379793 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. Apr 28 00:51:03.379808 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Apr 28 00:51:03.379841 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 00:51:03.379856 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 00:51:03.379871 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 00:51:03.379886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:51:03.379901 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 00:51:03.379916 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 00:51:03.379933 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Apr 28 00:51:03.379991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 28 00:51:03.380028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 00:51:03.380044 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Apr 28 00:51:03.380060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 00:51:03.380092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:51:03.380108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:51:03.380122 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 00:51:03.380136 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). Apr 28 00:51:03.380152 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 28 00:51:03.380184 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore). Apr 28 00:51:03.380198 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 00:51:03.380212 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Apr 28 00:51:03.380227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 00:51:03.380241 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 28 00:51:03.380271 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 28 00:51:03.380287 systemd[1]: systemd-fsck-root.service: Consumed 1.054s CPU time. Apr 28 00:51:03.380302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 28 00:51:03.380316 systemd[1]: Stopped systemd-fsck-usr.service. Apr 28 00:51:03.380332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 28 00:51:03.380347 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:51:03.380362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:51:03.380396 kernel: ACPI: bus type drm_connector registered Apr 28 00:51:03.380443 kernel: fuse: init (API version 7.41) Apr 28 00:51:03.380459 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 28 00:51:03.380475 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Apr 28 00:51:03.380491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:51:03.380523 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 28 00:51:03.380536 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:51:03.380552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:51:03.380606 systemd-journald[1334]: Collecting audit messages is enabled. Apr 28 00:51:03.380658 systemd-journald[1334]: Journal started Apr 28 00:51:03.380685 systemd-journald[1334]: Runtime Journal (/run/log/journal/c7857b6b37174cc8bc34ad7b260d3221) is 6M, max 48M, 42M free. Apr 28 00:51:03.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:51:03.162000 audit: BPF prog-id=15 op=UNLOAD Apr 28 00:51:03.162000 audit: BPF prog-id=14 op=UNLOAD Apr 28 00:51:03.163000 audit: BPF prog-id=16 op=LOAD Apr 28 00:51:03.169000 audit: BPF prog-id=17 op=LOAD Apr 28 00:51:03.170000 audit: BPF prog-id=18 op=LOAD Apr 28 00:51:03.359000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 28 00:51:03.359000 audit[1334]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcb91672c0 a2=4000 a3=0 items=0 ppid=1 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:51:03.359000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 28 00:51:02.263473 systemd[1]: Queued start job for default target multi-user.target. Apr 28 00:51:02.286827 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 28 00:51:02.287468 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 28 00:51:02.287932 systemd[1]: systemd-journald.service: Consumed 3.902s CPU time. Apr 28 00:51:03.383795 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:51:03.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.434961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 00:51:03.440985 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 00:51:03.505871 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 00:51:03.512510 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 28 00:51:03.533850 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 28 00:51:03.559552 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 00:51:03.575184 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 00:51:03.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.598573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:51:03.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.621097 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 00:51:03.621629 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 00:51:03.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.629735 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:51:03.631548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:51:03.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:51:03.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.636097 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 00:51:03.636338 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 28 00:51:03.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.640357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:51:03.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.687927 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 28 00:51:03.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.704524 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 28 00:51:03.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:51:03.711722 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 28 00:51:03.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.741799 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 00:51:03.746531 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 28 00:51:03.750522 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 00:51:03.755706 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 00:51:03.757778 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 00:51:03.757810 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:51:03.762467 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 28 00:51:03.766908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:51:03.777812 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Apr 28 00:51:03.783682 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 00:51:03.792656 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 00:51:03.798500 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:51:03.807017 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 28 00:51:03.813707 systemd-journald[1334]: Time spent on flushing to /var/log/journal/c7857b6b37174cc8bc34ad7b260d3221 is 129.070ms for 1341 entries. Apr 28 00:51:03.813707 systemd-journald[1334]: System Journal (/var/log/journal/c7857b6b37174cc8bc34ad7b260d3221) is 8M, max 163.5M, 155.5M free. Apr 28 00:51:03.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.811663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:51:03.966786 systemd-journald[1334]: Received client request to flush runtime journal. Apr 28 00:51:03.821802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:51:03.832059 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 28 00:51:03.838154 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:51:03.906684 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 00:51:03.910460 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 00:51:03.924540 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 28 00:51:03.939884 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Apr 28 00:51:03.939898 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Apr 28 00:51:03.947106 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Apr 28 00:51:03.952348 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 28 00:51:03.969884 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:51:03.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.974704 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 28 00:51:03.978460 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 28 00:51:03.984475 kernel: loop4: detected capacity change from 0 to 43472 Apr 28 00:51:03.986481 kernel: loop4: p1 p2 p3 Apr 28 00:51:03.986957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:51:03.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:03.996463 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 00:51:04.034760 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Apr 28 00:51:04.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.076789 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:04.084358 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:51:04.110893 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:51:04.110788 systemd-confext[1384]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 28 00:51:04.112498 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:51:04.119447 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:04.172677 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 00:51:04.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.177000 audit: BPF prog-id=19 op=LOAD Apr 28 00:51:04.177000 audit: BPF prog-id=20 op=LOAD Apr 28 00:51:04.177000 audit: BPF prog-id=21 op=LOAD Apr 28 00:51:04.179208 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 28 00:51:04.182000 audit: BPF prog-id=22 op=LOAD Apr 28 00:51:04.183515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:51:04.189000 audit: BPF prog-id=23 op=LOAD Apr 28 00:51:04.190539 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 00:51:04.197756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:51:04.206470 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... 
Apr 28 00:51:04.215000 audit: BPF prog-id=24 op=LOAD Apr 28 00:51:04.218000 audit: BPF prog-id=25 op=LOAD Apr 28 00:51:04.218000 audit: BPF prog-id=26 op=LOAD Apr 28 00:51:04.220804 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 00:51:04.231788 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 28 00:51:04.233354 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 28 00:51:04.233266 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Apr 28 00:51:04.234060 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Apr 28 00:51:04.234454 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Apr 28 00:51:04.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.237000 audit: BPF prog-id=27 op=LOAD Apr 28 00:51:04.237000 audit: BPF prog-id=28 op=LOAD Apr 28 00:51:04.237000 audit: BPF prog-id=29 op=LOAD Apr 28 00:51:04.238628 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 28 00:51:04.243622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:51:04.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.277837 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 28 00:51:04.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.316392 systemd-nsresourced[1414]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 28 00:51:04.320114 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 28 00:51:04.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.373911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 28 00:51:04.463603 systemd-oomd[1406]: No swap; memory pressure usage will be degraded Apr 28 00:51:04.466806 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 28 00:51:04.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.498485 systemd-resolved[1407]: Positive Trust Anchors: Apr 28 00:51:04.499199 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:51:04.499205 systemd-resolved[1407]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 28 00:51:04.499301 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:51:04.505485 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 00:51:04.505557 systemd-resolved[1407]: Defaulting to hostname 'linux'. Apr 28 00:51:04.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.508344 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:51:04.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:04.514746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:51:04.518918 systemd[1]: Reached target time-set.target - System Time Set. Apr 28 00:51:06.907016 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 00:51:06.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:51:06.910757 kernel: kauditd_printk_skb: 59 callbacks suppressed Apr 28 00:51:06.910858 kernel: audit: type=1130 audit(1777337466.909:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:06.910000 audit: BPF prog-id=7 op=UNLOAD Apr 28 00:51:06.910000 audit: BPF prog-id=6 op=UNLOAD Apr 28 00:51:06.930861 kernel: audit: type=1334 audit(1777337466.910:153): prog-id=7 op=UNLOAD Apr 28 00:51:06.931553 kernel: audit: type=1334 audit(1777337466.910:154): prog-id=6 op=UNLOAD Apr 28 00:51:06.932000 audit: BPF prog-id=30 op=LOAD Apr 28 00:51:06.932000 audit: BPF prog-id=31 op=LOAD Apr 28 00:51:06.935367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:51:06.937074 kernel: audit: type=1334 audit(1777337466.932:155): prog-id=30 op=LOAD Apr 28 00:51:06.937744 kernel: audit: type=1334 audit(1777337466.932:156): prog-id=31 op=LOAD Apr 28 00:51:07.149445 systemd-udevd[1436]: Using default interface naming scheme 'v258'. Apr 28 00:51:07.433644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:51:07.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:07.439000 audit: BPF prog-id=32 op=LOAD Apr 28 00:51:07.441360 kernel: audit: type=1130 audit(1777337467.435:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:51:07.441405 kernel: audit: type=1334 audit(1777337467.439:158): prog-id=32 op=LOAD Apr 28 00:51:07.441391 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:51:07.575960 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 28 00:51:07.609771 systemd-networkd[1438]: lo: Link UP Apr 28 00:51:07.611013 systemd-networkd[1438]: lo: Gained carrier Apr 28 00:51:07.614667 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:51:07.616265 systemd-networkd[1438]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 28 00:51:07.616292 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:51:07.616793 systemd-networkd[1438]: eth0: Link UP Apr 28 00:51:07.616953 systemd-networkd[1438]: eth0: Gained carrier Apr 28 00:51:07.616981 systemd-networkd[1438]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 28 00:51:07.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:07.621445 kernel: audit: type=1130 audit(1777337467.616:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:07.641323 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:51:07.643159 systemd[1]: Reached target network.target - Network. Apr 28 00:51:07.643910 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. 
Apr 28 00:51:08.320386 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 28 00:51:08.320452 systemd-resolved[1407]: Clock change detected. Flushing caches. Apr 28 00:51:08.320740 systemd-timesyncd[1408]: Initial clock synchronization to Tue 2026-04-28 00:51:08.320302 UTC. Apr 28 00:51:08.322544 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 28 00:51:08.327878 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 00:51:08.372282 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 28 00:51:08.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:08.386406 kernel: audit: type=1130 audit(1777337468.376:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:08.397251 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 00:51:08.423556 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 28 00:51:08.426580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:51:08.433352 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 28 00:51:08.444386 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 28 00:51:08.445138 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 28 00:51:08.446534 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 28 00:51:08.515635 kernel: ACPI: button: Power Button [PWRF] Apr 28 00:51:09.427959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 00:51:09.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:09.439544 kernel: audit: type=1130 audit(1777337469.431:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:09.613307 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 28 00:51:09.642182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:51:09.673720 systemd-networkd[1438]: eth0: Gained IPv6LL Apr 28 00:51:09.870312 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 00:51:09.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:09.955485 systemd[1]: Reached target network-online.target - Network is Online. 
Apr 28 00:51:11.293467 kernel: loop4: detected capacity change from 0 to 43472 Apr 28 00:51:11.594987 kernel: loop4: p1 p2 p3 Apr 28 00:51:11.978348 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:11.986316 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:51:11.997614 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:51:12.005772 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:51:12.007791 (sd-merge)[1500]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 28 00:51:12.040534 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:12.320948 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:51:12.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:12.349386 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 28 00:51:12.350014 (sd-merge)[1500]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 28 00:51:12.385096 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 28 00:51:12.385816 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Apr 28 00:51:12.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:12.508963 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 28 00:51:12.854318 kernel: loop4: detected capacity change from 0 to 178200 Apr 28 00:51:12.860481 kernel: loop4: p1 p2 p3 Apr 28 00:51:13.116889 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:13.173651 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:51:13.173455 systemd-sysext[1512]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 28 00:51:13.186825 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:51:13.186883 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:51:13.215682 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:13.558648 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 28 00:51:13.725295 kernel: loop4: detected capacity change from 0 to 378016 Apr 28 00:51:13.731249 kernel: loop4: p1 p2 p3 Apr 28 00:51:14.016763 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:14.017849 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:51:14.039457 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:51:14.039931 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:51:14.044740 systemd-sysext[1512]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 28 00:51:14.064143 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:14.765461 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 28 00:51:15.179672 kernel: loop4: detected capacity change from 0 to 219192 Apr 28 00:51:15.470625 kernel: loop4: detected capacity change from 0 to 178200 Apr 28 00:51:15.485978 kernel: loop4: p1 p2 p3 Apr 28 00:51:15.724329 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:15.724892 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:51:15.728136 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:51:15.729553 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:51:15.729704 (sd-merge)[1532]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 28 00:51:15.741376 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:15.873002 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 28 00:51:15.882283 kernel: loop5: detected capacity change from 0 to 378016 Apr 28 00:51:15.893294 kernel: loop5: p1 p2 p3 Apr 28 00:51:16.039290 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:16.039382 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 28 00:51:16.039429 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Apr 28 00:51:16.043555 kernel: device-mapper: ioctl: error adding target to table Apr 28 00:51:16.043613 (sd-merge)[1532]: device-mapper: reload ioctl on loop5p1-verity (253:5) failed: Invalid argument Apr 28 00:51:16.052318 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 28 00:51:16.149328 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Apr 28 00:51:16.163066 kernel: loop6: detected capacity change from 0 to 219192 Apr 28 00:51:16.234816 (sd-merge)[1532]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. 
Apr 28 00:51:16.245405 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 28 00:51:16.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:16.253833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:51:16.254894 kernel: kauditd_printk_skb: 3 callbacks suppressed Apr 28 00:51:16.254924 kernel: audit: type=1130 audit(1777337476.247:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:16.260682 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 28 00:51:16.260970 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 28 00:51:16.364188 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 28 00:51:16.364770 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 28 00:51:16.365298 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 00:51:16.367325 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Apr 28 00:51:16.367435 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Apr 28 00:51:16.617804 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:51:16.617838 systemd-tmpfiles[1549]: Skipping /boot Apr 28 00:51:17.150459 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 28 00:51:17.150483 systemd-tmpfiles[1549]: Skipping /boot Apr 28 00:51:17.432136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:51:17.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:17.447377 kernel: audit: type=1130 audit(1777337477.440:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:17.689983 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 28 00:51:17.704074 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 28 00:51:17.710962 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 00:51:17.767853 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 28 00:51:17.795971 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 28 00:51:17.936000 audit[1560]: AUDIT1127 pid=1560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 28 00:51:17.946730 kernel: audit: type=1127 audit(1777337477.936:167): pid=1560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 28 00:51:17.965799 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 28 00:51:17.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:17.986916 kernel: audit: type=1130 audit(1777337477.972:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:18.052535 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 00:51:18.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 28 00:51:18.062276 kernel: audit: type=1130 audit(1777337478.056:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 28 00:51:18.154000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 28 00:51:18.162821 kernel: audit: type=1305 audit(1777337478.154:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 28 00:51:18.154000 audit[1581]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3fcf8400 a2=420 a3=0 items=0 ppid=1555 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:51:18.164817 kernel: audit: type=1300 audit(1777337478.154:170): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3fcf8400 a2=420 a3=0 items=0 ppid=1555 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 28 00:51:18.166970 augenrules[1581]: No rules Apr 28 00:51:18.154000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 28 00:51:18.172010 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 00:51:18.172767 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 28 00:51:18.173851 kernel: audit: type=1327 audit(1777337478.154:170): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 28 00:51:18.335670 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 00:51:18.343070 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 28 00:51:22.418756 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 28 00:51:22.435588 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 28 00:51:22.499865 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 00:51:22.844582 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 00:51:22.864535 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:51:22.872864 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 28 00:51:22.891060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 28 00:51:22.905790 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 28 00:51:22.917208 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 00:51:22.955120 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 00:51:22.960724 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 28 00:51:22.967732 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 28 00:51:22.983036 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 00:51:23.012884 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 00:51:23.041613 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:51:23.051710 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:51:23.091753 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 00:51:23.113742 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Apr 28 00:51:23.187436 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 28 00:51:23.273606 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 00:51:23.343390 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 28 00:51:23.357482 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket. Apr 28 00:51:23.365174 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket. Apr 28 00:51:23.374447 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 00:51:23.381860 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:51:23.410580 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:51:23.414349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:51:23.414386 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:51:23.442276 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 00:51:23.454866 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 00:51:23.484049 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 28 00:51:23.495560 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 28 00:51:23.576449 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 00:51:23.580994 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 00:51:23.582806 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Apr 28 00:51:23.593810 jq[1597]: false Apr 28 00:51:23.593626 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 28 00:51:23.609360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:51:23.629658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 28 00:51:23.636364 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 00:51:23.646530 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 28 00:51:23.651336 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing passwd entry cache Apr 28 00:51:23.651338 oslogin_cache_refresh[1599]: Refreshing passwd entry cache Apr 28 00:51:23.652156 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 00:51:23.659156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 28 00:51:23.667264 extend-filesystems[1598]: Found /dev/vda6 Apr 28 00:51:23.674536 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting users, quitting Apr 28 00:51:23.674536 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 28 00:51:23.674536 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing group entry cache Apr 28 00:51:23.672810 oslogin_cache_refresh[1599]: Failure getting users, quitting Apr 28 00:51:23.673054 oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 28 00:51:23.673296 oslogin_cache_refresh[1599]: Refreshing group entry cache Apr 28 00:51:23.688054 extend-filesystems[1598]: Found /dev/vda9 Apr 28 00:51:23.693796 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting groups, quitting Apr 28 00:51:23.693796 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Apr 28 00:51:23.693701 oslogin_cache_refresh[1599]: Failure getting groups, quitting Apr 28 00:51:23.693779 oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 28 00:51:23.694740 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 28 00:51:23.702842 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 28 00:51:23.707546 extend-filesystems[1598]: Checking size of /dev/vda9 Apr 28 00:51:23.780485 systemd[1]: Starting update-engine.service - Update Engine... Apr 28 00:51:23.790237 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 28 00:51:23.796176 extend-filesystems[1598]: Resized partition /dev/vda9 Apr 28 00:51:23.806790 extend-filesystems[1626]: resize2fs 1.47.3 (8-Jul-2025) Apr 28 00:51:23.807849 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 28 00:51:23.817275 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 28 00:51:23.823006 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 28 00:51:23.823719 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 28 00:51:23.824850 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 28 00:51:23.837426 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 28 00:51:23.837749 jq[1622]: true Apr 28 00:51:23.832318 systemd[1]: motdgen.service: Deactivated successfully. Apr 28 00:51:23.834286 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 28 00:51:23.847988 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 28 00:51:23.850868 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 28 00:51:24.006647 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 28 00:51:24.006793 update_engine[1618]: I20260428 00:51:23.974247 1618 main.cc:92] Flatcar Update Engine starting Apr 28 00:51:24.007166 extend-filesystems[1626]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 00:51:24.007166 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 28 00:51:24.007166 extend-filesystems[1626]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 28 00:51:24.014484 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 00:51:24.072662 extend-filesystems[1598]: Resized filesystem in /dev/vda9 Apr 28 00:51:24.090471 jq[1637]: true Apr 28 00:51:24.016671 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 00:51:24.090834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 00:51:24.365619 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 00:51:24.368345 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 00:51:24.381469 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 00:51:24.481857 tar[1635]: linux-amd64/LICENSE Apr 28 00:51:24.483184 tar[1635]: linux-amd64/helm Apr 28 00:51:24.503881 dbus-daemon[1595]: [system] SELinux support is enabled Apr 28 00:51:24.504851 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 28 00:51:24.546345 sshd_keygen[1632]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 00:51:24.551349 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 28 00:51:24.551422 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 28 00:51:24.555356 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 28 00:51:24.556334 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 28 00:51:24.558947 systemd[1]: Started update-engine.service - Update Engine. Apr 28 00:51:24.559514 update_engine[1618]: I20260428 00:51:24.559258 1618 update_check_scheduler.cc:74] Next update check in 5m20s Apr 28 00:51:24.591777 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 28 00:51:24.606759 bash[1687]: Updated "/home/core/.ssh/authorized_keys" Apr 28 00:51:24.613731 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 00:51:24.618199 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 28 00:51:24.944535 systemd-logind[1614]: Watching system buttons on /dev/input/event2 (Power Button) Apr 28 00:51:24.949409 systemd-logind[1614]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 28 00:51:24.951984 systemd-logind[1614]: New seat seat0. Apr 28 00:51:24.975531 systemd[1]: Started systemd-logind.service - User Login Management. Apr 28 00:51:24.992369 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 00:51:25.035847 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 00:51:25.161291 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 00:51:25.161625 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 00:51:25.215494 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 00:51:25.701610 locksmithd[1693]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 00:51:25.713505 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 28 00:51:26.226730 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 00:51:26.581814 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 00:51:26.595626 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 00:51:27.894996 containerd[1638]: time="2026-04-28T00:51:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 28 00:51:27.913505 containerd[1638]: time="2026-04-28T00:51:27.913322727Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1 Apr 28 00:51:27.917841 tar[1635]: linux-amd64/README.md Apr 28 00:51:28.058255 containerd[1638]: time="2026-04-28T00:51:28.058040812Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="211.444µs" Apr 28 00:51:28.059577 containerd[1638]: time="2026-04-28T00:51:28.059297563Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 28 00:51:28.059639 containerd[1638]: time="2026-04-28T00:51:28.059597925Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 28 00:51:28.059639 containerd[1638]: time="2026-04-28T00:51:28.059619728Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 28 00:51:28.059933 containerd[1638]: time="2026-04-28T00:51:28.059886927Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 28 00:51:28.059987 containerd[1638]: time="2026-04-28T00:51:28.059933933Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1 Apr 28 00:51:28.059987 containerd[1638]: time="2026-04-28T00:51:28.059977288Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 28 00:51:28.060156 containerd[1638]: time="2026-04-28T00:51:28.060118065Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 28 00:51:28.060156 containerd[1638]: time="2026-04-28T00:51:28.060146431Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 28 00:51:28.060808 containerd[1638]: time="2026-04-28T00:51:28.060762400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 28 00:51:28.060808 containerd[1638]: time="2026-04-28T00:51:28.060799065Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 28 00:51:28.060847 containerd[1638]: time="2026-04-28T00:51:28.060812870Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 28 00:51:28.060874 containerd[1638]: time="2026-04-28T00:51:28.060844405Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Apr 28 00:51:28.061297 containerd[1638]: time="2026-04-28T00:51:28.061260802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 28 00:51:28.061412 containerd[1638]: time="2026-04-28T00:51:28.061381893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 28 00:51:28.061751 containerd[1638]: time="2026-04-28T00:51:28.061709108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 
Apr 28 00:51:28.061850 containerd[1638]: time="2026-04-28T00:51:28.061818653Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 28 00:51:28.061850 containerd[1638]: time="2026-04-28T00:51:28.061842927Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 28 00:51:28.061991 containerd[1638]: time="2026-04-28T00:51:28.061959087Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 28 00:51:28.069738 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 00:51:28.082400 containerd[1638]: time="2026-04-28T00:51:28.079674284Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 28 00:51:28.085062 containerd[1638]: time="2026-04-28T00:51:28.084548156Z" level=info msg="metadata content store policy set" policy=shared Apr 28 00:51:28.143687 containerd[1638]: time="2026-04-28T00:51:28.140826540Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 28 00:51:28.146895 containerd[1638]: time="2026-04-28T00:51:28.146454417Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 28 00:51:28.147135 containerd[1638]: time="2026-04-28T00:51:28.147095409Z" level=info msg="built-in NRI default validator is disabled" Apr 28 00:51:28.147135 containerd[1638]: time="2026-04-28T00:51:28.147124330Z" level=info msg="runtime interface created" Apr 28 00:51:28.147135 containerd[1638]: time="2026-04-28T00:51:28.147130399Z" level=info msg="created NRI interface" Apr 28 00:51:28.147184 containerd[1638]: time="2026-04-28T00:51:28.147163903Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 28 00:51:28.147435 containerd[1638]: 
time="2026-04-28T00:51:28.147385901Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Apr 28 00:51:28.147435 containerd[1638]: time="2026-04-28T00:51:28.147415393Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 28 00:51:28.147490 containerd[1638]: time="2026-04-28T00:51:28.147474525Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 28 00:51:28.147515 containerd[1638]: time="2026-04-28T00:51:28.147497033Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1 Apr 28 00:51:28.148065 containerd[1638]: time="2026-04-28T00:51:28.148024231Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 28 00:51:28.148094 containerd[1638]: time="2026-04-28T00:51:28.148085678Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 28 00:51:28.148108 containerd[1638]: time="2026-04-28T00:51:28.148098796Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 28 00:51:28.148122 containerd[1638]: time="2026-04-28T00:51:28.148114729Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 28 00:51:28.148143 containerd[1638]: time="2026-04-28T00:51:28.148128851Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 28 00:51:28.148193 containerd[1638]: time="2026-04-28T00:51:28.148140376Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 28 00:51:28.148207 containerd[1638]: 
time="2026-04-28T00:51:28.148198634Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 28 00:51:28.148261 containerd[1638]: time="2026-04-28T00:51:28.148210052Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 28 00:51:28.148303 containerd[1638]: time="2026-04-28T00:51:28.148273813Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 28 00:51:28.148612 containerd[1638]: time="2026-04-28T00:51:28.148576483Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 28 00:51:28.148668 containerd[1638]: time="2026-04-28T00:51:28.148638493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 28 00:51:28.148710 containerd[1638]: time="2026-04-28T00:51:28.148689556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 28 00:51:28.148736 containerd[1638]: time="2026-04-28T00:51:28.148710835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 28 00:51:28.148736 containerd[1638]: time="2026-04-28T00:51:28.148723839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 28 00:51:28.148736 containerd[1638]: time="2026-04-28T00:51:28.148733723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 28 00:51:28.148831 containerd[1638]: time="2026-04-28T00:51:28.148745288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 28 00:51:28.148831 containerd[1638]: time="2026-04-28T00:51:28.148755704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 28 00:51:28.148831 containerd[1638]: time="2026-04-28T00:51:28.148768515Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1 Apr 28 00:51:28.148831 containerd[1638]: time="2026-04-28T00:51:28.148779022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 28 00:51:28.148831 containerd[1638]: time="2026-04-28T00:51:28.148791475Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 28 00:51:28.148831 containerd[1638]: time="2026-04-28T00:51:28.148802529Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 28 00:51:28.149373 containerd[1638]: time="2026-04-28T00:51:28.149322379Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 28 00:51:28.149574 containerd[1638]: time="2026-04-28T00:51:28.149539548Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 28 00:51:28.149574 containerd[1638]: time="2026-04-28T00:51:28.149571785Z" level=info msg="Start snapshots syncer" Apr 28 00:51:28.149678 containerd[1638]: time="2026-04-28T00:51:28.149648048Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 28 00:51:28.150347 containerd[1638]: time="2026-04-28T00:51:28.150292612Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150379769Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 28 00:51:28.150812 containerd[1638]: 
time="2026-04-28T00:51:28.150507963Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150655491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150690189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150700507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150708867Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150726084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150734811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150743540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150752312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 28 00:51:28.150812 containerd[1638]: time="2026-04-28T00:51:28.150758849Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 28 00:51:28.152396 containerd[1638]: time="2026-04-28T00:51:28.150827600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 28 00:51:28.152396 containerd[1638]: 
time="2026-04-28T00:51:28.150839846Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 28 00:51:28.152396 containerd[1638]: time="2026-04-28T00:51:28.150847462Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 28 00:51:28.152396 containerd[1638]: time="2026-04-28T00:51:28.150882671Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 28 00:51:28.152396 containerd[1638]: time="2026-04-28T00:51:28.150891347Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 28 00:51:28.152546 containerd[1638]: time="2026-04-28T00:51:28.152293328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 28 00:51:28.152546 containerd[1638]: time="2026-04-28T00:51:28.152534173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 28 00:51:28.152811 containerd[1638]: time="2026-04-28T00:51:28.152714663Z" level=info msg="Connect containerd service" Apr 28 00:51:28.152886 containerd[1638]: time="2026-04-28T00:51:28.152865892Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 00:51:28.182528 containerd[1638]: time="2026-04-28T00:51:28.182342423Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 00:51:29.456449 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 00:51:29.480973 systemd[1]: Started sshd@0-1-10.0.0.30:22-10.0.0.1:34956.service - OpenSSH per-connection server daemon (10.0.0.1:34956). 
Apr 28 00:51:30.358830 containerd[1638]: time="2026-04-28T00:51:30.354760314Z" level=info msg="Start subscribing containerd event" Apr 28 00:51:30.393648 containerd[1638]: time="2026-04-28T00:51:30.390294837Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 00:51:30.393648 containerd[1638]: time="2026-04-28T00:51:30.391389430Z" level=info msg="Start recovering state" Apr 28 00:51:30.393648 containerd[1638]: time="2026-04-28T00:51:30.391916546Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 00:51:30.398786 containerd[1638]: time="2026-04-28T00:51:30.398611702Z" level=info msg="Start event monitor" Apr 28 00:51:30.399682 containerd[1638]: time="2026-04-28T00:51:30.399632340Z" level=info msg="Start cni network conf syncer for default" Apr 28 00:51:30.399682 containerd[1638]: time="2026-04-28T00:51:30.399687444Z" level=info msg="Start streaming server" Apr 28 00:51:30.399828 containerd[1638]: time="2026-04-28T00:51:30.399810847Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 28 00:51:30.399898 containerd[1638]: time="2026-04-28T00:51:30.399878362Z" level=info msg="runtime interface starting up..." Apr 28 00:51:30.400204 containerd[1638]: time="2026-04-28T00:51:30.399943729Z" level=info msg="starting plugins..." Apr 28 00:51:30.452790 containerd[1638]: time="2026-04-28T00:51:30.451824402Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 28 00:51:30.477262 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 28 00:51:30.525827 containerd[1638]: time="2026-04-28T00:51:30.517272211Z" level=info msg="containerd successfully booted in 2.626617s" Apr 28 00:51:31.532269 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 34956 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 00:51:31.534639 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:31.963032 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 00:51:31.965316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 00:51:31.980294 systemd-logind[1614]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 28 00:51:32.149137 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 00:51:32.151605 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 00:51:32.198427 (systemd)[1759]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:32.257356 systemd-logind[1614]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 28 00:51:32.883282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:51:32.890861 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 00:51:32.925117 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:51:33.358020 systemd[1759]: Queued start job for default target default.target. Apr 28 00:51:33.373779 systemd[1759]: Created slice app.slice - User Application Slice. Apr 28 00:51:33.373814 systemd[1759]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 28 00:51:33.373826 systemd[1759]: Reached target machines.target - Virtual Machines and Containers. 
Apr 28 00:51:33.373892 systemd[1759]: Reached target paths.target - Paths. Apr 28 00:51:33.373912 systemd[1759]: Reached target timers.target - Timers. Apr 28 00:51:33.384532 systemd[1759]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 00:51:33.388536 systemd[1759]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 28 00:51:33.682419 systemd[1759]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 28 00:51:33.735847 systemd[1759]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 28 00:51:33.742566 systemd[1759]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 00:51:33.742692 systemd[1759]: Reached target sockets.target - Sockets. Apr 28 00:51:33.742736 systemd[1759]: Reached target basic.target - Basic System. Apr 28 00:51:33.742764 systemd[1759]: Reached target default.target - Main User Target. Apr 28 00:51:33.742805 systemd[1759]: Startup finished in 1.057s. Apr 28 00:51:33.743257 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 00:51:33.755141 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 00:51:33.756392 systemd[1]: Startup finished in 18.818s (kernel) + 3min 19.014s (initrd) + 34.155s (userspace) = 4min 11.988s. Apr 28 00:51:33.808569 systemd[1]: Started sshd@1-4097-10.0.0.30:22-10.0.0.1:48596.service - OpenSSH per-connection server daemon (10.0.0.1:48596). Apr 28 00:51:34.165808 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 48596 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 00:51:34.167081 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:34.178241 systemd-logind[1614]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 28 00:51:34.207302 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 28 00:51:34.255278 sshd[1793]: Connection closed by 10.0.0.1 port 48596 Apr 28 00:51:34.256364 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:34.417180 systemd[1]: sshd@1-4097-10.0.0.30:22-10.0.0.1:48596.service: Deactivated successfully. Apr 28 00:51:34.435136 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 00:51:34.443253 systemd-logind[1614]: Session 3 logged out. Waiting for processes to exit. Apr 28 00:51:34.497013 systemd[1]: Started sshd@2-2-10.0.0.30:22-10.0.0.1:48600.service - OpenSSH per-connection server daemon (10.0.0.1:48600). Apr 28 00:51:34.506323 systemd-logind[1614]: Removed session 3. Apr 28 00:51:35.500759 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 48600 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 00:51:35.579462 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:35.664893 systemd-logind[1614]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 28 00:51:35.698952 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 00:51:35.876711 sshd[1803]: Connection closed by 10.0.0.1 port 48600 Apr 28 00:51:35.884863 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:35.898868 kubelet[1773]: E0428 00:51:35.896043 1773 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:51:35.998545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:51:35.998669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:51:35.999140 systemd[1]: kubelet.service: Consumed 6.408s CPU time, 259.8M memory peak. 
Apr 28 00:51:36.044810 systemd[1]: sshd@2-2-10.0.0.30:22-10.0.0.1:48600.service: Deactivated successfully. Apr 28 00:51:36.157491 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 00:51:36.207912 systemd-logind[1614]: Session 4 logged out. Waiting for processes to exit. Apr 28 00:51:36.293786 systemd[1]: Started sshd@3-3-10.0.0.30:22-10.0.0.1:48612.service - OpenSSH per-connection server daemon (10.0.0.1:48612). Apr 28 00:51:36.449593 systemd-logind[1614]: Removed session 4. Apr 28 00:51:37.156806 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 48612 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 00:51:37.160544 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:37.166114 systemd-logind[1614]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 28 00:51:37.219682 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 00:51:37.549861 sshd[1814]: Connection closed by 10.0.0.1 port 48612 Apr 28 00:51:37.556914 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Apr 28 00:51:37.721189 systemd[1]: sshd@3-3-10.0.0.30:22-10.0.0.1:48612.service: Deactivated successfully. Apr 28 00:51:37.750750 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 00:51:37.872149 systemd-logind[1614]: Session 5 logged out. Waiting for processes to exit. Apr 28 00:51:37.885624 systemd[1]: Started sshd@4-4-10.0.0.30:22-10.0.0.1:48620.service - OpenSSH per-connection server daemon (10.0.0.1:48620). Apr 28 00:51:38.175082 systemd-logind[1614]: Removed session 5. Apr 28 00:51:39.261697 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 48620 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 00:51:39.315916 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:51:39.753299 systemd-logind[1614]: New session '6' of user 'core' with class 'user' and type 'tty'. 
Apr 28 00:51:39.770609 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 00:51:40.112001 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 00:51:40.135893 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:51:46.185343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 00:51:46.227506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:51:48.770831 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 00:51:48.906341 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 00:51:49.744625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:51:49.757387 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:51:50.281502 kubelet[1860]: E0428 00:51:50.281266 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:51:50.314162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:51:50.314362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:51:50.316896 systemd[1]: kubelet.service: Consumed 2.395s CPU time, 110.8M memory peak. 
Apr 28 00:51:53.786570 dockerd[1850]: time="2026-04-28T00:51:53.785569538Z" level=info msg="Starting up" Apr 28 00:51:53.813632 dockerd[1850]: time="2026-04-28T00:51:53.812296455Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 28 00:51:54.351438 dockerd[1850]: time="2026-04-28T00:51:54.344342070Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 28 00:51:56.182314 dockerd[1850]: time="2026-04-28T00:51:56.181733984Z" level=info msg="Loading containers: start." Apr 28 00:51:56.280121 kernel: Initializing XFRM netlink socket Apr 28 00:52:00.434287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 00:52:00.442966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:02.330351 systemd-networkd[1438]: docker0: Link UP Apr 28 00:52:02.558779 dockerd[1850]: time="2026-04-28T00:52:02.558144299Z" level=info msg="Loading containers: done." Apr 28 00:52:02.838041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:52:02.911480 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:03.009921 dockerd[1850]: time="2026-04-28T00:52:03.009198890Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 00:52:03.017720 dockerd[1850]: time="2026-04-28T00:52:03.010512147Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 28 00:52:03.044573 dockerd[1850]: time="2026-04-28T00:52:03.043849329Z" level=info msg="Initializing buildkit" Apr 28 00:52:03.139456 dockerd[1850]: time="2026-04-28T00:52:03.132909266Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 28 00:52:03.139456 dockerd[1850]: time="2026-04-28T00:52:03.138685835Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 28 00:52:03.140198 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2806455855-merged.mount: Deactivated successfully. Apr 28 00:52:03.795416 dockerd[1850]: time="2026-04-28T00:52:03.793994413Z" level=info msg="Completed buildkit initialization" Apr 28 00:52:03.807292 kubelet[2054]: E0428 00:52:03.806837 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:03.888956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:03.889255 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:52:03.897204 systemd[1]: kubelet.service: Consumed 2.251s CPU time, 109.2M memory peak. Apr 28 00:52:04.114360 dockerd[1850]: time="2026-04-28T00:52:04.109348912Z" level=info msg="Daemon has completed initialization" Apr 28 00:52:04.115922 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 00:52:04.131657 dockerd[1850]: time="2026-04-28T00:52:04.109840791Z" level=info msg="API listen on /run/docker.sock" Apr 28 00:52:09.782262 update_engine[1618]: I20260428 00:52:09.776003 1618 update_attempter.cc:509] Updating boot flags... Apr 28 00:52:14.265300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 28 00:52:14.272315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:14.298792 containerd[1638]: time="2026-04-28T00:52:14.298664084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 28 00:52:16.155593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:16.215009 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:16.837010 kubelet[2127]: E0428 00:52:16.836664 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:16.839406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:16.839591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:16.840393 systemd[1]: kubelet.service: Consumed 1.569s CPU time, 110.7M memory peak. Apr 28 00:52:19.656683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266533651.mount: Deactivated successfully. 
Apr 28 00:52:26.951172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 28 00:52:26.959872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:28.711002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:28.758122 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:29.393502 kubelet[2202]: E0428 00:52:29.393027 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:29.451086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:29.451358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:29.466412 systemd[1]: kubelet.service: Consumed 1.602s CPU time, 110.3M memory peak. 
Apr 28 00:52:35.113852 containerd[1638]: time="2026-04-28T00:52:35.113488077Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27088662" Apr 28 00:52:35.113852 containerd[1638]: time="2026-04-28T00:52:35.113597228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:35.217209 containerd[1638]: time="2026-04-28T00:52:35.215767677Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:35.711804 containerd[1638]: time="2026-04-28T00:52:35.711121967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:35.720269 containerd[1638]: time="2026-04-28T00:52:35.720091821Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 21.421348911s" Apr 28 00:52:35.720612 containerd[1638]: time="2026-04-28T00:52:35.720349639Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 28 00:52:35.740259 containerd[1638]: time="2026-04-28T00:52:35.739858551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 28 00:52:39.670738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Apr 28 00:52:39.747792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:40.839426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:40.876294 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:41.260138 kubelet[2222]: E0428 00:52:41.259717 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:41.262919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:41.263094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:41.264899 systemd[1]: kubelet.service: Consumed 978ms CPU time, 108.2M memory peak. 
Apr 28 00:52:43.546044 containerd[1638]: time="2026-04-28T00:52:43.545321099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:43.615153 containerd[1638]: time="2026-04-28T00:52:43.562920260Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21243949" Apr 28 00:52:43.757080 containerd[1638]: time="2026-04-28T00:52:43.755020248Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:44.215472 containerd[1638]: time="2026-04-28T00:52:44.214939727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:44.343526 containerd[1638]: time="2026-04-28T00:52:44.343023965Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 8.603032276s" Apr 28 00:52:44.343526 containerd[1638]: time="2026-04-28T00:52:44.343283055Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 28 00:52:44.354997 containerd[1638]: time="2026-04-28T00:52:44.354723405Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 28 00:52:51.393506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Apr 28 00:52:51.607870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:53.158837 containerd[1638]: time="2026-04-28T00:52:53.156110206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:53.178676 containerd[1638]: time="2026-04-28T00:52:53.175767182Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=1, bytes read=10485760" Apr 28 00:52:53.285759 containerd[1638]: time="2026-04-28T00:52:53.283186788Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:53.345012 containerd[1638]: time="2026-04-28T00:52:53.344659918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:53.513635 containerd[1638]: time="2026-04-28T00:52:53.510908275Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 9.156028003s" Apr 28 00:52:53.513635 containerd[1638]: time="2026-04-28T00:52:53.512982161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 28 00:52:53.554941 containerd[1638]: time="2026-04-28T00:52:53.543031599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 28 00:52:54.690731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:52:54.755582 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:56.812046 kubelet[2242]: E0428 00:52:56.795048 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:56.885324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:56.885574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:56.897375 systemd[1]: kubelet.service: Consumed 3.401s CPU time, 110.7M memory peak. Apr 28 00:53:07.322064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 28 00:53:07.361414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:53:09.904845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:53:09.931749 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:53:10.945123 kubelet[2263]: E0428 00:53:10.944720 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:53:10.967059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:53:10.970859 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:53:11.025204 systemd[1]: kubelet.service: Consumed 2.487s CPU time, 110.5M memory peak. 
Apr 28 00:53:11.691799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514632961.mount: Deactivated successfully. Apr 28 00:53:15.066372 containerd[1638]: time="2026-04-28T00:53:15.057892835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:15.084128 containerd[1638]: time="2026-04-28T00:53:15.082252186Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=1, bytes read=24147998" Apr 28 00:53:15.099834 containerd[1638]: time="2026-04-28T00:53:15.098953114Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:15.943147 containerd[1638]: time="2026-04-28T00:53:15.918563773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:16.056023 containerd[1638]: time="2026-04-28T00:53:16.055476670Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 22.497033747s" Apr 28 00:53:16.056023 containerd[1638]: time="2026-04-28T00:53:16.056018690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 28 00:53:16.082464 containerd[1638]: time="2026-04-28T00:53:16.082181059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 28 00:53:21.261890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 
8. Apr 28 00:53:21.332559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:53:23.580791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:53:23.618805 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:53:24.106305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036383251.mount: Deactivated successfully. Apr 28 00:53:25.331113 kubelet[2287]: E0428 00:53:25.330845 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:53:25.355620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:53:25.355913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:53:25.366549 systemd[1]: kubelet.service: Consumed 2.547s CPU time, 110.5M memory peak. Apr 28 00:53:35.394549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 28 00:53:35.482571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:53:36.962466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:53:37.007803 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:53:37.849514 kubelet[2317]: E0428 00:53:37.849070 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:53:37.863570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:53:37.879064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:53:37.973429 systemd[1]: kubelet.service: Consumed 1.642s CPU time, 110.2M memory peak. Apr 28 00:53:44.547966 containerd[1638]: time="2026-04-28T00:53:44.547187213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:44.572828 containerd[1638]: time="2026-04-28T00:53:44.569139531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21918931" Apr 28 00:53:44.658593 containerd[1638]: time="2026-04-28T00:53:44.656830593Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:44.896095 containerd[1638]: time="2026-04-28T00:53:44.895774278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:45.040699 containerd[1638]: time="2026-04-28T00:53:45.040099568Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 28.957670491s" Apr 28 00:53:45.040699 containerd[1638]: time="2026-04-28T00:53:45.040497321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 28 00:53:45.050790 containerd[1638]: time="2026-04-28T00:53:45.050534682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 28 00:53:47.958067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 28 00:53:48.196725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:53:48.685145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737420497.mount: Deactivated successfully. 
Apr 28 00:53:48.785098 containerd[1638]: time="2026-04-28T00:53:48.780016007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:53:48.804725 containerd[1638]: time="2026-04-28T00:53:48.803442321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 28 00:53:49.057939 containerd[1638]: time="2026-04-28T00:53:49.048692607Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:53:49.489808 containerd[1638]: time="2026-04-28T00:53:49.489205536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:53:49.497703 containerd[1638]: time="2026-04-28T00:53:49.494686740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 4.444052148s" Apr 28 00:53:49.497703 containerd[1638]: time="2026-04-28T00:53:49.496706225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 28 00:53:49.529735 containerd[1638]: time="2026-04-28T00:53:49.528172048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 28 00:53:50.806717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:53:50.915709 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:53:51.948968 kubelet[2374]: E0428 00:53:51.948664 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:53:51.959304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:53:51.959497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:53:51.965200 systemd[1]: kubelet.service: Consumed 2.236s CPU time, 110.7M memory peak. Apr 28 00:53:56.965202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559811675.mount: Deactivated successfully. Apr 28 00:54:02.134945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 28 00:54:02.261446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:54:04.250044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:54:04.261074 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:54:05.735095 kubelet[2402]: E0428 00:54:05.730578 2402 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:05.759103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:05.759478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:54:05.797713 systemd[1]: kubelet.service: Consumed 2.562s CPU time, 110.4M memory peak. Apr 28 00:54:15.942612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 28 00:54:16.189706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:54:18.457674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:54:18.585960 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:54:19.649207 kubelet[2462]: E0428 00:54:19.648912 2462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:19.694342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:19.712449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:54:19.835009 systemd[1]: kubelet.service: Consumed 2.555s CPU time, 111.6M memory peak. 
Apr 28 00:54:21.458699 containerd[1638]: time="2026-04-28T00:54:21.458151255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:54:21.485210 containerd[1638]: time="2026-04-28T00:54:21.465163969Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22863316" Apr 28 00:54:21.917755 containerd[1638]: time="2026-04-28T00:54:21.911000988Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:54:22.495206 containerd[1638]: time="2026-04-28T00:54:22.494569201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:54:22.552437 containerd[1638]: time="2026-04-28T00:54:22.552090646Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 33.022403037s" Apr 28 00:54:22.552437 containerd[1638]: time="2026-04-28T00:54:22.552274216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 28 00:54:30.100602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 28 00:54:30.311682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:54:33.616645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:54:33.653106 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:54:34.434637 kubelet[2497]: E0428 00:54:34.433650 2497 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:34.449886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:34.450098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:54:34.450864 systemd[1]: kubelet.service: Consumed 2.401s CPU time, 110.2M memory peak. Apr 28 00:54:44.640202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 28 00:54:44.714060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:54:48.533474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:54:48.608538 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:54:50.663830 kubelet[2525]: E0428 00:54:50.660203 2525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:50.836017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:50.848768 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:54:50.990210 systemd[1]: kubelet.service: Consumed 3.447s CPU time, 110.7M memory peak. 
Apr 28 00:55:00.995706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Apr 28 00:55:01.074545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:55:04.417345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:04.711842 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:55:08.145697 kubelet[2541]: E0428 00:55:08.145290 2541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:55:08.260789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:55:08.268977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:55:08.397870 systemd[1]: kubelet.service: Consumed 4.743s CPU time, 110.4M memory peak. Apr 28 00:55:12.408489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:12.432394 systemd[1]: kubelet.service: Consumed 4.743s CPU time, 110.4M memory peak. Apr 28 00:55:12.680065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:55:13.754310 systemd[1]: Reload requested from client PID 2559 ('systemctl') (unit session-6.scope)... Apr 28 00:55:13.754386 systemd[1]: Reloading... Apr 28 00:55:19.161692 zram_generator::config[2616]: No configuration found. Apr 28 00:55:19.198742 systemd-ssh-generator[2609]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 28 00:55:19.248365 (sd-exec-[2590]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. 
Apr 28 00:55:33.503818 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 28 00:55:45.464366 systemd[1]: Reloading finished in 31704 ms. Apr 28 00:55:46.482826 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 28 00:55:46.482999 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 28 00:55:46.485473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:46.488383 systemd[1]: kubelet.service: Consumed 1.386s CPU time, 98.8M memory peak. Apr 28 00:55:46.625203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:55:49.484104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:49.518356 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:55:50.655692 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:55:50.658092 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 28 00:55:50.658092 kubelet[2660]: I0428 00:55:50.657296 2660 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:55:53.617009 kubelet[2660]: I0428 00:55:53.614630 2660 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 00:55:53.695729 kubelet[2660]: I0428 00:55:53.668925 2660 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:55:53.695729 kubelet[2660]: I0428 00:55:53.683857 2660 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:55:53.695729 kubelet[2660]: I0428 00:55:53.685768 2660 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 00:55:53.696661 kubelet[2660]: I0428 00:55:53.696272 2660 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:55:53.944788 kubelet[2660]: E0428 00:55:53.944330 2660 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:55:53.948172 kubelet[2660]: I0428 00:55:53.945628 2660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:55:54.304244 kubelet[2660]: I0428 00:55:54.291400 2660 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 28 00:55:54.388203 kubelet[2660]: I0428 00:55:54.387191 2660 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 00:55:54.437706 kubelet[2660]: I0428 00:55:54.419688 2660 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:55:54.487875 kubelet[2660]: I0428 00:55:54.476839 2660 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:55:54.491494 kubelet[2660]: I0428 00:55:54.488048 2660 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 00:55:54.491494 
kubelet[2660]: I0428 00:55:54.488092 2660 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 00:55:54.491494 kubelet[2660]: I0428 00:55:54.490683 2660 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:55:54.523765 kubelet[2660]: I0428 00:55:54.523296 2660 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:55:54.527021 kubelet[2660]: I0428 00:55:54.526888 2660 kubelet.go:475] "Attempting to sync node with API server" Apr 28 00:55:54.527354 kubelet[2660]: I0428 00:55:54.527069 2660 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:55:54.527354 kubelet[2660]: I0428 00:55:54.527285 2660 kubelet.go:387] "Adding apiserver pod source" Apr 28 00:55:54.527590 kubelet[2660]: I0428 00:55:54.527379 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:55:54.529040 kubelet[2660]: E0428 00:55:54.528395 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:55:54.529040 kubelet[2660]: E0428 00:55:54.528966 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:55:54.619750 kubelet[2660]: I0428 00:55:54.617177 2660 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 28 00:55:54.665930 kubelet[2660]: I0428 00:55:54.662799 2660 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:55:54.665930 kubelet[2660]: I0428 00:55:54.663903 2660 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:55:54.665930 kubelet[2660]: W0428 00:55:54.665788 2660 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 00:55:54.761202 kubelet[2660]: I0428 00:55:54.760652 2660 server.go:1262] "Started kubelet" Apr 28 00:55:54.761202 kubelet[2660]: I0428 00:55:54.760911 2660 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:55:54.784967 kubelet[2660]: I0428 00:55:54.784620 2660 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:55:54.810478 kubelet[2660]: I0428 00:55:54.810179 2660 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:55:54.814101 kubelet[2660]: I0428 00:55:54.813921 2660 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:55:54.814960 kubelet[2660]: I0428 00:55:54.814510 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:55:54.857634 kubelet[2660]: I0428 00:55:54.856698 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:55:54.913844 kubelet[2660]: I0428 00:55:54.913786 2660 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 00:55:54.916812 kubelet[2660]: I0428 00:55:54.912118 2660 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:55:55.018434 kubelet[2660]: E0428 00:55:55.017841 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 
00:55:55.081925 kubelet[2660]: E0428 00:55:55.078166 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms" Apr 28 00:55:55.081925 kubelet[2660]: I0428 00:55:55.079547 2660 server.go:310] "Adding debug handlers to kubelet server" Apr 28 00:55:55.081925 kubelet[2660]: I0428 00:55:55.079599 2660 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:55:55.081925 kubelet[2660]: I0428 00:55:55.079948 2660 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:55:55.085601 kubelet[2660]: E0428 00:55:55.085465 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:55:55.096983 kubelet[2660]: E0428 00:55:55.018446 2660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5f456e64dafd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,LastTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:55:55.099561 kubelet[2660]: I0428 00:55:55.099193 2660 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:55:55.183673 kubelet[2660]: E0428 00:55:55.182794 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.296970 kubelet[2660]: E0428 00:55:55.293460 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.305383 kubelet[2660]: I0428 00:55:55.212760 2660 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 00:55:55.307553 kubelet[2660]: I0428 00:55:55.307527 2660 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:55:55.313109 kubelet[2660]: E0428 00:55:55.305896 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms" Apr 28 00:55:55.362879 kubelet[2660]: E0428 00:55:55.362683 2660 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:55:55.403711 kubelet[2660]: E0428 00:55:55.402762 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.515624 kubelet[2660]: E0428 00:55:55.513442 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.556802 kubelet[2660]: I0428 00:55:55.554934 2660 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:55:55.556802 kubelet[2660]: I0428 00:55:55.555026 2660 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:55:55.556802 kubelet[2660]: I0428 00:55:55.555149 2660 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:55:55.573573 kubelet[2660]: I0428 00:55:55.571747 2660 policy_none.go:49] "None policy: Start" Apr 28 00:55:55.587154 kubelet[2660]: I0428 00:55:55.587065 2660 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:55:55.587485 kubelet[2660]: I0428 00:55:55.587468 2660 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:55:55.615636 kubelet[2660]: I0428 00:55:55.608872 2660 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 28 00:55:55.623967 kubelet[2660]: I0428 00:55:55.622407 2660 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:55:55.690252 kubelet[2660]: E0428 00:55:55.624177 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.690252 kubelet[2660]: I0428 00:55:55.628727 2660 policy_none.go:47] "Start" Apr 28 00:55:55.690252 kubelet[2660]: I0428 00:55:55.685782 2660 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:55:55.690252 kubelet[2660]: E0428 00:55:55.686040 2660 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:55:55.693258 kubelet[2660]: E0428 00:55:55.692598 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:55:55.739110 kubelet[2660]: E0428 00:55:55.738732 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.739110 kubelet[2660]: E0428 00:55:55.738644 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms" Apr 28 00:55:55.803931 kubelet[2660]: E0428 00:55:55.790059 2660 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:55:55.813206 kubelet[2660]: E0428 00:55:55.806121 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:55:55.847681 kubelet[2660]: E0428 00:55:55.846668 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.863612 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 28 00:55:55.959204 kubelet[2660]: E0428 00:55:55.958593 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:55.965658 kubelet[2660]: E0428 00:55:55.965346 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:55:56.014854 kubelet[2660]: E0428 00:55:56.011615 2660 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:55:56.082764 kubelet[2660]: E0428 00:55:56.081505 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:56.090274 kubelet[2660]: E0428 00:55:56.089621 2660 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:55:56.198360 kubelet[2660]: E0428 00:55:56.194963 2660 kubelet_node_status.go:404] "Error getting the current 
node from lister" err="node \"localhost\" not found" Apr 28 00:55:56.199266 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 28 00:55:56.279704 kubelet[2660]: E0428 00:55:56.277717 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:55:56.308903 kubelet[2660]: E0428 00:55:56.301752 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:56.431105 kubelet[2660]: E0428 00:55:56.430599 2660 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:55:56.431105 kubelet[2660]: E0428 00:55:56.430763 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:56.487147 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 28 00:55:56.531902 kubelet[2660]: E0428 00:55:56.531732 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:56.581175 kubelet[2660]: E0428 00:55:56.579507 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="1.6s" Apr 28 00:55:56.655158 kubelet[2660]: E0428 00:55:56.653893 2660 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:55:56.660474 kubelet[2660]: E0428 00:55:56.660387 2660 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:55:56.677066 kubelet[2660]: I0428 00:55:56.675576 2660 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:55:56.677066 kubelet[2660]: I0428 00:55:56.675690 2660 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:55:56.698523 kubelet[2660]: I0428 00:55:56.697417 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:55:56.886598 kubelet[2660]: I0428 00:55:56.871933 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:56.926085 kubelet[2660]: E0428 00:55:56.925952 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:55:56.928719 kubelet[2660]: E0428 00:55:56.927096 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial 
tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Apr 28 00:55:56.931758 kubelet[2660]: E0428 00:55:56.931595 2660 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:55:56.938174 kubelet[2660]: E0428 00:55:56.938021 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:55:57.320116 kubelet[2660]: I0428 00:55:57.319915 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:57.331674 kubelet[2660]: E0428 00:55:57.331100 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Apr 28 00:55:57.406552 kubelet[2660]: I0428 00:55:57.398950 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf2ebce56cde410c1f7401213757c4d8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf2ebce56cde410c1f7401213757c4d8\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:55:57.484883 kubelet[2660]: I0428 00:55:57.483662 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf2ebce56cde410c1f7401213757c4d8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf2ebce56cde410c1f7401213757c4d8\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:55:57.496056 kubelet[2660]: I0428 00:55:57.484198 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf2ebce56cde410c1f7401213757c4d8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"cf2ebce56cde410c1f7401213757c4d8\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:55:57.605712 kubelet[2660]: I0428 00:55:57.603373 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:55:57.606395 kubelet[2660]: I0428 00:55:57.605447 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:55:57.677933 kubelet[2660]: I0428 00:55:57.674778 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:55:57.684869 kubelet[2660]: I0428 00:55:57.684583 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:55:57.684869 kubelet[2660]: I0428 00:55:57.684867 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:55:57.685273 kubelet[2660]: I0428 00:55:57.685207 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:55:57.747862 kubelet[2660]: I0428 00:55:57.747828 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:57.753340 systemd[1]: Created slice kubepods-burstable-podcf2ebce56cde410c1f7401213757c4d8.slice - libcontainer container kubepods-burstable-podcf2ebce56cde410c1f7401213757c4d8.slice. Apr 28 00:55:57.789311 kubelet[2660]: E0428 00:55:57.756852 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Apr 28 00:55:58.003676 kubelet[2660]: E0428 00:55:58.002788 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:55:58.082891 kubelet[2660]: E0428 00:55:58.079948 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:58.084055 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 28 00:55:58.193966 kubelet[2660]: E0428 00:55:58.193827 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="3.2s" Apr 28 00:55:58.194843 kubelet[2660]: E0428 00:55:58.194752 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:55:58.243816 kubelet[2660]: E0428 00:55:58.243474 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:55:58.250662 containerd[1638]: time="2026-04-28T00:55:58.250319928Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"cf2ebce56cde410c1f7401213757c4d8\" namespace:\"kube-system\"" Apr 28 00:55:58.265245 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 28 00:55:58.265766 kubelet[2660]: E0428 00:55:58.265732 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:58.354849 containerd[1638]: time="2026-04-28T00:55:58.354187506Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"c6bb8708a026256e82ca4c5631a78b5a\" namespace:\"kube-system\"" Apr 28 00:55:58.585565 kubelet[2660]: E0428 00:55:58.584138 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:55:58.619170 kubelet[2660]: E0428 00:55:58.618149 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:55:58.658012 kubelet[2660]: I0428 00:55:58.656912 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:55:58.689953 kubelet[2660]: E0428 00:55:58.688824 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Apr 28 00:55:58.691888 containerd[1638]: time="2026-04-28T00:55:58.691684144Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"824fd89300514e351ed3b68d82c665c6\" namespace:\"kube-system\"" Apr 28 00:55:58.798390 kubelet[2660]: E0428 00:55:58.797023 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:55:58.827132 kubelet[2660]: E0428 00:55:58.826770 2660 
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:55:59.498438 kubelet[2660]: E0428 00:55:59.497896 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:55:59.619673 containerd[1638]: time="2026-04-28T00:55:59.614414163Z" level=info msg="connecting to shim f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" namespace=k8s.io protocol=ttrpc version=3 Apr 28 00:55:59.645899 containerd[1638]: time="2026-04-28T00:55:59.643699768Z" level=info msg="connecting to shim e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805" address="unix:///run/containerd/s/a6f6fe89b2fd9ed7e76b21a90d817ac6b4bb652aa72b24cd6c021d9b1372cd4c" namespace=k8s.io protocol=ttrpc version=3 Apr 28 00:55:59.865898 containerd[1638]: time="2026-04-28T00:55:59.857499222Z" level=info msg="connecting to shim e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda" address="unix:///run/containerd/s/87324bb63ef3a4130ae0dbb17ad0d3ce89ecf0940cd570753f29942f5d39ca08" namespace=k8s.io protocol=ttrpc version=3 Apr 28 00:56:00.188784 kubelet[2660]: E0428 00:56:00.188630 2660 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: 
connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:56:00.436990 systemd[1]: Started cri-containerd-f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a.scope - libcontainer container f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a. Apr 28 00:56:00.637928 kubelet[2660]: I0428 00:56:00.637733 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:00.784110 kubelet[2660]: E0428 00:56:00.768108 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Apr 28 00:56:01.588854 kubelet[2660]: E0428 00:56:01.584717 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="6.4s" Apr 28 00:56:01.647068 systemd[1]: Started cri-containerd-e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805.scope - libcontainer container e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805. Apr 28 00:56:01.756497 systemd[1]: Started cri-containerd-e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda.scope - libcontainer container e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda. 
Apr 28 00:56:02.567471 containerd[1638]: time="2026-04-28T00:56:02.567152962Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"c6bb8708a026256e82ca4c5631a78b5a\" namespace:\"kube-system\" returns sandbox id \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\"" Apr 28 00:56:02.673841 kubelet[2660]: E0428 00:56:02.671590 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:02.918428 containerd[1638]: time="2026-04-28T00:56:02.917914017Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"824fd89300514e351ed3b68d82c665c6\" namespace:\"kube-system\" returns sandbox id \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\"" Apr 28 00:56:03.007596 kubelet[2660]: E0428 00:56:03.005781 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:03.058890 kubelet[2660]: E0428 00:56:03.058031 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:56:03.111355 containerd[1638]: time="2026-04-28T00:56:03.111097441Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for container name:\"kube-controller-manager\"" Apr 28 00:56:03.199747 containerd[1638]: time="2026-04-28T00:56:03.195717797Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"cf2ebce56cde410c1f7401213757c4d8\" namespace:\"kube-system\" returns sandbox id 
\"e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805\"" Apr 28 00:56:03.199747 containerd[1638]: time="2026-04-28T00:56:03.198704536Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for container name:\"kube-scheduler\"" Apr 28 00:56:03.314675 kubelet[2660]: E0428 00:56:03.314410 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:03.543758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645625634.mount: Deactivated successfully. Apr 28 00:56:03.549629 containerd[1638]: time="2026-04-28T00:56:03.546718920Z" level=info msg="Container a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:56:03.606175 kubelet[2660]: E0428 00:56:03.604774 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:56:03.608743 containerd[1638]: time="2026-04-28T00:56:03.608684762Z" level=info msg="CreateContainer within sandbox \"e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805\" for container name:\"kube-apiserver\"" Apr 28 00:56:03.619593 containerd[1638]: time="2026-04-28T00:56:03.619540177Z" level=info msg="Container 0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:56:03.694479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851154449.mount: Deactivated successfully. 
Apr 28 00:56:04.113971 containerd[1638]: time="2026-04-28T00:56:04.113610463Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for name:\"kube-controller-manager\" returns container id \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\"" Apr 28 00:56:04.159838 containerd[1638]: time="2026-04-28T00:56:04.159509794Z" level=info msg="StartContainer for \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\"" Apr 28 00:56:04.159838 containerd[1638]: time="2026-04-28T00:56:04.159674668Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for name:\"kube-scheduler\" returns container id \"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\"" Apr 28 00:56:04.161270 kubelet[2660]: I0428 00:56:04.161189 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:04.162211 kubelet[2660]: E0428 00:56:04.161852 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Apr 28 00:56:04.211971 containerd[1638]: time="2026-04-28T00:56:04.211603783Z" level=info msg="StartContainer for \"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\"" Apr 28 00:56:04.213717 containerd[1638]: time="2026-04-28T00:56:04.213602660Z" level=info msg="connecting to shim a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" protocol=ttrpc version=3 Apr 28 00:56:04.269472 containerd[1638]: time="2026-04-28T00:56:04.269096148Z" level=info msg="connecting to shim 0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542" address="unix:///run/containerd/s/87324bb63ef3a4130ae0dbb17ad0d3ce89ecf0940cd570753f29942f5d39ca08" 
protocol=ttrpc version=3 Apr 28 00:56:04.275714 containerd[1638]: time="2026-04-28T00:56:04.275370769Z" level=info msg="Container 9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:56:04.381422 kubelet[2660]: E0428 00:56:04.379128 2660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5f456e64dafd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,LastTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:04.642049 kubelet[2660]: E0428 00:56:04.636672 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:56:04.642531 systemd[1]: Started cri-containerd-a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41.scope - libcontainer container a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41. 
Apr 28 00:56:04.656316 containerd[1638]: time="2026-04-28T00:56:04.655858456Z" level=info msg="CreateContainer within sandbox \"e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805\" for name:\"kube-apiserver\" returns container id \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\"" Apr 28 00:56:04.694914 containerd[1638]: time="2026-04-28T00:56:04.694821673Z" level=info msg="StartContainer for \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\"" Apr 28 00:56:04.954862 containerd[1638]: time="2026-04-28T00:56:04.954467374Z" level=info msg="connecting to shim 9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16" address="unix:///run/containerd/s/a6f6fe89b2fd9ed7e76b21a90d817ac6b4bb652aa72b24cd6c021d9b1372cd4c" protocol=ttrpc version=3 Apr 28 00:56:04.967146 systemd[1]: Started cri-containerd-0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542.scope - libcontainer container 0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542. Apr 28 00:56:05.337354 systemd[1]: Started cri-containerd-9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16.scope - libcontainer container 9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16. 
Apr 28 00:56:05.745787 kubelet[2660]: E0428 00:56:05.745032 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:56:07.007740 kubelet[2660]: E0428 00:56:07.007309 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:08.115012 containerd[1638]: time="2026-04-28T00:56:08.020073205Z" level=info msg="StartContainer for \"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" returns successfully" Apr 28 00:56:08.147986 containerd[1638]: time="2026-04-28T00:56:08.119921502Z" level=info msg="StartContainer for \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\" returns successfully" Apr 28 00:56:08.157366 kubelet[2660]: E0428 00:56:08.111378 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="7s" Apr 28 00:56:08.954915 kubelet[2660]: E0428 00:56:08.953550 2660 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:56:09.060744 containerd[1638]: time="2026-04-28T00:56:09.046141961Z" level=info msg="StartContainer for \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" returns successfully" Apr 28 00:56:10.017437 kubelet[2660]: E0428 
00:56:10.017190 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:10.038038 kubelet[2660]: E0428 00:56:10.018966 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:10.109615 kubelet[2660]: E0428 00:56:10.108876 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:10.113818 kubelet[2660]: E0428 00:56:10.113628 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:10.648180 kubelet[2660]: E0428 00:56:10.647829 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:10.649277 kubelet[2660]: E0428 00:56:10.648548 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:10.701161 kubelet[2660]: I0428 00:56:10.700569 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:11.964392 kubelet[2660]: E0428 00:56:11.964050 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:12.008539 kubelet[2660]: E0428 00:56:11.964164 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:12.008539 kubelet[2660]: E0428 00:56:11.996325 2660 kubelet.go:3216] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:12.008539 kubelet[2660]: E0428 00:56:11.997205 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:12.008539 kubelet[2660]: E0428 00:56:11.997260 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:12.055944 kubelet[2660]: E0428 00:56:12.015131 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:13.334870 kubelet[2660]: E0428 00:56:13.319462 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:13.334870 kubelet[2660]: E0428 00:56:13.319779 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:13.362019 kubelet[2660]: E0428 00:56:13.335955 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:13.362019 kubelet[2660]: E0428 00:56:13.347056 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:14.499925 kubelet[2660]: E0428 00:56:14.499465 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:14.507307 kubelet[2660]: E0428 00:56:14.506345 2660 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:15.849562 kubelet[2660]: E0428 00:56:15.848555 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:15.947820 kubelet[2660]: E0428 00:56:15.941918 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:17.019734 kubelet[2660]: E0428 00:56:17.018920 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:17.840028 kubelet[2660]: E0428 00:56:17.839340 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:17.842332 kubelet[2660]: E0428 00:56:17.842293 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:20.712552 kubelet[2660]: E0428 00:56:20.712130 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:56:22.210333 kubelet[2660]: E0428 00:56:22.209629 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:56:24.510543 kubelet[2660]: E0428 00:56:24.510031 2660 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5f456e64dafd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,LastTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:24.603872 kubelet[2660]: E0428 00:56:24.602679 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:24.646255 kubelet[2660]: E0428 00:56:24.641403 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:25.196767 kubelet[2660]: E0428 00:56:25.194778 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 00:56:25.399938 kubelet[2660]: E0428 00:56:25.396925 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:56:26.196584 kubelet[2660]: E0428 00:56:26.196249 2660 
reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:56:26.470163 kubelet[2660]: E0428 00:56:26.467759 2660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:56:26.608917 kubelet[2660]: E0428 00:56:26.606751 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:56:26.611996 kubelet[2660]: E0428 00:56:26.611541 2660 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:56:27.062960 kubelet[2660]: E0428 00:56:27.060561 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:28.145735 kubelet[2660]: I0428 00:56:28.143824 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:34.697202 systemd[1759]: Created slice background.slice - User Background Tasks Slice. Apr 28 00:56:34.954731 systemd[1759]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 28 00:56:35.561913 systemd[1759]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Apr 28 00:56:36.550747 kubelet[2660]: E0428 00:56:36.549808 2660 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:56:36.749895 kubelet[2660]: E0428 00:56:36.697014 2660 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:56:37.354735 kubelet[2660]: E0428 00:56:37.354416 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:38.971158 kubelet[2660]: E0428 00:56:38.968261 2660 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 00:56:43.096194 kubelet[2660]: E0428 00:56:43.080926 2660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 00:56:44.836862 update_engine[1618]: I20260428 00:56:44.832809 1618 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 00:56:44.836862 update_engine[1618]: I20260428 00:56:44.835829 1618 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 00:56:44.863096 update_engine[1618]: I20260428 00:56:44.856437 1618 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 00:56:44.879384 update_engine[1618]: I20260428 00:56:44.871567 1618 omaha_request_params.cc:62] Current group set to alpha Apr 28 
00:56:44.879384 update_engine[1618]: I20260428 00:56:44.877619 1618 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 00:56:44.879384 update_engine[1618]: I20260428 00:56:44.878026 1618 update_attempter.cc:643] Scheduling an action processor start. Apr 28 00:56:44.879384 update_engine[1618]: I20260428 00:56:44.878057 1618 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:56:44.954045 update_engine[1618]: I20260428 00:56:44.884935 1618 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 00:56:44.954045 update_engine[1618]: I20260428 00:56:44.891016 1618 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:56:44.954045 update_engine[1618]: I20260428 00:56:44.894016 1618 omaha_request_action.cc:272] Request: Apr 28 00:56:44.954045 update_engine[1618]: I20260428 00:56:44.895334 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:56:44.968146 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 00:56:45.011025 update_engine[1618]: I20260428 00:56:44.998008 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:56:45.104930 update_engine[1618]: I20260428 00:56:45.085680 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:56:45.104930 update_engine[1618]: E20260428 00:56:45.102749 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:56:45.178476 update_engine[1618]: I20260428 00:56:45.110496 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 00:56:46.040325 kubelet[2660]: E0428 00:56:46.028967 2660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5f456e64dafd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,LastTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:56:47.449008 kubelet[2660]: E0428 00:56:47.445698 2660 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:56:47.904899 kubelet[2660]: I0428 00:56:47.868563 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:56:53.952918 kubelet[2660]: I0428 00:56:53.951721 2660 apiserver.go:52] "Watching apiserver" Apr 28 00:56:54.842072 kubelet[2660]: E0428 00:56:54.823155 2660 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 28 00:56:55.793533 update_engine[1618]: I20260428 00:56:55.783070 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:56:55.818421 update_engine[1618]: 
I20260428 00:56:55.795944 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:56:55.874931 update_engine[1618]: I20260428 00:56:55.871335 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:56:55.885951 update_engine[1618]: E20260428 00:56:55.883718 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:56:55.885951 update_engine[1618]: I20260428 00:56:55.884140 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 00:56:57.328127 kubelet[2660]: I0428 00:56:57.316774 2660 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 00:56:57.905462 kubelet[2660]: I0428 00:56:57.889571 2660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:57:00.335484 kubelet[2660]: E0428 00:57:00.328636 2660 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18aa5f456e64dafd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,LastTimestamp:2026-04-28 00:55:54.760567549 +0000 UTC m=+5.206518022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:57:00.787091 kubelet[2660]: I0428 00:57:00.783818 2660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:57:05.113654 kubelet[2660]: I0428 00:57:05.108803 2660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:57:05.411641 
kubelet[2660]: E0428 00:57:05.400157 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.694s" Apr 28 00:57:05.795648 update_engine[1618]: I20260428 00:57:05.791185 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:57:05.807157 update_engine[1618]: I20260428 00:57:05.799109 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:57:05.899815 update_engine[1618]: I20260428 00:57:05.888816 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 00:57:06.009040 update_engine[1618]: E20260428 00:57:06.004101 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:57:06.087030 update_engine[1618]: I20260428 00:57:06.016700 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 00:57:12.351839 kubelet[2660]: I0428 00:57:12.350810 2660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:57:15.788893 update_engine[1618]: I20260428 00:57:15.784675 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:57:15.972184 update_engine[1618]: I20260428 00:57:15.798736 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:57:15.998916 update_engine[1618]: I20260428 00:57:15.988716 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:57:16.073851 update_engine[1618]: E20260428 00:57:16.006906 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:57:16.104040 update_engine[1618]: I20260428 00:57:16.095610 1618 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:57:16.104040 update_engine[1618]: I20260428 00:57:16.099666 1618 omaha_request_action.cc:617] Omaha request response: Apr 28 00:57:16.255036 update_engine[1618]: E20260428 00:57:16.172000 1618 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 28 00:57:16.255036 update_engine[1618]: I20260428 00:57:16.189888 1618 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 00:57:16.255036 update_engine[1618]: I20260428 00:57:16.195197 1618 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:57:16.255036 update_engine[1618]: I20260428 00:57:16.195746 1618 update_attempter.cc:306] Processing Done. Apr 28 00:57:16.255036 update_engine[1618]: E20260428 00:57:16.211966 1618 update_attempter.cc:619] Update failed. Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.251585 1618 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.264125 1618 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.264578 1618 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.266311 1618 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.267074 1618 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.267149 1618 omaha_request_action.cc:272] Request: Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.267158 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 00:57:16.299960 update_engine[1618]: I20260428 00:57:16.291472 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.482652 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 00:57:16.948649 update_engine[1618]: E20260428 00:57:16.490123 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.519811 1618 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.560964 1618 omaha_request_action.cc:617] Omaha request response: Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.561493 1618 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.561502 1618 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.561507 1618 update_attempter.cc:306] Processing Done. Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.561568 1618 update_attempter.cc:310] Error event sent. 
Apr 28 00:57:16.948649 update_engine[1618]: I20260428 00:57:16.561627 1618 update_check_scheduler.cc:74] Next update check in 48m48s Apr 28 00:57:17.113674 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 00:57:17.244194 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 00:57:22.961176 kubelet[2660]: E0428 00:57:22.849125 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.441s" Apr 28 00:57:28.018915 kubelet[2660]: E0428 00:57:27.963847 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.993s" Apr 28 00:57:28.997727 kubelet[2660]: E0428 00:57:28.997167 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:31.371204 kubelet[2660]: E0428 00:57:31.370331 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.233s" Apr 28 00:57:34.424981 kubelet[2660]: E0428 00:57:34.424119 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.799s" Apr 28 00:57:36.118828 kubelet[2660]: E0428 00:57:36.115852 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:37.178081 kubelet[2660]: E0428 00:57:37.177884 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.713s" Apr 28 00:57:43.221082 kubelet[2660]: I0428 00:57:43.055673 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=36.048041609 podStartE2EDuration="36.048041609s" podCreationTimestamp="2026-04-28 00:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:57:42.966806954 +0000 UTC m=+113.412757446" watchObservedRunningTime="2026-04-28 00:57:43.048041609 +0000 UTC m=+113.493992090" Apr 28 00:57:44.357503 kubelet[2660]: E0428 00:57:44.357204 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.179s" Apr 28 00:57:45.888401 kubelet[2660]: E0428 00:57:45.888003 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.531s" Apr 28 00:57:45.961324 kubelet[2660]: E0428 00:57:45.961012 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:57:52.715099 kubelet[2660]: E0428 00:57:52.714825 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.903s" Apr 28 00:57:54.913920 kubelet[2660]: E0428 00:57:54.910991 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.196s" Apr 28 00:57:57.511130 kubelet[2660]: E0428 00:57:57.500887 2660 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 28 00:57:58.042147 kubelet[2660]: I0428 00:57:58.041733 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=54.966248527 podStartE2EDuration="54.966248527s" podCreationTimestamp="2026-04-28 00:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 
00:57:45.649173813 +0000 UTC m=+116.095124289" watchObservedRunningTime="2026-04-28 00:57:57.966248527 +0000 UTC m=+128.412199010" Apr 28 00:57:58.042147 kubelet[2660]: I0428 00:57:58.042298 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=45.042283865 podStartE2EDuration="45.042283865s" podCreationTimestamp="2026-04-28 00:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:57:57.978746743 +0000 UTC m=+128.424697262" watchObservedRunningTime="2026-04-28 00:57:58.042283865 +0000 UTC m=+128.488234354" Apr 28 00:57:58.659782 kubelet[2660]: E0428 00:57:58.659408 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:57:58.800092 kubelet[2660]: E0428 00:57:58.787797 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.867s" Apr 28 00:58:01.209602 kubelet[2660]: E0428 00:58:01.178814 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.082s" Apr 28 00:58:03.653586 kubelet[2660]: E0428 00:58:03.653516 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.405s" Apr 28 00:58:04.862423 kubelet[2660]: E0428 00:58:04.860706 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:05.298348 kubelet[2660]: E0428 00:58:05.298036 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.541s" Apr 28 00:58:06.910616 kubelet[2660]: E0428 
00:58:06.910195 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.197s" Apr 28 00:58:09.961106 kubelet[2660]: E0428 00:58:09.918862 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:14.707786 kubelet[2660]: E0428 00:58:14.706765 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.01s" Apr 28 00:58:15.032984 kubelet[2660]: E0428 00:58:14.981168 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:19.302050 kubelet[2660]: E0428 00:58:19.299982 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.463s" Apr 28 00:58:20.269173 kubelet[2660]: E0428 00:58:20.264093 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:20.748328 kubelet[2660]: E0428 00:58:20.746631 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.052s" Apr 28 00:58:25.751023 kubelet[2660]: E0428 00:58:25.748979 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:28.679487 systemd[1]: cri-containerd-a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41.scope: Deactivated successfully. 
Apr 28 00:58:28.693271 systemd[1]: cri-containerd-a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41.scope: Consumed 5.427s CPU time, 22.3M memory peak. Apr 28 00:58:29.098371 containerd[1638]: time="2026-04-28T00:58:28.988559799Z" level=info msg="received container exit event container_id:\"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\" id:\"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\" pid:2872 exit_status:1 exited_at:{seconds:1777337908 nanos:981263651}" Apr 28 00:58:29.687992 kubelet[2660]: E0428 00:58:29.686651 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.924s" Apr 28 00:58:31.527789 kubelet[2660]: E0428 00:58:31.516557 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:32.694317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41-rootfs.mount: Deactivated successfully. 
Apr 28 00:58:33.389682 kubelet[2660]: E0428 00:58:33.388837 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.69s" Apr 28 00:58:36.437620 kubelet[2660]: I0428 00:58:36.436906 2660 scope.go:117] "RemoveContainer" containerID="a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41" Apr 28 00:58:36.460478 kubelet[2660]: E0428 00:58:36.458094 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:36.708478 kubelet[2660]: E0428 00:58:36.654781 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:37.618823 containerd[1638]: time="2026-04-28T00:58:37.610914342Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for container name:\"kube-controller-manager\" attempt:1" Apr 28 00:58:39.515766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4229720823.mount: Deactivated successfully. Apr 28 00:58:39.657254 containerd[1638]: time="2026-04-28T00:58:39.657009116Z" level=info msg="Container 0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18: CDI devices from CRI Config.CDIDevices: []" Apr 28 00:58:39.812921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387975832.mount: Deactivated successfully. 
Apr 28 00:58:40.649183 kubelet[2660]: E0428 00:58:40.636144 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:40.769207 containerd[1638]: time="2026-04-28T00:58:40.769046300Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for name:\"kube-controller-manager\" attempt:1 returns container id \"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\"" Apr 28 00:58:40.957629 containerd[1638]: time="2026-04-28T00:58:40.950432103Z" level=info msg="StartContainer for \"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\"" Apr 28 00:58:41.457167 containerd[1638]: time="2026-04-28T00:58:41.456143822Z" level=info msg="connecting to shim 0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" protocol=ttrpc version=3 Apr 28 00:58:42.244869 kubelet[2660]: E0428 00:58:42.244631 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:43.439959 kubelet[2660]: E0428 00:58:43.438779 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.664s" Apr 28 00:58:43.749817 systemd[1]: Started cri-containerd-0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18.scope - libcontainer container 0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18. 
Apr 28 00:58:45.003127 kubelet[2660]: E0428 00:58:45.000077 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.313s" Apr 28 00:58:47.451451 kubelet[2660]: E0428 00:58:47.450949 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:48.149457 containerd[1638]: time="2026-04-28T00:58:48.149285126Z" level=info msg="StartContainer for \"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" returns successfully" Apr 28 00:58:48.890417 kubelet[2660]: E0428 00:58:48.886633 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:51.410078 kubelet[2660]: E0428 00:58:51.397131 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:52.218973 kubelet[2660]: E0428 00:58:52.218085 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.514s" Apr 28 00:58:53.520655 kubelet[2660]: E0428 00:58:53.520290 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:58:53.728555 kubelet[2660]: E0428 00:58:53.718961 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:55.833849 kubelet[2660]: E0428 00:58:55.827289 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:57.736805 kubelet[2660]: E0428 00:58:57.736418 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.437s" Apr 28 00:58:58.497734 kubelet[2660]: E0428 00:58:58.482173 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:58:59.502031 kubelet[2660]: E0428 00:58:59.461817 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:01.117153 kubelet[2660]: E0428 00:59:01.115524 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.405s" Apr 28 00:59:01.234268 kubelet[2660]: E0428 00:59:01.233766 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:04.498830 systemd[1]: Reload requested from client PID 3021 ('systemctl') (unit session-6.scope)... Apr 28 00:59:04.498941 systemd[1]: Reloading... Apr 28 00:59:04.928826 kubelet[2660]: E0428 00:59:04.927984 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:07.232342 zram_generator::config[3075]: No configuration found. 
Apr 28 00:59:07.246380 kubelet[2660]: E0428 00:59:07.240439 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.077s" Apr 28 00:59:07.249003 kubelet[2660]: E0428 00:59:07.248978 2660 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:07.364348 systemd-ssh-generator[3071]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 28 00:59:07.466758 (sd-exec-strv)[3052]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 28 00:59:10.308559 kubelet[2660]: E0428 00:59:10.308103 2660 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:59:10.412975 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 28 00:59:12.722945 kubelet[2660]: E0428 00:59:12.722834 2660 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s" Apr 28 00:59:13.392633 systemd[1]: Reloading finished in 8860 ms. Apr 28 00:59:15.665408 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:59:16.011346 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:59:16.011813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:59:16.011942 systemd[1]: kubelet.service: Consumed 2min 3.850s CPU time, 137M memory peak. Apr 28 00:59:16.196400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:59:20.915565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:59:20.986795 (kubelet)[3120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:59:23.609125 kubelet[3120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:59:23.609125 kubelet[3120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 00:59:23.620872 kubelet[3120]: I0428 00:59:23.609046 3120 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:59:23.825509 kubelet[3120]: I0428 00:59:23.825329 3120 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 00:59:23.825509 kubelet[3120]: I0428 00:59:23.825372 3120 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:59:23.830057 kubelet[3120]: I0428 00:59:23.827162 3120 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:59:23.834335 kubelet[3120]: I0428 00:59:23.831861 3120 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 00:59:23.907497 kubelet[3120]: I0428 00:59:23.906457 3120 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:59:24.037280 kubelet[3120]: I0428 00:59:24.036941 3120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 00:59:24.064601 kubelet[3120]: I0428 00:59:24.041889 3120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:59:25.181150 kubelet[3120]: I0428 00:59:25.180650 3120 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 28 00:59:25.644971 kubelet[3120]: I0428 00:59:25.621518 3120 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 28 00:59:25.652366 kubelet[3120]: I0428 00:59:25.646838 3120 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:59:25.652366 kubelet[3120]: I0428 00:59:25.649532 3120 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:59:25.652366 kubelet[3120]: I0428 00:59:25.650441 3120 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 00:59:25.652366 kubelet[3120]: I0428 00:59:25.651201 3120 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 00:59:25.695082 kubelet[3120]: I0428 00:59:25.653343 3120 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:59:25.695082 kubelet[3120]: I0428 00:59:25.654867 3120 state_mem.go:36] 
"Initialized new in-memory state store" Apr 28 00:59:25.708313 kubelet[3120]: I0428 00:59:25.698712 3120 kubelet.go:475] "Attempting to sync node with API server" Apr 28 00:59:25.714735 kubelet[3120]: I0428 00:59:25.708268 3120 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:59:25.863447 kubelet[3120]: I0428 00:59:25.862739 3120 kubelet.go:387] "Adding apiserver pod source" Apr 28 00:59:25.863447 kubelet[3120]: I0428 00:59:25.862984 3120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:59:26.651412 kubelet[3120]: I0428 00:59:26.645055 3120 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 28 00:59:26.664277 kubelet[3120]: I0428 00:59:26.663613 3120 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:59:26.664277 kubelet[3120]: I0428 00:59:26.663664 3120 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:59:26.895772 kubelet[3120]: I0428 00:59:26.887489 3120 apiserver.go:52] "Watching apiserver" Apr 28 00:59:27.110399 kubelet[3120]: I0428 00:59:27.096746 3120 server.go:1262] "Started kubelet" Apr 28 00:59:27.151805 kubelet[3120]: I0428 00:59:27.150460 3120 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:59:27.159967 kubelet[3120]: I0428 00:59:27.159822 3120 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:59:27.299593 kubelet[3120]: I0428 00:59:27.171127 3120 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:59:27.308363 kubelet[3120]: I0428 00:59:27.307938 3120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:59:27.328144 kubelet[3120]: I0428 00:59:27.323956 3120 
server.go:310] "Adding debug handlers to kubelet server" Apr 28 00:59:27.373961 kubelet[3120]: I0428 00:59:27.367529 3120 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 00:59:27.388374 kubelet[3120]: I0428 00:59:27.386681 3120 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:59:27.389398 kubelet[3120]: I0428 00:59:27.389272 3120 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:59:27.538749 kubelet[3120]: I0428 00:59:27.532715 3120 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:59:27.977041 kubelet[3120]: I0428 00:59:27.975766 3120 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:59:28.428009 kubelet[3120]: E0428 00:59:28.395777 3120 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:59:29.146453 kubelet[3120]: W0428 00:59:29.146010 3120 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout" Apr 28 00:59:29.632618 kubelet[3120]: I0428 00:59:29.613371 3120 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 00:59:29.811580 kubelet[3120]: I0428 00:59:29.811396 3120 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 28 00:59:29.812815 kubelet[3120]: I0428 00:59:29.812797 3120 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:59:29.821100 kubelet[3120]: I0428 00:59:29.820868 3120 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:59:29.994270 kubelet[3120]: E0428 00:59:29.989142 3120 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:59:30.061072 kubelet[3120]: I0428 00:59:30.054922 3120 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:59:30.109209 kubelet[3120]: I0428 00:59:30.107641 3120 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:59:30.137027 kubelet[3120]: I0428 00:59:30.134539 3120 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:59:30.260331 kubelet[3120]: E0428 00:59:30.133754 3120 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:59:30.438110 kubelet[3120]: E0428 00:59:30.437714 3120 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:59:30.915160 kubelet[3120]: E0428 00:59:30.865529 3120 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:59:31.943284 kubelet[3120]: E0428 00:59:31.835569 3120 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:59:33.554150 kubelet[3120]: E0428 00:59:33.551879 3120 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check 
may not have completed yet" Apr 28 00:59:36.831090 kubelet[3120]: E0428 00:59:36.825891 3120 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:59:38.064176 kubelet[3120]: I0428 00:59:38.061293 3120 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:59:38.064176 kubelet[3120]: I0428 00:59:38.061376 3120 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:59:38.064176 kubelet[3120]: I0428 00:59:38.061803 3120 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.184508 3120 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.184555 3120 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.184576 3120 policy_none.go:49] "None policy: Start" Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.184710 3120 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.184868 3120 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.221092 3120 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 28 00:59:38.227573 kubelet[3120]: I0428 00:59:38.221205 3120 policy_none.go:47] "Start" Apr 28 00:59:40.030891 kubelet[3120]: E0428 00:59:40.026995 3120 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:59:40.506467 kubelet[3120]: I0428 00:59:40.504929 3120 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:59:40.603326 kubelet[3120]: I0428 00:59:40.602165 3120 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:59:40.605051 kubelet[3120]: I0428 
00:59:40.605024 3120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:59:41.720496 kubelet[3120]: E0428 00:59:41.718967 3120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:59:42.183061 kubelet[3120]: I0428 00:59:42.180005 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:42.205167 kubelet[3120]: I0428 00:59:42.185887 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:42.205167 kubelet[3120]: I0428 00:59:42.186859 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf2ebce56cde410c1f7401213757c4d8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf2ebce56cde410c1f7401213757c4d8\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:59:42.205167 kubelet[3120]: I0428 00:59:42.187021 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:42.205167 kubelet[3120]: I0428 
00:59:42.191840 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:42.205167 kubelet[3120]: I0428 00:59:42.194868 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:59:42.219847 kubelet[3120]: I0428 00:59:42.194925 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf2ebce56cde410c1f7401213757c4d8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf2ebce56cde410c1f7401213757c4d8\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:59:42.219847 kubelet[3120]: I0428 00:59:42.194937 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf2ebce56cde410c1f7401213757c4d8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf2ebce56cde410c1f7401213757c4d8\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:59:42.219847 kubelet[3120]: I0428 00:59:42.195172 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:42.219847 kubelet[3120]: I0428 00:59:42.207978 3120 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:59:42.219847 kubelet[3120]: I0428 00:59:42.218632 3120 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:59:42.278641 kubelet[3120]: I0428 00:59:42.273820 3120 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:42.312727 kubelet[3120]: I0428 00:59:42.312536 3120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:42.961055 kubelet[3120]: E0428 00:59:42.956675 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:42.961055 kubelet[3120]: E0428 00:59:42.957818 3120 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 28 00:59:43.050049 kubelet[3120]: E0428 00:59:43.049100 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:43.749662 kubelet[3120]: I0428 00:59:43.749414 3120 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 28 00:59:43.751858 kubelet[3120]: I0428 00:59:43.751844 3120 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 00:59:43.754112 kubelet[3120]: E0428 00:59:43.752747 3120 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:43.759914 kubelet[3120]: E0428 00:59:43.759863 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 28 00:59:44.245323 kubelet[3120]: E0428 00:59:44.244127 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:46.511461 kubelet[3120]: E0428 00:59:46.511022 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:46.552662 kubelet[3120]: E0428 00:59:46.511810 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:46.780918 kubelet[3120]: E0428 00:59:46.763571 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:47.818110 kubelet[3120]: E0428 00:59:47.817840 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:51.900053 kubelet[3120]: E0428 00:59:51.899782 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.985s" Apr 28 00:59:53.644841 kubelet[3120]: E0428 00:59:53.635072 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:53.807875 kubelet[3120]: E0428 00:59:53.647836 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:54.724005 kubelet[3120]: E0428 00:59:54.723660 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:58.718149 kubelet[3120]: E0428 00:59:58.712999 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.792s" Apr 28 01:00:06.453492 systemd[1]: cri-containerd-0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18.scope: Deactivated successfully. Apr 28 01:00:06.454821 systemd[1]: cri-containerd-0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18.scope: Consumed 19.916s CPU time, 36.3M memory peak. Apr 28 01:00:08.093178 containerd[1638]: time="2026-04-28T01:00:07.989598029Z" level=info msg="received container exit event container_id:\"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" id:\"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" pid:2999 exit_status:1 exited_at:{seconds:1777338007 nanos:437810683}" Apr 28 01:00:08.890957 kubelet[3120]: E0428 01:00:08.788200 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.07s" Apr 28 01:00:11.612883 systemd[1]: cri-containerd-0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542.scope: Deactivated successfully. Apr 28 01:00:11.675898 systemd[1]: cri-containerd-0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542.scope: Consumed 35.131s CPU time, 22.3M memory peak. Apr 28 01:00:12.140756 sudo[1826]: pam_unix(sudo:session): session closed for user root Apr 28 01:00:12.284293 sshd[1825]: Connection closed by 10.0.0.1 port 48620 Apr 28 01:00:12.317835 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Apr 28 01:00:12.685753 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:48620.service: Deactivated successfully. Apr 28 01:00:13.116385 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 01:00:13.118102 systemd[1]: session-6.scope: Consumed 1min 44.116s CPU time, 225M memory peak. 
Apr 28 01:00:13.242993 systemd-logind[1614]: Session 6 logged out. Waiting for processes to exit. Apr 28 01:00:13.578801 systemd-logind[1614]: Removed session 6. Apr 28 01:00:13.687141 containerd[1638]: time="2026-04-28T01:00:13.683821107Z" level=info msg="received container exit event container_id:\"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" id:\"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" pid:2886 exit_status:1 exited_at:{seconds:1777338013 nanos:156846154}" Apr 28 01:00:18.105982 kubelet[3120]: E0428 01:00:18.089783 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.666s" Apr 28 01:00:18.148044 containerd[1638]: time="2026-04-28T01:00:18.140020923Z" level=error msg="failed to delete task" error="context deadline exceeded" id=0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18 Apr 28 01:00:18.207103 containerd[1638]: time="2026-04-28T01:00:18.159590016Z" level=error msg="failed to handle container TaskExit event container_id:\"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" id:\"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" pid:2999 exit_status:1 exited_at:{seconds:1777338007 nanos:437810683}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:00:18.956729 kubelet[3120]: E0428 01:00:18.956409 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:19.216750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18-rootfs.mount: Deactivated successfully. 
Apr 28 01:00:19.375065 containerd[1638]: time="2026-04-28T01:00:19.264879715Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 28 01:00:19.706170 containerd[1638]: time="2026-04-28T01:00:19.705452724Z" level=info msg="TaskExit event container_id:\"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" id:\"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" pid:2999 exit_status:1 exited_at:{seconds:1777338007 nanos:437810683}"
Apr 28 01:00:19.931454 kubelet[3120]: E0428 01:00:19.929431 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.791s"
Apr 28 01:00:22.059438 kubelet[3120]: E0428 01:00:22.058692 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.129s"
Apr 28 01:00:23.764669 containerd[1638]: time="2026-04-28T01:00:23.761168861Z" level=error msg="failed to delete task" error="context deadline exceeded" id=0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542
Apr 28 01:00:23.817017 containerd[1638]: time="2026-04-28T01:00:23.804014894Z" level=error msg="failed to handle container TaskExit event container_id:\"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" id:\"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" pid:2886 exit_status:1 exited_at:{seconds:1777338013 nanos:156846154}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 28 01:00:24.291507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542-rootfs.mount: Deactivated successfully.
Apr 28 01:00:24.379962 containerd[1638]: time="2026-04-28T01:00:24.373709925Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 28 01:00:24.411771 kubelet[3120]: E0428 01:00:24.411593 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.353s"
Apr 28 01:00:25.753723 kubelet[3120]: E0428 01:00:25.750157 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.31s"
Apr 28 01:00:26.617064 containerd[1638]: time="2026-04-28T01:00:26.616809023Z" level=info msg="TaskExit event container_id:\"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" id:\"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" pid:2886 exit_status:1 exited_at:{seconds:1777338013 nanos:156846154}"
Apr 28 01:00:26.744071 containerd[1638]: time="2026-04-28T01:00:26.743183687Z" level=error msg="failed to delete task" error="rpc error: code = NotFound desc = container not created: not found" id=0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542
Apr 28 01:00:26.994718 containerd[1638]: time="2026-04-28T01:00:26.992029926Z" level=info msg="Ensure that container 0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542 in task-service has been cleanup successfully"
Apr 28 01:00:27.418880 kubelet[3120]: I0428 01:00:27.418633 3120 scope.go:117] "RemoveContainer" containerID="a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41"
Apr 28 01:00:27.599378 containerd[1638]: time="2026-04-28T01:00:27.598583405Z" level=info msg="RemoveContainer for \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\""
Apr 28 01:00:27.719422 kubelet[3120]: I0428 01:00:27.715676 3120 scope.go:117] "RemoveContainer" containerID="a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41"
Apr 28 01:00:27.719422 kubelet[3120]: I0428 01:00:27.715933 3120 scope.go:117] "RemoveContainer" containerID="0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18"
Apr 28 01:00:27.790974 kubelet[3120]: E0428 01:00:27.787082 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:00:28.571976 containerd[1638]: time="2026-04-28T01:00:28.562478394Z" level=info msg="RemoveContainer for \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\" returns successfully"
Apr 28 01:00:28.612612 containerd[1638]: time="2026-04-28T01:00:28.612516880Z" level=info msg="RemoveContainer for \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\""
Apr 28 01:00:28.612767 containerd[1638]: time="2026-04-28T01:00:28.612721795Z" level=info msg="RemoveContainer for \"a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41\" returns successfully"
Apr 28 01:00:28.690742 containerd[1638]: time="2026-04-28T01:00:28.690292173Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for container name:\"kube-controller-manager\" attempt:2"
Apr 28 01:00:28.699001 kubelet[3120]: I0428 01:00:28.698828 3120 scope.go:117] "RemoveContainer" containerID="0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542"
Apr 28 01:00:28.699001 kubelet[3120]: E0428 01:00:28.699016 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:00:28.701462 containerd[1638]: time="2026-04-28T01:00:28.701409969Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for container name:\"kube-scheduler\" attempt:1"
Apr 28 01:00:29.018340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503911602.mount: Deactivated successfully.
Apr 28 01:00:29.255497 containerd[1638]: time="2026-04-28T01:00:29.238014150Z" level=info msg="Container 2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:00:29.256516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901026276.mount: Deactivated successfully.
Apr 28 01:00:29.560538 containerd[1638]: time="2026-04-28T01:00:29.556204045Z" level=info msg="Container e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:00:30.279132 containerd[1638]: time="2026-04-28T01:00:30.278406086Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for name:\"kube-controller-manager\" attempt:2 returns container id \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\""
Apr 28 01:00:30.288795 containerd[1638]: time="2026-04-28T01:00:30.279575556Z" level=info msg="StartContainer for \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\""
Apr 28 01:00:30.393997 containerd[1638]: time="2026-04-28T01:00:30.393525857Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for name:\"kube-scheduler\" attempt:1 returns container id \"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\""
Apr 28 01:00:30.582022 containerd[1638]: time="2026-04-28T01:00:30.520494854Z" level=info msg="connecting to shim 2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" protocol=ttrpc version=3
Apr 28 01:00:31.445122 containerd[1638]: time="2026-04-28T01:00:31.409037780Z" level=info msg="StartContainer for \"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\""
Apr 28 01:00:31.666140 containerd[1638]: time="2026-04-28T01:00:31.665895512Z" level=info msg="connecting to shim e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82" address="unix:///run/containerd/s/87324bb63ef3a4130ae0dbb17ad0d3ce89ecf0940cd570753f29942f5d39ca08" protocol=ttrpc version=3
Apr 28 01:00:32.834363 systemd[1]: Started cri-containerd-2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a.scope - libcontainer container 2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a.
Apr 28 01:00:35.661460 kubelet[3120]: E0428 01:00:35.657144 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.724s"
Apr 28 01:00:36.744768 systemd[1]: Started cri-containerd-e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82.scope - libcontainer container e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82.
Apr 28 01:00:39.611888 kubelet[3120]: E0428 01:00:39.608397 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.146s"
Apr 28 01:00:44.467133 kubelet[3120]: E0428 01:00:44.466452 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.481s"
Apr 28 01:00:46.834907 containerd[1638]: time="2026-04-28T01:00:46.717641326Z" level=info msg="StartContainer for \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" returns successfully"
Apr 28 01:00:54.906757 containerd[1638]: time="2026-04-28T01:00:54.885096796Z" level=info msg="StartContainer for \"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" returns successfully"
Apr 28 01:00:55.647736 kubelet[3120]: E0428 01:00:55.590002 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.968s"
Apr 28 01:01:00.583151 kubelet[3120]: E0428 01:01:00.572881 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:00.640190 kubelet[3120]: E0428 01:01:00.639790 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.838s"
Apr 28 01:01:01.571685 kubelet[3120]: E0428 01:01:01.571400 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:01.864556 kubelet[3120]: E0428 01:01:01.855554 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:02.591644 containerd[1638]: time="2026-04-28T01:01:02.571156976Z" level=info msg="container event discarded" container=f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a type=CONTAINER_CREATED_EVENT
Apr 28 01:01:02.607751 containerd[1638]: time="2026-04-28T01:01:02.596850463Z" level=info msg="container event discarded" container=f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a type=CONTAINER_STARTED_EVENT
Apr 28 01:01:02.991876 containerd[1638]: time="2026-04-28T01:01:02.989775475Z" level=info msg="container event discarded" container=e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda type=CONTAINER_CREATED_EVENT
Apr 28 01:01:03.014753 containerd[1638]: time="2026-04-28T01:01:03.009831792Z" level=info msg="container event discarded" container=e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda type=CONTAINER_STARTED_EVENT
Apr 28 01:01:03.239480 containerd[1638]: time="2026-04-28T01:01:03.213839522Z" level=info msg="container event discarded" container=e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805 type=CONTAINER_CREATED_EVENT
Apr 28 01:01:03.240829 containerd[1638]: time="2026-04-28T01:01:03.240530941Z" level=info msg="container event discarded" container=e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805 type=CONTAINER_STARTED_EVENT
Apr 28 01:01:03.240884 kubelet[3120]: E0428 01:01:03.239953 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:03.240884 kubelet[3120]: E0428 01:01:03.240273 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:03.817700 containerd[1638]: time="2026-04-28T01:01:03.807126039Z" level=info msg="container event discarded" container=a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41 type=CONTAINER_CREATED_EVENT
Apr 28 01:01:03.973942 containerd[1638]: time="2026-04-28T01:01:03.968798839Z" level=info msg="container event discarded" container=0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542 type=CONTAINER_CREATED_EVENT
Apr 28 01:01:04.650738 kubelet[3120]: E0428 01:01:04.644805 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:04.762312 kubelet[3120]: E0428 01:01:04.748493 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:04.764585 containerd[1638]: time="2026-04-28T01:01:04.652782333Z" level=info msg="container event discarded" container=9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16 type=CONTAINER_CREATED_EVENT
Apr 28 01:01:05.964041 kubelet[3120]: E0428 01:01:05.963662 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:06.712704 kubelet[3120]: E0428 01:01:06.711731 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:07.021764 containerd[1638]: time="2026-04-28T01:01:06.979504161Z" level=info msg="container event discarded" container=0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542 type=CONTAINER_STARTED_EVENT
Apr 28 01:01:07.098914 kubelet[3120]: E0428 01:01:07.093326 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:07.765295 containerd[1638]: time="2026-04-28T01:01:07.763580820Z" level=info msg="container event discarded" container=a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41 type=CONTAINER_STARTED_EVENT
Apr 28 01:01:08.609619 containerd[1638]: time="2026-04-28T01:01:08.605986198Z" level=info msg="container event discarded" container=9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16 type=CONTAINER_STARTED_EVENT
Apr 28 01:01:11.662527 kubelet[3120]: E0428 01:01:11.662276 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.535s"
Apr 28 01:01:13.493962 kubelet[3120]: E0428 01:01:13.473431 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.553s"
Apr 28 01:01:28.244830 kubelet[3120]: E0428 01:01:28.233036 3120 kubelet_node_status.go:398] "Node not becoming ready in time after startup"
Apr 28 01:01:28.287898 kubelet[3120]: E0428 01:01:28.245572 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.131s"
Apr 28 01:01:33.265408 kubelet[3120]: E0428 01:01:33.257979 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:01:33.370035 kubelet[3120]: E0428 01:01:33.314754 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.354s"
Apr 28 01:01:36.720758 kubelet[3120]: I0428 01:01:36.713178 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy\") pod \"kube-proxy-d52vp\" (UID: \"0119e170-e6c1-4e77-9131-085c2b9d7bc5\") " pod="kube-system/kube-proxy-d52vp"
Apr 28 01:01:36.720758 kubelet[3120]: I0428 01:01:36.720263 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0119e170-e6c1-4e77-9131-085c2b9d7bc5-xtables-lock\") pod \"kube-proxy-d52vp\" (UID: \"0119e170-e6c1-4e77-9131-085c2b9d7bc5\") " pod="kube-system/kube-proxy-d52vp"
Apr 28 01:01:36.720758 kubelet[3120]: I0428 01:01:36.720473 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0119e170-e6c1-4e77-9131-085c2b9d7bc5-lib-modules\") pod \"kube-proxy-d52vp\" (UID: \"0119e170-e6c1-4e77-9131-085c2b9d7bc5\") " pod="kube-system/kube-proxy-d52vp"
Apr 28 01:01:36.720758 kubelet[3120]: I0428 01:01:36.720553 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpbb\" (UniqueName: \"kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb\") pod \"kube-proxy-d52vp\" (UID: \"0119e170-e6c1-4e77-9131-085c2b9d7bc5\") " pod="kube-system/kube-proxy-d52vp"
Apr 28 01:01:36.847381 systemd[1]: Created slice kubepods-besteffort-pod0119e170_e6c1_4e77_9131_085c2b9d7bc5.slice - libcontainer container kubepods-besteffort-pod0119e170_e6c1_4e77_9131_085c2b9d7bc5.slice.
Apr 28 01:01:36.884504 kubelet[3120]: I0428 01:01:36.884411 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-cni-plugin\") pod \"kube-flannel-ds-tpgdg\" (UID: \"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\") " pod="kube-flannel/kube-flannel-ds-tpgdg"
Apr 28 01:01:36.912116 kubelet[3120]: I0428 01:01:36.911864 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-xtables-lock\") pod \"kube-flannel-ds-tpgdg\" (UID: \"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\") " pod="kube-flannel/kube-flannel-ds-tpgdg"
Apr 28 01:01:36.912116 kubelet[3120]: I0428 01:01:36.912091 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-run\") pod \"kube-flannel-ds-tpgdg\" (UID: \"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\") " pod="kube-flannel/kube-flannel-ds-tpgdg"
Apr 28 01:01:36.912116 kubelet[3120]: I0428 01:01:36.912182 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnx8j\" (UniqueName: \"kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j\") pod \"kube-flannel-ds-tpgdg\" (UID: \"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\") " pod="kube-flannel/kube-flannel-ds-tpgdg"
Apr 28 01:01:36.912802 kubelet[3120]: I0428 01:01:36.912407 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-cni\") pod \"kube-flannel-ds-tpgdg\" (UID: \"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\") " pod="kube-flannel/kube-flannel-ds-tpgdg"
Apr 28 01:01:36.912802 kubelet[3120]: I0428 01:01:36.912425 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg\") pod \"kube-flannel-ds-tpgdg\" (UID: \"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\") " pod="kube-flannel/kube-flannel-ds-tpgdg"
Apr 28 01:01:37.022270 systemd[1]: Created slice kubepods-burstable-pod61b03599_9c01_4d11_8ba6_0d4d43ff2bf4.slice - libcontainer container kubepods-burstable-pod61b03599_9c01_4d11_8ba6_0d4d43ff2bf4.slice.
Apr 28 01:01:37.305172 kubelet[3120]: E0428 01:01:37.303674 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:37.320695 containerd[1638]: time="2026-04-28T01:01:37.318731124Z" level=info msg="RunPodSandbox for name:\"kube-proxy-d52vp\" uid:\"0119e170-e6c1-4e77-9131-085c2b9d7bc5\" namespace:\"kube-system\""
Apr 28 01:01:37.442695 kubelet[3120]: E0428 01:01:37.442366 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:37.451460 containerd[1638]: time="2026-04-28T01:01:37.450006167Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-tpgdg\" uid:\"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\" namespace:\"kube-flannel\""
Apr 28 01:01:37.461491 containerd[1638]: time="2026-04-28T01:01:37.458880368Z" level=info msg="connecting to shim cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82" address="unix:///run/containerd/s/c7e8696aeb0ba7e4a7fc1a4377d5c8bc41318b4aacc3ee8e1d5700c28119b0d8" namespace=k8s.io protocol=ttrpc version=3
Apr 28 01:01:37.648667 containerd[1638]: time="2026-04-28T01:01:37.648208311Z" level=info msg="connecting to shim 30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3" address="unix:///run/containerd/s/6b4e4a91a5f6aab403175d61bfc06c84854786f5d88c51170f4914c96227d2b8" namespace=k8s.io protocol=ttrpc version=3
Apr 28 01:01:37.704463 systemd[1]: Started cri-containerd-cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82.scope - libcontainer container cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82.
Apr 28 01:01:38.000109 systemd[1]: Started cri-containerd-30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3.scope - libcontainer container 30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3.
Apr 28 01:01:38.285197 containerd[1638]: time="2026-04-28T01:01:38.284340591Z" level=info msg="RunPodSandbox for name:\"kube-proxy-d52vp\" uid:\"0119e170-e6c1-4e77-9131-085c2b9d7bc5\" namespace:\"kube-system\" returns sandbox id \"cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82\""
Apr 28 01:01:38.294249 kubelet[3120]: E0428 01:01:38.292736 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:01:38.294249 kubelet[3120]: E0428 01:01:38.292815 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:38.362399 containerd[1638]: time="2026-04-28T01:01:38.362183496Z" level=info msg="CreateContainer within sandbox \"cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82\" for container name:\"kube-proxy\""
Apr 28 01:01:38.419364 containerd[1638]: time="2026-04-28T01:01:38.419134744Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-tpgdg\" uid:\"61b03599-9c01-4d11-8ba6-0d4d43ff2bf4\" namespace:\"kube-flannel\" returns sandbox id \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\""
Apr 28 01:01:38.421845 kubelet[3120]: E0428 01:01:38.421577 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:38.423411 containerd[1638]: time="2026-04-28T01:01:38.423380833Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 28 01:01:38.502013 containerd[1638]: time="2026-04-28T01:01:38.498955637Z" level=info msg="Container aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:01:38.508374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153021302.mount: Deactivated successfully.
Apr 28 01:01:38.574833 kubelet[3120]: I0428 01:01:38.568456 3120 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 28 01:01:38.589399 containerd[1638]: time="2026-04-28T01:01:38.588573537Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 28 01:01:38.595653 kubelet[3120]: I0428 01:01:38.595331 3120 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 28 01:01:38.603535 containerd[1638]: time="2026-04-28T01:01:38.603347992Z" level=info msg="CreateContainer within sandbox \"cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82\" for name:\"kube-proxy\" returns container id \"aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995\""
Apr 28 01:01:38.612800 containerd[1638]: time="2026-04-28T01:01:38.612526613Z" level=info msg="StartContainer for \"aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995\""
Apr 28 01:01:38.625984 containerd[1638]: time="2026-04-28T01:01:38.625519698Z" level=info msg="connecting to shim aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995" address="unix:///run/containerd/s/c7e8696aeb0ba7e4a7fc1a4377d5c8bc41318b4aacc3ee8e1d5700c28119b0d8" protocol=ttrpc version=3
Apr 28 01:01:38.830276 systemd[1]: Started cri-containerd-aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995.scope - libcontainer container aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995.
Apr 28 01:01:38.954777 containerd[1638]: time="2026-04-28T01:01:38.954601883Z" level=info msg="StartContainer for \"aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995\" returns successfully"
Apr 28 01:01:39.239157 kubelet[3120]: E0428 01:01:39.238745 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:39.305319 kubelet[3120]: I0428 01:01:39.304917 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d52vp" podStartSLOduration=3.304881995 podStartE2EDuration="3.304881995s" podCreationTimestamp="2026-04-28 01:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:01:39.304382365 +0000 UTC m=+138.045581489" watchObservedRunningTime="2026-04-28 01:01:39.304881995 +0000 UTC m=+138.046081121"
Apr 28 01:01:40.364291 systemd[1]: Started sshd@5-8193-10.0.0.30:22-10.0.0.1:38134.service - OpenSSH per-connection server daemon (10.0.0.1:38134).
Apr 28 01:01:40.800101 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 38134 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:01:40.908661 sshd-session[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:01:41.227177 systemd-logind[1614]: New session '7' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:01:41.267083 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 01:01:43.332092 sshd[3586]: Connection closed by 10.0.0.1 port 38134
Apr 28 01:01:43.336790 sshd-session[3566]: pam_unix(sshd:session): session closed for user core
Apr 28 01:01:43.349812 kubelet[3120]: E0428 01:01:43.336564 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:01:43.351526 systemd[1]: sshd@5-8193-10.0.0.30:22-10.0.0.1:38134.service: Deactivated successfully.
Apr 28 01:01:43.353754 systemd[1]: session-7.scope: Deactivated successfully.
Apr 28 01:01:43.353966 systemd[1]: session-7.scope: Consumed 1.117s CPU time, 15.2M memory peak.
Apr 28 01:01:43.364001 systemd-logind[1614]: Session 7 logged out. Waiting for processes to exit.
Apr 28 01:01:43.367663 systemd-logind[1614]: Removed session 7.
Apr 28 01:01:43.533778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249529180.mount: Deactivated successfully.
Apr 28 01:01:43.998090 containerd[1638]: time="2026-04-28T01:01:43.997150736Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:01:44.012892 containerd[1638]: time="2026-04-28T01:01:44.011326805Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=1, bytes read=3469491"
Apr 28 01:01:44.014776 containerd[1638]: time="2026-04-28T01:01:44.014652086Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:01:44.030928 containerd[1638]: time="2026-04-28T01:01:44.030121279Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:01:44.040543 containerd[1638]: time="2026-04-28T01:01:44.039859053Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 5.616373483s"
Apr 28 01:01:44.042888 containerd[1638]: time="2026-04-28T01:01:44.042390958Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Apr 28 01:01:44.131230 containerd[1638]: time="2026-04-28T01:01:44.131003400Z" level=info msg="CreateContainer within sandbox \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\" for container name:\"install-cni-plugin\""
Apr 28 01:01:44.278075 containerd[1638]: time="2026-04-28T01:01:44.255286378Z" level=info msg="Container ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:01:44.307693 containerd[1638]: time="2026-04-28T01:01:44.307002737Z" level=info msg="CreateContainer within sandbox \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\" for name:\"install-cni-plugin\" returns container id \"ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564\""
Apr 28 01:01:44.316301 containerd[1638]: time="2026-04-28T01:01:44.315570995Z" level=info msg="StartContainer for \"ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564\""
Apr 28 01:01:44.401520 containerd[1638]: time="2026-04-28T01:01:44.398737908Z" level=info msg="connecting to shim ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564" address="unix:///run/containerd/s/6b4e4a91a5f6aab403175d61bfc06c84854786f5d88c51170f4914c96227d2b8" protocol=ttrpc version=3
Apr 28 01:01:44.635312 systemd[1]: Started cri-containerd-ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564.scope - libcontainer container ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564.
Apr 28 01:01:44.767318 systemd[1]: cri-containerd-ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564.scope: Deactivated successfully.
Apr 28 01:01:44.776731 containerd[1638]: time="2026-04-28T01:01:44.776405764Z" level=info msg="received container exit event container_id:\"ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564\" id:\"ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564\" pid:3624 exited_at:{seconds:1777338104 nanos:768702238}"
Apr 28 01:01:44.780760 containerd[1638]: time="2026-04-28T01:01:44.780685589Z" level=info msg="StartContainer for \"ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564\" returns successfully"
Apr 28 01:01:45.048794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564-rootfs.mount: Deactivated successfully.
Apr 28 01:01:45.159563 kubelet[3120]: E0428 01:01:45.159378 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:01:45.167896 containerd[1638]: time="2026-04-28T01:01:45.164209506Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Apr 28 01:01:48.796417 systemd[1]: Started sshd@6-5-10.0.0.30:22-10.0.0.1:38142.service - OpenSSH per-connection server daemon (10.0.0.1:38142).
Apr 28 01:01:49.763413 kubelet[3120]: E0428 01:01:49.762948 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:01:50.119312 kubelet[3120]: E0428 01:01:50.102435 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.153s"
Apr 28 01:01:51.486656 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 38142 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:01:51.571788 sshd-session[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:01:51.845538 systemd-logind[1614]: New session '8' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:01:51.892497 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 28 01:01:52.433283 kubelet[3120]: E0428 01:01:52.433019 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.33s"
Apr 28 01:01:53.427164 sshd[3655]: Connection closed by 10.0.0.1 port 38142
Apr 28 01:01:53.429728 sshd-session[3651]: pam_unix(sshd:session): session closed for user core
Apr 28 01:01:53.434873 systemd[1]: sshd@6-5-10.0.0.30:22-10.0.0.1:38142.service: Deactivated successfully.
Apr 28 01:01:53.435330 systemd[1]: sshd@6-5-10.0.0.30:22-10.0.0.1:38142.service: Consumed 1.257s CPU time, 5M memory peak.
Apr 28 01:01:53.448643 systemd[1]: session-8.scope: Deactivated successfully.
Apr 28 01:01:53.455484 systemd-logind[1614]: Session 8 logged out. Waiting for processes to exit.
Apr 28 01:01:53.468206 systemd-logind[1614]: Removed session 8.
Apr 28 01:01:55.005672 kubelet[3120]: E0428 01:01:55.000542 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:01:58.580170 systemd[1]: Started sshd@7-12289-10.0.0.30:22-10.0.0.1:52288.service - OpenSSH per-connection server daemon (10.0.0.1:52288).
Apr 28 01:01:59.309448 sshd[3682]: Accepted publickey for core from 10.0.0.1 port 52288 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:01:59.376286 sshd-session[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:01:59.590660 systemd-logind[1614]: New session '9' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:01:59.775789 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 28 01:02:00.933488 kubelet[3120]: E0428 01:02:00.920648 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:02:01.318497 kubelet[3120]: E0428 01:02:01.312455 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.394s"
Apr 28 01:02:02.611723 sshd[3694]: Connection closed by 10.0.0.1 port 52288
Apr 28 01:02:02.610379 sshd-session[3682]: pam_unix(sshd:session): session closed for user core
Apr 28 01:02:02.627530 systemd[1]: sshd@7-12289-10.0.0.30:22-10.0.0.1:52288.service: Deactivated successfully.
Apr 28 01:02:02.630843 systemd[1]: session-9.scope: Deactivated successfully.
Apr 28 01:02:02.631390 systemd[1]: session-9.scope: Consumed 1.800s CPU time, 17.7M memory peak.
Apr 28 01:02:02.640384 systemd-logind[1614]: Session 9 logged out. Waiting for processes to exit.
Apr 28 01:02:02.644664 systemd-logind[1614]: Removed session 9.
Apr 28 01:02:06.078261 kubelet[3120]: E0428 01:02:06.077996 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 28 01:02:07.884797 systemd[1]: Started sshd@8-8194-10.0.0.30:22-10.0.0.1:49760.service - OpenSSH per-connection server daemon (10.0.0.1:49760).
Apr 28 01:02:09.126254 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 49760 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:02:09.147975 sshd-session[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:02:09.223854 containerd[1638]: time="2026-04-28T01:02:09.220715474Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:02:09.340431 containerd[1638]: time="2026-04-28T01:02:09.338807278Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29342843"
Apr 28 01:02:09.435336 systemd-logind[1614]: New session '10' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:02:09.455140 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 28 01:02:09.482611 containerd[1638]: time="2026-04-28T01:02:09.481878565Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:02:09.591114 containerd[1638]: time="2026-04-28T01:02:09.590817832Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:02:09.608767 containerd[1638]: time="2026-04-28T01:02:09.608442127Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 24.443261417s"
Apr 28 01:02:09.608767 containerd[1638]: time="2026-04-28T01:02:09.608578008Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 28 01:02:09.749869 containerd[1638]: time="2026-04-28T01:02:09.739153538Z" level=info msg="CreateContainer within sandbox \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\" for container name:\"install-cni\""
Apr 28 01:02:10.116976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660010173.mount: Deactivated successfully.
Apr 28 01:02:10.280421 containerd[1638]: time="2026-04-28T01:02:10.271705258Z" level=info msg="Container 8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:02:10.395023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18710247.mount: Deactivated successfully.
Apr 28 01:02:10.439244 containerd[1638]: time="2026-04-28T01:02:10.436796486Z" level=info msg="CreateContainer within sandbox \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\" for name:\"install-cni\" returns container id \"8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b\"" Apr 28 01:02:10.587154 containerd[1638]: time="2026-04-28T01:02:10.580846077Z" level=info msg="StartContainer for \"8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b\"" Apr 28 01:02:10.661456 containerd[1638]: time="2026-04-28T01:02:10.660985639Z" level=info msg="connecting to shim 8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b" address="unix:///run/containerd/s/6b4e4a91a5f6aab403175d61bfc06c84854786f5d88c51170f4914c96227d2b8" protocol=ttrpc version=3 Apr 28 01:02:11.256761 systemd[1]: Started cri-containerd-8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b.scope - libcontainer container 8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b. Apr 28 01:02:11.538087 kubelet[3120]: E0428 01:02:11.536720 3120 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 01:02:11.582196 sshd[3733]: Connection closed by 10.0.0.1 port 49760 Apr 28 01:02:11.605129 sshd-session[3727]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:11.914771 systemd[1]: sshd@8-8194-10.0.0.30:22-10.0.0.1:49760.service: Deactivated successfully. Apr 28 01:02:12.172124 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 01:02:12.174686 systemd[1]: session-10.scope: Consumed 1.322s CPU time, 16M memory peak. Apr 28 01:02:12.367771 systemd-logind[1614]: Session 10 logged out. Waiting for processes to exit. Apr 28 01:02:12.459921 systemd-logind[1614]: Removed session 10. 
Apr 28 01:02:14.866279 kubelet[3120]: E0428 01:02:14.864828 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.602s" Apr 28 01:02:16.088299 systemd[1]: cri-containerd-8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b.scope: Deactivated successfully. Apr 28 01:02:16.211317 containerd[1638]: time="2026-04-28T01:02:16.210840859Z" level=info msg="received container exit event container_id:\"8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b\" id:\"8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b\" pid:3758 exited_at:{seconds:1777338136 nanos:133899814}" Apr 28 01:02:16.225828 containerd[1638]: time="2026-04-28T01:02:16.225554275Z" level=info msg="StartContainer for \"8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b\" returns successfully" Apr 28 01:02:16.474853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b-rootfs.mount: Deactivated successfully. Apr 28 01:02:16.629298 systemd[1]: Started sshd@9-8195-10.0.0.30:22-10.0.0.1:36590.service - OpenSSH per-connection server daemon (10.0.0.1:36590). Apr 28 01:02:17.321739 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 36590 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:17.375525 sshd-session[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:17.416254 kubelet[3120]: E0428 01:02:17.415708 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:17.611668 systemd-logind[1614]: New session '11' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:17.616850 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 28 01:02:17.786967 containerd[1638]: time="2026-04-28T01:02:17.785885456Z" level=info msg="CreateContainer within sandbox \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\" for container name:\"kube-flannel\"" Apr 28 01:02:18.189506 containerd[1638]: time="2026-04-28T01:02:18.176327927Z" level=info msg="Container 810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f: CDI devices from CRI Config.CDIDevices: []" Apr 28 01:02:18.220862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635812603.mount: Deactivated successfully. Apr 28 01:02:18.541385 containerd[1638]: time="2026-04-28T01:02:18.536134142Z" level=info msg="CreateContainer within sandbox \"30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3\" for name:\"kube-flannel\" returns container id \"810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f\"" Apr 28 01:02:18.560138 containerd[1638]: time="2026-04-28T01:02:18.558475989Z" level=info msg="StartContainer for \"810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f\"" Apr 28 01:02:18.820743 containerd[1638]: time="2026-04-28T01:02:18.811927262Z" level=info msg="connecting to shim 810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f" address="unix:///run/containerd/s/6b4e4a91a5f6aab403175d61bfc06c84854786f5d88c51170f4914c96227d2b8" protocol=ttrpc version=3 Apr 28 01:02:19.027624 systemd[1]: Started cri-containerd-810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f.scope - libcontainer container 810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f. Apr 28 01:02:19.493787 sshd[3792]: Connection closed by 10.0.0.1 port 36590 Apr 28 01:02:19.495051 sshd-session[3788]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:19.549847 systemd[1]: sshd@9-8195-10.0.0.30:22-10.0.0.1:36590.service: Deactivated successfully. Apr 28 01:02:19.690894 systemd[1]: session-11.scope: Deactivated successfully. 
Apr 28 01:02:19.708795 systemd-logind[1614]: Session 11 logged out. Waiting for processes to exit. Apr 28 01:02:19.822364 systemd-logind[1614]: Removed session 11. Apr 28 01:02:20.992453 containerd[1638]: time="2026-04-28T01:02:20.992321452Z" level=info msg="StartContainer for \"810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f\" returns successfully" Apr 28 01:02:21.844781 kubelet[3120]: E0428 01:02:21.844650 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:22.023706 kubelet[3120]: I0428 01:02:22.022694 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tpgdg" podStartSLOduration=14.81208242 podStartE2EDuration="46.02248438s" podCreationTimestamp="2026-04-28 01:01:36 +0000 UTC" firstStartedPulling="2026-04-28 01:01:38.422785613 +0000 UTC m=+137.163984728" lastFinishedPulling="2026-04-28 01:02:09.633187564 +0000 UTC m=+168.374386688" observedRunningTime="2026-04-28 01:02:22.008981519 +0000 UTC m=+180.750180638" watchObservedRunningTime="2026-04-28 01:02:22.02248438 +0000 UTC m=+180.763683508" Apr 28 01:02:22.835716 kubelet[3120]: E0428 01:02:22.835412 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:22.890520 kubelet[3120]: E0428 01:02:22.889306 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:22.903484 kubelet[3120]: E0428 01:02:22.903271 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:02:23.063925 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 28 01:02:23.515255 systemd-tmpfiles[3845]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 28 01:02:23.515273 systemd-tmpfiles[3845]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 28 01:02:23.529916 systemd-tmpfiles[3845]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 01:02:23.550111 systemd-tmpfiles[3845]: ACLs are not supported, ignoring. Apr 28 01:02:23.550243 systemd-tmpfiles[3845]: ACLs are not supported, ignoring. Apr 28 01:02:23.559001 systemd-tmpfiles[3845]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 01:02:23.559054 systemd-tmpfiles[3845]: Skipping /boot Apr 28 01:02:23.574011 systemd-networkd[1438]: flannel.1: Link UP Apr 28 01:02:23.574015 systemd-networkd[1438]: flannel.1: Gained carrier Apr 28 01:02:23.588320 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 28 01:02:23.588855 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 28 01:02:24.666074 systemd[1]: Started sshd@10-8196-10.0.0.30:22-10.0.0.1:43644.service - OpenSSH per-connection server daemon (10.0.0.1:43644). Apr 28 01:02:25.121613 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 43644 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:25.126838 sshd-session[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:25.188676 systemd-logind[1614]: New session '12' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:25.194565 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 28 01:02:25.272070 systemd-networkd[1438]: flannel.1: Gained IPv6LL Apr 28 01:02:26.078742 sshd[3909]: Connection closed by 10.0.0.1 port 43644 Apr 28 01:02:26.079650 sshd-session[3905]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:26.105006 systemd[1]: sshd@10-8196-10.0.0.30:22-10.0.0.1:43644.service: Deactivated successfully. Apr 28 01:02:26.129460 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 01:02:26.167029 systemd-logind[1614]: Session 12 logged out. Waiting for processes to exit. Apr 28 01:02:26.188373 systemd-logind[1614]: Removed session 12. Apr 28 01:02:28.916412 kubelet[3120]: E0428 01:02:28.915119 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:31.174876 systemd[1]: Started sshd@11-12290-10.0.0.30:22-10.0.0.1:54236.service - OpenSSH per-connection server daemon (10.0.0.1:54236). Apr 28 01:02:32.023941 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 54236 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:32.049763 sshd-session[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:32.300505 systemd-logind[1614]: New session '13' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:32.365626 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 01:02:33.145403 sshd[3952]: Connection closed by 10.0.0.1 port 54236 Apr 28 01:02:33.157360 sshd-session[3948]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:33.179345 systemd[1]: sshd@11-12290-10.0.0.30:22-10.0.0.1:54236.service: Deactivated successfully. Apr 28 01:02:33.212303 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 01:02:33.289200 systemd-logind[1614]: Session 13 logged out. Waiting for processes to exit. 
Apr 28 01:02:33.306942 systemd[1]: Started sshd@12-8197-10.0.0.30:22-10.0.0.1:54244.service - OpenSSH per-connection server daemon (10.0.0.1:54244). Apr 28 01:02:33.316109 systemd-logind[1614]: Removed session 13. Apr 28 01:02:33.637188 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:33.639409 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:33.658800 systemd-logind[1614]: New session '14' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:33.668575 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 01:02:34.184617 sshd[3979]: Connection closed by 10.0.0.1 port 54244 Apr 28 01:02:34.187998 sshd-session[3971]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:34.221201 systemd[1]: sshd@12-8197-10.0.0.30:22-10.0.0.1:54244.service: Deactivated successfully. Apr 28 01:02:34.224313 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 01:02:34.290303 systemd-logind[1614]: Session 14 logged out. Waiting for processes to exit. Apr 28 01:02:34.306267 systemd[1]: Started sshd@13-8198-10.0.0.30:22-10.0.0.1:54248.service - OpenSSH per-connection server daemon (10.0.0.1:54248). Apr 28 01:02:34.310663 systemd-logind[1614]: Removed session 14. Apr 28 01:02:34.505884 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 54248 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:34.512695 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:34.627814 systemd-logind[1614]: New session '15' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:34.637010 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 28 01:02:35.336539 sshd[4015]: Connection closed by 10.0.0.1 port 54248 Apr 28 01:02:35.337694 sshd-session[3992]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:35.353298 systemd[1]: sshd@13-8198-10.0.0.30:22-10.0.0.1:54248.service: Deactivated successfully. Apr 28 01:02:35.369600 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 01:02:35.371695 systemd-logind[1614]: Session 15 logged out. Waiting for processes to exit. Apr 28 01:02:35.384562 systemd-logind[1614]: Removed session 15. Apr 28 01:02:40.392568 systemd[1]: Started sshd@14-4098-10.0.0.30:22-10.0.0.1:50732.service - OpenSSH per-connection server daemon (10.0.0.1:50732). Apr 28 01:02:40.518197 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 50732 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:40.522210 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:40.609393 systemd-logind[1614]: New session '16' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:40.627764 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 01:02:40.897462 sshd[4056]: Connection closed by 10.0.0.1 port 50732 Apr 28 01:02:40.905028 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:40.914175 systemd[1]: sshd@14-4098-10.0.0.30:22-10.0.0.1:50732.service: Deactivated successfully. Apr 28 01:02:40.922888 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 01:02:41.015020 systemd-logind[1614]: Session 16 logged out. Waiting for processes to exit. Apr 28 01:02:41.025658 systemd-logind[1614]: Removed session 16. 
Apr 28 01:02:43.897952 kubelet[3120]: E0428 01:02:43.893466 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:46.086871 systemd[1]: Started sshd@15-8199-10.0.0.30:22-10.0.0.1:50742.service - OpenSSH per-connection server daemon (10.0.0.1:50742). Apr 28 01:02:46.911821 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 50742 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:46.968930 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:46.989127 systemd-logind[1614]: New session '17' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:47.009731 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 01:02:50.386303 sshd[4093]: Connection closed by 10.0.0.1 port 50742 Apr 28 01:02:50.503619 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:50.584885 systemd[1]: sshd@15-8199-10.0.0.30:22-10.0.0.1:50742.service: Deactivated successfully. Apr 28 01:02:50.611772 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 01:02:50.617907 systemd[1]: session-17.scope: Consumed 2.679s CPU time, 16.3M memory peak. Apr 28 01:02:50.641367 systemd-logind[1614]: Session 17 logged out. Waiting for processes to exit. Apr 28 01:02:50.648127 systemd-logind[1614]: Removed session 17. Apr 28 01:02:55.657941 systemd[1]: Started sshd@16-12291-10.0.0.30:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910). 
Apr 28 01:02:56.071473 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:56.094700 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:56.253686 systemd-logind[1614]: New session '18' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:56.269661 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 01:02:58.666912 sshd[4146]: Connection closed by 10.0.0.1 port 42910 Apr 28 01:02:58.676753 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:58.720767 systemd[1]: Started sshd@17-12292-10.0.0.30:22-10.0.0.1:42918.service - OpenSSH per-connection server daemon (10.0.0.1:42918). Apr 28 01:02:58.721680 systemd[1]: sshd@16-12291-10.0.0.30:22-10.0.0.1:42910.service: Deactivated successfully. Apr 28 01:02:58.785434 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 01:02:58.788727 systemd[1]: session-18.scope: Consumed 1.871s CPU time, 17M memory peak. Apr 28 01:02:58.818468 systemd-logind[1614]: Session 18 logged out. Waiting for processes to exit. Apr 28 01:02:58.821926 systemd-logind[1614]: Removed session 18. Apr 28 01:02:59.149852 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 42918 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:02:59.153046 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:59.201029 systemd-logind[1614]: New session '19' of user 'core' with class 'user' and type 'tty'. Apr 28 01:02:59.343391 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 01:02:59.830803 sshd[4170]: Connection closed by 10.0.0.1 port 42918 Apr 28 01:02:59.833677 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:59.876055 systemd[1]: sshd@17-12292-10.0.0.30:22-10.0.0.1:42918.service: Deactivated successfully. 
Apr 28 01:02:59.879038 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 01:02:59.880876 systemd-logind[1614]: Session 19 logged out. Waiting for processes to exit. Apr 28 01:02:59.894770 systemd[1]: Started sshd@18-4099-10.0.0.30:22-10.0.0.1:44162.service - OpenSSH per-connection server daemon (10.0.0.1:44162). Apr 28 01:02:59.901893 systemd-logind[1614]: Removed session 19. Apr 28 01:03:00.127277 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 44162 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:03:00.134429 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:03:00.142920 systemd-logind[1614]: New session '20' of user 'core' with class 'user' and type 'tty'. Apr 28 01:03:00.156503 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 01:03:02.274333 sshd[4186]: Connection closed by 10.0.0.1 port 44162 Apr 28 01:03:02.281814 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Apr 28 01:03:02.318839 systemd[1]: sshd@18-4099-10.0.0.30:22-10.0.0.1:44162.service: Deactivated successfully. Apr 28 01:03:02.322077 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 01:03:02.322529 systemd[1]: session-20.scope: Consumed 1.483s CPU time, 35.3M memory peak. Apr 28 01:03:02.528683 systemd-logind[1614]: Session 20 logged out. Waiting for processes to exit. Apr 28 01:03:02.687725 systemd[1]: Started sshd@19-12293-10.0.0.30:22-10.0.0.1:44164.service - OpenSSH per-connection server daemon (10.0.0.1:44164). Apr 28 01:03:02.690256 systemd-logind[1614]: Removed session 20. 
Apr 28 01:03:03.285615 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 44164 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:03:03.308406 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:03:03.394502 systemd-logind[1614]: New session '21' of user 'core' with class 'user' and type 'tty'. Apr 28 01:03:03.415485 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 01:03:07.485632 sshd[4229]: Connection closed by 10.0.0.1 port 44164 Apr 28 01:03:07.514057 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Apr 28 01:03:08.106693 systemd[1]: sshd@19-12293-10.0.0.30:22-10.0.0.1:44164.service: Deactivated successfully. Apr 28 01:03:08.198521 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 01:03:08.199352 systemd[1]: session-21.scope: Consumed 2.704s CPU time, 23.5M memory peak. Apr 28 01:03:08.230929 systemd-logind[1614]: Session 21 logged out. Waiting for processes to exit. Apr 28 01:03:08.232404 systemd[1]: Started sshd@20-6-10.0.0.30:22-10.0.0.1:44170.service - OpenSSH per-connection server daemon (10.0.0.1:44170). Apr 28 01:03:08.383775 systemd-logind[1614]: Removed session 21. Apr 28 01:03:10.390619 kubelet[3120]: E0428 01:03:10.390472 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.341s" Apr 28 01:03:11.820972 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 44170 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:03:12.222509 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:03:12.756851 systemd-logind[1614]: New session '22' of user 'core' with class 'user' and type 'tty'. Apr 28 01:03:12.777797 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 28 01:03:15.494489 kubelet[3120]: E0428 01:03:15.492052 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.407s" Apr 28 01:03:21.080908 kubelet[3120]: E0428 01:03:20.963429 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.368s" Apr 28 01:03:26.211176 sshd[4265]: Connection closed by 10.0.0.1 port 44170 Apr 28 01:03:26.282468 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Apr 28 01:03:26.690150 systemd[1]: sshd@20-6-10.0.0.30:22-10.0.0.1:44170.service: Deactivated successfully. Apr 28 01:03:26.816673 systemd[1]: sshd@20-6-10.0.0.30:22-10.0.0.1:44170.service: Consumed 1.263s CPU time, 4.1M memory peak. Apr 28 01:03:27.319332 systemd[1]: session-22.scope: Deactivated successfully. Apr 28 01:03:27.320435 systemd[1]: session-22.scope: Consumed 4.908s CPU time, 18.4M memory peak. Apr 28 01:03:28.263766 systemd-logind[1614]: Session 22 logged out. Waiting for processes to exit. Apr 28 01:03:29.039921 systemd-logind[1614]: Removed session 22. Apr 28 01:03:34.795055 systemd[1]: Started sshd@21-12294-10.0.0.30:22-10.0.0.1:45610.service - OpenSSH per-connection server daemon (10.0.0.1:45610). Apr 28 01:03:35.677744 systemd[1]: cri-containerd-e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82.scope: Deactivated successfully. Apr 28 01:03:35.980008 systemd[1]: cri-containerd-e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82.scope: Consumed 27.054s CPU time, 22.2M memory peak. Apr 28 01:03:36.310061 systemd[1]: cri-containerd-2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a.scope: Deactivated successfully. Apr 28 01:03:36.669510 systemd[1]: cri-containerd-2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a.scope: Consumed 46.832s CPU time, 56.7M memory peak. 
Apr 28 01:03:37.995974 containerd[1638]: time="2026-04-28T01:03:34.200944214Z" level=info msg="container event discarded" container=a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41 type=CONTAINER_STOPPED_EVENT Apr 28 01:03:37.995974 containerd[1638]: time="2026-04-28T01:03:37.987799544Z" level=info msg="received container exit event container_id:\"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" id:\"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" pid:3265 exit_status:1 exited_at:{seconds:1777338216 nanos:986834886}" Apr 28 01:03:40.213499 containerd[1638]: time="2026-04-28T01:03:40.002039290Z" level=info msg="received container exit event container_id:\"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" id:\"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" pid:3272 exit_status:1 exited_at:{seconds:1777338219 nanos:312978926}" Apr 28 01:03:41.076583 containerd[1638]: time="2026-04-28T01:03:40.924002459Z" level=info msg="container event discarded" container=0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18 type=CONTAINER_CREATED_EVENT Apr 28 01:03:47.748032 kubelet[3120]: E0428 01:03:45.989069 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 01:03:47.993538 containerd[1638]: time="2026-04-28T01:03:47.992390737Z" level=info msg="container event discarded" container=0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18 type=CONTAINER_STARTED_EVENT Apr 28 01:03:48.242776 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 45610 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:03:48.390147 containerd[1638]: time="2026-04-28T01:03:48.010614970Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Apr 28 01:03:48.463819 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:03:48.619654 containerd[1638]: time="2026-04-28T01:03:48.609986409Z" level=error msg="failed to handle container TaskExit event container_id:\"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" id:\"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" pid:3265 exit_status:1 exited_at:{seconds:1777338216 nanos:986834886}" error="failed to stop container: context deadline exceeded" Apr 28 01:03:48.819692 systemd-logind[1614]: New session '23' of user 'core' with class 'user' and type 'tty'. Apr 28 01:03:48.868975 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 28 01:03:48.977904 containerd[1638]: time="2026-04-28T01:03:48.782987653Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 28 01:03:50.386792 kubelet[3120]: E0428 01:03:50.384045 3120 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 28 01:03:50.559532 containerd[1638]: time="2026-04-28T01:03:50.524762464Z" level=info msg="TaskExit event container_id:\"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" id:\"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" pid:3265 exit_status:1 exited_at:{seconds:1777338216 nanos:986834886}" Apr 28 01:03:50.628172 containerd[1638]: time="2026-04-28T01:03:50.511195504Z" level=error msg="failed to delete task" error="context deadline exceeded" id=e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82 Apr 28 01:03:50.907243 kubelet[3120]: E0428 01:03:50.864188 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="29.374s"
Apr 28 01:03:51.064435 containerd[1638]: time="2026-04-28T01:03:51.059677385Z" level=error msg="failed to handle container TaskExit event container_id:\"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" id:\"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" pid:3272 exit_status:1 exited_at:{seconds:1777338219 nanos:312978926}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:03:51.557966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82-rootfs.mount: Deactivated successfully. Apr 28 01:03:51.679840 containerd[1638]: time="2026-04-28T01:03:51.678197442Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 28 01:03:52.277808 kubelet[3120]: E0428 01:03:52.269873 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:52.277808 kubelet[3120]: E0428 01:03:52.271311 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:52.494474 kubelet[3120]: E0428 01:03:52.394391 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:53.241064 kubelet[3120]: E0428 01:03:53.235720 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:54.610243 kubelet[3120]: E0428 01:03:54.610008 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.534s" Apr 28 01:03:55.106862 sshd[4323]: Connection closed by 10.0.0.1 port 45610 Apr 28 01:03:55.109606 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Apr 28 01:03:55.248849 systemd[1]: sshd@21-12294-10.0.0.30:22-10.0.0.1:45610.service: Deactivated successfully. Apr 28 01:03:55.282914 systemd[1]: sshd@21-12294-10.0.0.30:22-10.0.0.1:45610.service: Consumed 3.439s CPU time, 4.3M memory peak. Apr 28 01:03:55.450817 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 01:03:55.451572 systemd[1]: session-23.scope: Consumed 3.681s CPU time, 14.5M memory peak. Apr 28 01:03:55.498557 systemd-logind[1614]: Session 23 logged out. Waiting for processes to exit. Apr 28 01:03:55.500376 kubelet[3120]: E0428 01:03:55.499625 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:55.627031 systemd-logind[1614]: Removed session 23. Apr 28 01:03:57.165754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a-rootfs.mount: Deactivated successfully. 
Apr 28 01:03:57.376753 containerd[1638]: time="2026-04-28T01:03:57.376566017Z" level=info msg="TaskExit event container_id:\"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" id:\"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" pid:3272 exit_status:1 exited_at:{seconds:1777338219 nanos:312978926}"
Apr 28 01:03:57.419514 kubelet[3120]: E0428 01:03:57.418428 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.328s"
Apr 28 01:03:57.914164 kubelet[3120]: E0428 01:03:57.913925 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:03:58.800812 kubelet[3120]: I0428 01:03:58.800357 3120 scope.go:117] "RemoveContainer" containerID="0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542"
Apr 28 01:03:58.848961 kubelet[3120]: I0428 01:03:58.846432 3120 scope.go:117] "RemoveContainer" containerID="e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82"
Apr 28 01:03:58.877078 kubelet[3120]: E0428 01:03:58.876189 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:03:58.895054 containerd[1638]: time="2026-04-28T01:03:58.894757150Z" level=info msg="RemoveContainer for \"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\""
Apr 28 01:03:58.922898 kubelet[3120]: I0428 01:03:58.896804 3120 scope.go:117] "RemoveContainer" containerID="2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a"
Apr 28 01:03:58.922898 kubelet[3120]: E0428 01:03:58.896958 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:03:59.176673 containerd[1638]: time="2026-04-28T01:03:59.175804829Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for container name:\"kube-scheduler\" attempt:2"
Apr 28 01:03:59.323292 containerd[1638]: time="2026-04-28T01:03:59.322821250Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for container name:\"kube-controller-manager\" attempt:3"
Apr 28 01:03:59.539536 containerd[1638]: time="2026-04-28T01:03:59.537685240Z" level=info msg="RemoveContainer for \"0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542\" returns successfully"
Apr 28 01:03:59.569142 kubelet[3120]: I0428 01:03:59.539684 3120 scope.go:117] "RemoveContainer" containerID="0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18"
Apr 28 01:04:00.977648 containerd[1638]: time="2026-04-28T01:04:00.915847761Z" level=info msg="Container d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:04:01.287553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591409893.mount: Deactivated successfully.
Apr 28 01:04:01.771819 containerd[1638]: time="2026-04-28T01:04:01.601912881Z" level=info msg="RemoveContainer for \"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\""
Apr 28 01:04:02.205769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288070070.mount: Deactivated successfully.
Apr 28 01:04:02.694537 containerd[1638]: time="2026-04-28T01:04:02.673552925Z" level=info msg="Container 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc: CDI devices from CRI Config.CDIDevices: []"
Apr 28 01:04:03.250417 kubelet[3120]: E0428 01:04:03.249060 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.376s"
Apr 28 01:04:03.250839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560914046.mount: Deactivated successfully.
Apr 28 01:04:03.691135 systemd[1]: Started sshd@22-8200-10.0.0.30:22-10.0.0.1:51506.service - OpenSSH per-connection server daemon (10.0.0.1:51506).
Apr 28 01:04:09.587053 containerd[1638]: time="2026-04-28T01:04:09.562740224Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for name:\"kube-controller-manager\" attempt:3 returns container id \"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\""
Apr 28 01:04:10.441813 containerd[1638]: time="2026-04-28T01:04:10.441188733Z" level=info msg="RemoveContainer for \"0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18\" returns successfully"
Apr 28 01:04:11.268901 containerd[1638]: time="2026-04-28T01:04:11.268755871Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for name:\"kube-scheduler\" attempt:2 returns container id \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\""
Apr 28 01:04:11.326319 containerd[1638]: time="2026-04-28T01:04:11.313326372Z" level=info msg="StartContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\""
Apr 28 01:04:11.355616 containerd[1638]: time="2026-04-28T01:04:11.355037705Z" level=info msg="StartContainer for \"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\""
Apr 28 01:04:11.395254 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 51506 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:04:11.516548 containerd[1638]: time="2026-04-28T01:04:11.396648777Z" level=info msg="connecting to shim d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" protocol=ttrpc version=3
Apr 28 01:04:11.555793 containerd[1638]: time="2026-04-28T01:04:11.540842764Z" level=info msg="connecting to shim 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" address="unix:///run/containerd/s/87324bb63ef3a4130ae0dbb17ad0d3ce89ecf0940cd570753f29942f5d39ca08" protocol=ttrpc version=3
Apr 28 01:04:11.541573 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:04:11.622088 kubelet[3120]: E0428 01:04:11.541048 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.292s"
Apr 28 01:04:13.479608 systemd-logind[1614]: New session '24' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:04:13.531657 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 28 01:04:15.953600 kubelet[3120]: E0428 01:04:15.947124 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.308s"
Apr 28 01:04:18.493660 kubelet[3120]: E0428 01:04:18.486282 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.517s"
Apr 28 01:04:32.369102 systemd[1]: Started cri-containerd-307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc.scope - libcontainer container 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc.
Apr 28 01:04:33.647528 systemd[1]: Started cri-containerd-d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b.scope - libcontainer container d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b.
Apr 28 01:04:34.139150 kubelet[3120]: E0428 01:04:34.111861 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 28 01:04:44.919556 kubelet[3120]: E0428 01:04:44.874035 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.646s"
Apr 28 01:04:45.993157 kubelet[3120]: E0428 01:04:45.987059 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 28 01:04:47.112746 containerd[1638]: time="2026-04-28T01:04:45.982758152Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded"
Apr 28 01:04:47.522139 containerd[1638]: time="2026-04-28T01:04:47.265868253Z" level=warning msg="unknown status" status=0
Apr 28 01:04:48.155351 sshd[4439]: Connection closed by 10.0.0.1 port 51506
Apr 28 01:04:48.161693 sshd-session[4405]: pam_unix(sshd:session): session closed for user core
Apr 28 01:04:48.502961 systemd[1]: sshd@22-8200-10.0.0.30:22-10.0.0.1:51506.service: Deactivated successfully.
Apr 28 01:04:48.504362 systemd[1]: sshd@22-8200-10.0.0.30:22-10.0.0.1:51506.service: Consumed 2.445s CPU time, 4.4M memory peak.
Apr 28 01:04:48.519733 systemd[1]: session-24.scope: Deactivated successfully.
Apr 28 01:04:48.604520 systemd[1]: session-24.scope: Consumed 8.930s CPU time, 19.3M memory peak.
Apr 28 01:04:48.822399 systemd-logind[1614]: Session 24 logged out. Waiting for processes to exit.
Apr 28 01:04:48.938497 systemd-logind[1614]: Removed session 24.
Apr 28 01:04:51.562297 kubelet[3120]: I0428 01:04:51.562123 3120 scope.go:117] "RemoveContainer" containerID="2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a"
Apr 28 01:04:51.696301 containerd[1638]: time="2026-04-28T01:04:51.558968252Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded"
Apr 28 01:04:51.696301 containerd[1638]: time="2026-04-28T01:04:51.562416011Z" level=warning msg="unknown status" status=0
Apr 28 01:04:53.970114 systemd[1]: Started sshd@23-7-10.0.0.30:22-10.0.0.1:48374.service - OpenSSH per-connection server daemon (10.0.0.1:48374).
Apr 28 01:04:56.000456 containerd[1638]: time="2026-04-28T01:04:55.840077968Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded"
Apr 28 01:04:56.377778 containerd[1638]: time="2026-04-28T01:04:56.283117431Z" level=warning msg="unknown status" status=0
Apr 28 01:04:57.865373 containerd[1638]: time="2026-04-28T01:04:57.792982998Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 28 01:04:57.910143 containerd[1638]: time="2026-04-28T01:04:57.908668305Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 28 01:04:57.910143 containerd[1638]: time="2026-04-28T01:04:57.909164818Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 28 01:04:58.290935 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 48374 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:04:58.338509 kubelet[3120]: E0428 01:04:58.310927 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.033s"
Apr 28 01:04:58.349349 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:04:58.390548 systemd-logind[1614]: New session '25' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:04:58.405133 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 28 01:04:58.422834 containerd[1638]: time="2026-04-28T01:04:58.422645949Z" level=info msg="RemoveContainer for \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\""
Apr 28 01:04:59.156964 kubelet[3120]: E0428 01:04:59.155932 3120 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice/cri-containerd-307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc.scope\": RecentStats: unable to find data in memory cache]"
Apr 28 01:04:59.201092 containerd[1638]: time="2026-04-28T01:04:59.192390564Z" level=error msg="ContainerStatus for \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\": not found"
Apr 28 01:04:59.249906 containerd[1638]: time="2026-04-28T01:04:59.227461594Z" level=info msg="RemoveContainer for \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\" returns successfully"
Apr 28 01:04:59.267156 kubelet[3120]: E0428 01:04:59.225855 3120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a\": not found" containerID="2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a"
Apr 28 01:04:59.267156 kubelet[3120]: I0428 01:04:59.229178 3120 scope.go:117] "RemoveContainer" containerID="e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82"
Apr 28 01:04:59.891094 containerd[1638]: time="2026-04-28T01:04:59.891005054Z" level=info msg="RemoveContainer for \"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\""
Apr 28 01:05:00.412523 containerd[1638]: time="2026-04-28T01:05:00.412334598Z" level=info msg="StartContainer for \"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\" returns successfully"
Apr 28 01:05:00.437826 containerd[1638]: time="2026-04-28T01:05:00.437354008Z" level=info msg="RemoveContainer for \"e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82\" returns successfully"
Apr 28 01:05:00.854407 containerd[1638]: time="2026-04-28T01:05:00.841166715Z" level=info msg="StartContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" returns successfully"
Apr 28 01:05:01.533886 sshd[4548]: Connection closed by 10.0.0.1 port 48374
Apr 28 01:05:01.572608 sshd-session[4522]: pam_unix(sshd:session): session closed for user core
Apr 28 01:05:01.736888 systemd[1]: sshd@23-7-10.0.0.30:22-10.0.0.1:48374.service: Deactivated successfully.
Apr 28 01:05:01.759135 systemd[1]: sshd@23-7-10.0.0.30:22-10.0.0.1:48374.service: Consumed 1.378s CPU time, 4.1M memory peak.
Apr 28 01:05:01.918699 kubelet[3120]: E0428 01:05:01.897834 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:01.939018 systemd[1]: session-25.scope: Deactivated successfully.
Apr 28 01:05:01.940711 systemd[1]: session-25.scope: Consumed 1.688s CPU time, 18.9M memory peak.
Apr 28 01:05:02.019928 systemd-logind[1614]: Session 25 logged out. Waiting for processes to exit.
Apr 28 01:05:02.140277 systemd-logind[1614]: Removed session 25.
Apr 28 01:05:02.931113 kubelet[3120]: E0428 01:05:02.930041 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:02.942002 kubelet[3120]: E0428 01:05:02.930016 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:08.116927 kubelet[3120]: E0428 01:05:08.116373 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:08.499167 systemd[1]: Started sshd@24-8-10.0.0.30:22-10.0.0.1:44948.service - OpenSSH per-connection server daemon (10.0.0.1:44948).
Apr 28 01:05:08.989659 kubelet[3120]: E0428 01:05:08.974085 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.581s"
Apr 28 01:05:10.856493 kubelet[3120]: E0428 01:05:10.482675 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:26.177773 sshd[4591]: Accepted publickey for core from 10.0.0.1 port 44948 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:05:26.616913 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:05:27.761686 containerd[1638]: time="2026-04-28T01:05:27.522551012Z" level=info msg="container event discarded" container=0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18 type=CONTAINER_STOPPED_EVENT
Apr 28 01:05:29.261859 containerd[1638]: time="2026-04-28T01:05:29.193834576Z" level=info msg="container event discarded" container=0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542 type=CONTAINER_STOPPED_EVENT
Apr 28 01:05:30.016200 systemd-logind[1614]: New session '26' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:05:31.468685 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 28 01:05:33.130435 containerd[1638]: time="2026-04-28T01:05:32.985181270Z" level=info msg="container event discarded" container=a074e8c266d442053cd1e3651d83d956d8b7264f094e1e4cf7ae772b73813a41 type=CONTAINER_DELETED_EVENT
Apr 28 01:05:35.613747 kubelet[3120]: E0428 01:05:33.788625 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:05:36.551198 containerd[1638]: time="2026-04-28T01:05:36.547817279Z" level=info msg="container event discarded" container=2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a type=CONTAINER_CREATED_EVENT
Apr 28 01:05:37.150039 containerd[1638]: time="2026-04-28T01:05:36.922939056Z" level=info msg="container event discarded" container=e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82 type=CONTAINER_CREATED_EVENT
Apr 28 01:05:44.372987 containerd[1638]: time="2026-04-28T01:05:43.951721623Z" level=info msg="container event discarded" container=2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a type=CONTAINER_STARTED_EVENT
Apr 28 01:05:54.627491 containerd[1638]: time="2026-04-28T01:05:54.593143074Z" level=info msg="container event discarded" container=e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82 type=CONTAINER_STARTED_EVENT
Apr 28 01:05:56.903348 kubelet[3120]: E0428 01:05:54.810563 3120 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-localhost.18aa5f7abfc69d8f kube-system 1053 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:c6bb8708a026256e82ca4c5631a78b5a,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:59:43 +0000 UTC,LastTimestamp:2026-04-28 01:05:02.930023248 +0000 UTC m=+341.671222378,Count:13,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:05:57.826001 kubelet[3120]: E0428 01:05:57.823748 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:58.807179 kubelet[3120]: E0428 01:05:58.662608 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 28 01:05:59.016752 sshd[4614]: Connection closed by 10.0.0.1 port 44948
Apr 28 01:05:59.064116 kubelet[3120]: E0428 01:05:59.057207 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="49.756s"
Apr 28 01:05:59.074186 sshd-session[4591]: pam_unix(sshd:session): session closed for user core
Apr 28 01:05:59.327021 systemd[1]: sshd@24-8-10.0.0.30:22-10.0.0.1:44948.service: Deactivated successfully.
Apr 28 01:05:59.391698 systemd[1]: sshd@24-8-10.0.0.30:22-10.0.0.1:44948.service: Consumed 5.614s CPU time, 4.1M memory peak.
Apr 28 01:05:59.507451 systemd[1]: session-26.scope: Deactivated successfully.
Apr 28 01:05:59.522043 systemd[1]: session-26.scope: Consumed 15.313s CPU time, 17.7M memory peak.
Apr 28 01:05:59.658622 systemd-logind[1614]: Session 26 logged out. Waiting for processes to exit.
Apr 28 01:05:59.803690 kubelet[3120]: E0428 01:05:59.793033 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:05:59.993793 systemd-logind[1614]: Removed session 26.
Apr 28 01:06:00.734475 kubelet[3120]: E0428 01:06:00.733723 3120 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 28 01:06:00.929143 kubelet[3120]: E0428 01:06:00.901452 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.844s"
Apr 28 01:06:00.951347 kubelet[3120]: E0428 01:06:00.951171 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:06:01.135336 kubelet[3120]: E0428 01:06:01.134426 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:06:01.158672 kubelet[3120]: E0428 01:06:01.155750 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:06:02.256827 kubelet[3120]: E0428 01:06:02.255528 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.352s"
Apr 28 01:06:03.071766 kubelet[3120]: E0428 01:06:03.068839 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:06:04.025749 kubelet[3120]: E0428 01:06:04.025565 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:06:04.167047 systemd[1]: Started sshd@25-9-10.0.0.30:22-10.0.0.1:54340.service - OpenSSH per-connection server daemon (10.0.0.1:54340).
Apr 28 01:06:05.005870 sshd[4662]: Accepted publickey for core from 10.0.0.1 port 54340 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:06:05.031916 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:06:05.200505 systemd-logind[1614]: New session '27' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:06:05.251151 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 28 01:06:05.992689 sshd[4667]: Connection closed by 10.0.0.1 port 54340
Apr 28 01:06:05.996633 sshd-session[4662]: pam_unix(sshd:session): session closed for user core
Apr 28 01:06:06.046913 systemd[1]: sshd@25-9-10.0.0.30:22-10.0.0.1:54340.service: Deactivated successfully.
Apr 28 01:06:06.101767 systemd[1]: session-27.scope: Deactivated successfully.
Apr 28 01:06:06.109585 systemd-logind[1614]: Session 27 logged out. Waiting for processes to exit.
Apr 28 01:06:06.126953 systemd-logind[1614]: Removed session 27.
Apr 28 01:06:11.083847 systemd[1]: Started sshd@26-10-10.0.0.30:22-10.0.0.1:59492.service - OpenSSH per-connection server daemon (10.0.0.1:59492).
Apr 28 01:06:11.372565 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 59492 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:06:11.376665 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:06:11.487078 systemd-logind[1614]: New session '28' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:06:11.522084 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 28 01:06:12.304558 sshd[4709]: Connection closed by 10.0.0.1 port 59492
Apr 28 01:06:12.314441 sshd-session[4705]: pam_unix(sshd:session): session closed for user core
Apr 28 01:06:12.320402 systemd[1]: sshd@26-10-10.0.0.30:22-10.0.0.1:59492.service: Deactivated successfully.
Apr 28 01:06:12.322961 systemd[1]: session-28.scope: Deactivated successfully.
Apr 28 01:06:12.395920 systemd-logind[1614]: Session 28 logged out. Waiting for processes to exit.
Apr 28 01:06:12.418480 systemd-logind[1614]: Removed session 28.
Apr 28 01:06:18.786695 systemd[1]: Started sshd@27-12295-10.0.0.30:22-10.0.0.1:59512.service - OpenSSH per-connection server daemon (10.0.0.1:59512).
Apr 28 01:06:27.624121 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 59512 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4
Apr 28 01:06:27.918247 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:06:30.724145 systemd-logind[1614]: New session '29' of user 'core' with class 'user' and type 'tty'.
Apr 28 01:06:30.783463 systemd[1]: cri-containerd-d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b.scope: Deactivated successfully.
Apr 28 01:06:30.784885 systemd[1]: cri-containerd-d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b.scope: Consumed 17.924s CPU time, 43M memory peak, 4K read from disk.
Apr 28 01:06:31.328669 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 28 01:06:33.307030 containerd[1638]: time="2026-04-28T01:06:33.306934477Z" level=info msg="received container exit event container_id:\"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\" id:\"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\" pid:4487 exit_status:1 exited_at:{seconds:1777338392 nanos:475787925}"
Apr 28 01:06:35.105721 kubelet[3120]: E0428 01:06:35.105562 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:06:35.906773 kubelet[3120]: E0428 01:06:35.906615 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.938s"
Apr 28 01:06:37.806430 kubelet[3120]: E0428 01:06:37.683846 3120 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 28 01:06:38.847067 containerd[1638]: time="2026-04-28T01:06:38.799172174Z" level=info msg="container event discarded" container=cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82 type=CONTAINER_CREATED_EVENT
Apr 28 01:06:38.899268 kubelet[3120]: E0428 01:06:38.898568 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.992s"
Apr 28 01:06:38.906960 sshd[4769]: Connection closed by 10.0.0.1 port 59512
Apr 28 01:06:38.912753 containerd[1638]: time="2026-04-28T01:06:38.897941665Z" level=info msg="container event discarded" container=cbe6c8a633d441637d3c42703e7f1d4cf58d01943ca03951530763820fdb4c82 type=CONTAINER_STARTED_EVENT
Apr 28 01:06:38.913643 sshd-session[4743]: pam_unix(sshd:session): session closed for user core
Apr 28 01:06:39.015945 systemd[1]: sshd@27-12295-10.0.0.30:22-10.0.0.1:59512.service: Deactivated successfully.
Apr 28 01:06:39.016562 systemd[1]: sshd@27-12295-10.0.0.30:22-10.0.0.1:59512.service: Consumed 2.654s CPU time, 4.1M memory peak.
Apr 28 01:06:39.084975 systemd[1]: session-29.scope: Deactivated successfully.
Apr 28 01:06:39.089335 systemd[1]: session-29.scope: Consumed 3.101s CPU time, 15.7M memory peak.
Apr 28 01:06:39.106329 systemd-logind[1614]: Session 29 logged out. Waiting for processes to exit.
Apr 28 01:06:39.185449 systemd-logind[1614]: Removed session 29.
Apr 28 01:06:39.421677 containerd[1638]: time="2026-04-28T01:06:39.418633460Z" level=info msg="container event discarded" container=30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3 type=CONTAINER_CREATED_EVENT
Apr 28 01:06:39.421677 containerd[1638]: time="2026-04-28T01:06:39.419402970Z" level=info msg="container event discarded" container=30333e1f9cbaa202417482f278ad65cb3394dddff24285ed4eafe81042b6e5e3 type=CONTAINER_STARTED_EVENT
Apr 28 01:06:39.421677 containerd[1638]: time="2026-04-28T01:06:39.419425649Z" level=info msg="container event discarded" container=aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995 type=CONTAINER_CREATED_EVENT
Apr 28 01:06:39.421677 containerd[1638]: time="2026-04-28T01:06:39.419433152Z" level=info msg="container event discarded" container=aa1c65d85dd92dae16407034e921fa1401a04e0595c6a021b139ed9a8576b995 type=CONTAINER_STARTED_EVENT
Apr 28 01:06:39.454088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b-rootfs.mount: Deactivated successfully.
Apr 28 01:06:40.097276 kubelet[3120]: I0428 01:06:40.096815 3120 scope.go:117] "RemoveContainer" containerID="d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b" Apr 28 01:06:40.097276 kubelet[3120]: E0428 01:06:40.097423 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:40.191548 kubelet[3120]: E0428 01:06:40.106204 3120 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 28 01:06:43.791657 kubelet[3120]: I0428 01:06:43.790531 3120 scope.go:117] "RemoveContainer" containerID="d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b" Apr 28 01:06:43.896494 kubelet[3120]: E0428 01:06:43.796706 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:43.896494 kubelet[3120]: E0428 01:06:43.800048 3120 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 28 01:06:44.347313 containerd[1638]: time="2026-04-28T01:06:44.346526413Z" level=info msg="container event discarded" container=ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564 type=CONTAINER_CREATED_EVENT Apr 28 01:06:44.859733 containerd[1638]: 
time="2026-04-28T01:06:44.808942306Z" level=info msg="container event discarded" container=ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564 type=CONTAINER_STARTED_EVENT Apr 28 01:06:45.198180 systemd[1]: Started sshd@28-10.0.0.30:22-10.0.0.1:54118.service - OpenSSH per-connection server daemon (10.0.0.1:54118). Apr 28 01:06:45.575737 containerd[1638]: time="2026-04-28T01:06:45.349900380Z" level=info msg="container event discarded" container=ddd2735569c043203f2b58bbb55372d4747dec61bb5bfc3eb17127cc5b9b1564 type=CONTAINER_STOPPED_EVENT Apr 28 01:06:47.814924 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 54118 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:06:47.850958 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:48.054296 systemd-logind[1614]: New session 30 of user core. Apr 28 01:06:48.098988 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 28 01:06:51.565042 kubelet[3120]: E0428 01:06:51.565009 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.68s" Apr 28 01:06:53.210967 sshd[4838]: Connection closed by 10.0.0.1 port 54118 Apr 28 01:06:53.246944 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:53.289655 systemd[1]: sshd@28-10.0.0.30:22-10.0.0.1:54118.service: Deactivated successfully. Apr 28 01:06:53.323966 systemd[1]: sshd@28-10.0.0.30:22-10.0.0.1:54118.service: Consumed 1.004s CPU time, 4.4M memory peak. Apr 28 01:06:53.472160 systemd[1]: session-30.scope: Deactivated successfully. Apr 28 01:06:53.476698 systemd[1]: session-30.scope: Consumed 3.440s CPU time, 16.3M memory peak. Apr 28 01:06:53.480065 systemd-logind[1614]: Session 30 logged out. Waiting for processes to exit. Apr 28 01:06:53.481687 systemd-logind[1614]: Removed session 30. 
Apr 28 01:06:55.912427 kubelet[3120]: I0428 01:06:55.912048 3120 scope.go:117] "RemoveContainer" containerID="d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b" Apr 28 01:06:55.983102 kubelet[3120]: E0428 01:06:55.981248 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:56.210057 containerd[1638]: time="2026-04-28T01:06:56.207647349Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for container name:\"kube-controller-manager\" attempt:4" Apr 28 01:06:56.614541 containerd[1638]: time="2026-04-28T01:06:56.613353773Z" level=info msg="Container a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c: CDI devices from CRI Config.CDIDevices: []" Apr 28 01:06:57.114713 containerd[1638]: time="2026-04-28T01:06:57.114448587Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for name:\"kube-controller-manager\" attempt:4 returns container id \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\"" Apr 28 01:06:57.165830 containerd[1638]: time="2026-04-28T01:06:57.165443379Z" level=info msg="StartContainer for \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\"" Apr 28 01:06:57.193767 containerd[1638]: time="2026-04-28T01:06:57.193559933Z" level=info msg="connecting to shim a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" protocol=ttrpc version=3 Apr 28 01:06:57.695187 systemd[1]: Started cri-containerd-a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c.scope - libcontainer container a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c. 
Apr 28 01:06:58.443960 systemd[1]: Started sshd@29-10.0.0.30:22-10.0.0.1:47408.service - OpenSSH per-connection server daemon (10.0.0.1:47408). Apr 28 01:06:58.461281 containerd[1638]: time="2026-04-28T01:06:58.461056860Z" level=info msg="StartContainer for \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" returns successfully" Apr 28 01:06:59.025186 sshd[4921]: Accepted publickey for core from 10.0.0.1 port 47408 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:06:59.153064 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:59.177874 systemd-logind[1614]: New session 31 of user core. Apr 28 01:06:59.191492 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 28 01:06:59.536007 kubelet[3120]: E0428 01:06:59.535662 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:59.762930 sshd[4927]: Connection closed by 10.0.0.1 port 47408 Apr 28 01:06:59.763774 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:59.768128 systemd[1]: sshd@29-10.0.0.30:22-10.0.0.1:47408.service: Deactivated successfully. Apr 28 01:06:59.778282 systemd[1]: session-31.scope: Deactivated successfully. Apr 28 01:06:59.779548 systemd-logind[1614]: Session 31 logged out. Waiting for processes to exit. Apr 28 01:06:59.780706 systemd-logind[1614]: Removed session 31. Apr 28 01:07:03.819537 kubelet[3120]: E0428 01:07:03.817662 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:07:04.880139 systemd[1]: Started sshd@30-10.0.0.30:22-10.0.0.1:37692.service - OpenSSH per-connection server daemon (10.0.0.1:37692). 
Apr 28 01:07:05.359189 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 37692 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:07:05.591127 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:07:05.908695 systemd-logind[1614]: New session 32 of user core. Apr 28 01:07:06.007664 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 28 01:07:08.912977 kubelet[3120]: E0428 01:07:08.912763 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s" Apr 28 01:07:09.296622 sshd[4966]: Connection closed by 10.0.0.1 port 37692 Apr 28 01:07:09.309989 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Apr 28 01:07:09.318650 systemd[1]: sshd@30-10.0.0.30:22-10.0.0.1:37692.service: Deactivated successfully. Apr 28 01:07:09.518802 systemd[1]: session-32.scope: Deactivated successfully. Apr 28 01:07:09.519980 systemd[1]: session-32.scope: Consumed 2.323s CPU time, 17.8M memory peak. Apr 28 01:07:09.673905 systemd-logind[1614]: Session 32 logged out. Waiting for processes to exit. Apr 28 01:07:09.812701 systemd-logind[1614]: Removed session 32. 
Apr 28 01:07:10.445103 containerd[1638]: time="2026-04-28T01:07:10.437833826Z" level=info msg="container event discarded" container=8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b type=CONTAINER_CREATED_EVENT Apr 28 01:07:13.122211 kubelet[3120]: E0428 01:07:13.116400 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:07:14.881699 kubelet[3120]: E0428 01:07:14.881018 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:07:15.080117 systemd[1]: Started sshd@31-10.0.0.30:22-10.0.0.1:36898.service - OpenSSH per-connection server daemon (10.0.0.1:36898). Apr 28 01:07:16.264108 containerd[1638]: time="2026-04-28T01:07:16.262300837Z" level=info msg="container event discarded" container=8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b type=CONTAINER_STARTED_EVENT Apr 28 01:07:16.892424 containerd[1638]: time="2026-04-28T01:07:16.887052033Z" level=info msg="container event discarded" container=8a792955ef70d791e4f2fc6cc9e0dcc7d4cd0adf8ac776faff31ea00585b695b type=CONTAINER_STOPPED_EVENT Apr 28 01:07:18.710309 containerd[1638]: time="2026-04-28T01:07:18.472376717Z" level=info msg="container event discarded" container=810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f type=CONTAINER_CREATED_EVENT Apr 28 01:07:21.381388 containerd[1638]: time="2026-04-28T01:07:21.259140149Z" level=info msg="container event discarded" container=810b487acaca79daaca1336bacf014dcc78f9c531bdc5cddd5c4d52467303e4f type=CONTAINER_STARTED_EVENT Apr 28 01:07:23.540840 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 36898 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:07:24.378996 sshd-session[5006]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Apr 28 01:07:26.211855 systemd-logind[1614]: New session 33 of user core. Apr 28 01:07:26.502359 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 28 01:07:38.266615 systemd[1]: cri-containerd-a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c.scope: Deactivated successfully. Apr 28 01:07:38.623911 systemd[1]: cri-containerd-a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c.scope: Consumed 4.582s CPU time, 18.4M memory peak. Apr 28 01:07:41.599080 containerd[1638]: time="2026-04-28T01:07:41.580114327Z" level=info msg="received container exit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:07:45.979080 systemd[1]: cri-containerd-307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc.scope: Deactivated successfully. Apr 28 01:07:46.371640 systemd[1]: cri-containerd-307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc.scope: Consumed 32.501s CPU time, 20.9M memory peak. 
Apr 28 01:07:48.977559 containerd[1638]: time="2026-04-28T01:07:48.968261418Z" level=error msg="ttrpc: received message on inactive stream" stream=23 Apr 28 01:07:49.866849 containerd[1638]: time="2026-04-28T01:07:49.504111599Z" level=error msg="get state for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="context deadline exceeded" Apr 28 01:07:50.230456 containerd[1638]: time="2026-04-28T01:07:49.889983456Z" level=warning msg="unknown status" status=0 Apr 28 01:07:51.441544 containerd[1638]: time="2026-04-28T01:07:51.439024238Z" level=info msg="received container exit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:07:52.976007 containerd[1638]: time="2026-04-28T01:07:52.896025535Z" level=error msg="get state for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="context deadline exceeded" Apr 28 01:07:52.976007 containerd[1638]: time="2026-04-28T01:07:52.972784136Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 28 01:07:53.301015 containerd[1638]: time="2026-04-28T01:07:52.979940711Z" level=warning msg="unknown status" status=0 Apr 28 01:07:53.301015 containerd[1638]: time="2026-04-28T01:07:53.033978761Z" level=error msg="failed to delete task" error="context deadline exceeded" id=a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c Apr 28 01:07:53.301015 containerd[1638]: time="2026-04-28T01:07:53.034515605Z" level=error msg="failed to handle container TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 28 
01:07:53.301015 containerd[1638]: time="2026-04-28T01:07:53.037117121Z" level=error msg="failed to drain init process a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 01:07:53.889406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c-rootfs.mount: Deactivated successfully. Apr 28 01:07:53.946428 containerd[1638]: time="2026-04-28T01:07:53.678186520Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 28 01:07:54.715024 containerd[1638]: time="2026-04-28T01:07:54.712572365Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:07:55.567997 kubelet[3120]: E0428 01:07:55.566229 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="39.599s" Apr 28 01:07:55.567997 kubelet[3120]: E0428 01:07:55.566315 3120 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 01:07:56.696936 kubelet[3120]: E0428 01:07:56.687732 3120 kubelet.go:2452] "Skipping pod synchronization" err="container runtime is down" Apr 28 01:08:01.877005 kubelet[3120]: E0428 01:08:01.876079 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:08:02.301890 containerd[1638]: time="2026-04-28T01:08:01.742802274Z" level=error msg="failed to handle container TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 
nanos:658378364}" error="failed to stop container: context deadline exceeded" Apr 28 01:08:03.664356 containerd[1638]: time="2026-04-28T01:08:03.379127834Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 28 01:08:03.821673 containerd[1638]: time="2026-04-28T01:08:03.668649667Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 28 01:08:04.955294 containerd[1638]: time="2026-04-28T01:08:04.954888806Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:08:05.283554 containerd[1638]: time="2026-04-28T01:08:05.155196995Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:08:05.321773 containerd[1638]: time="2026-04-28T01:08:05.319335248Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 28 01:08:05.503828 containerd[1638]: time="2026-04-28T01:08:05.365875012Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 28 01:08:06.271798 kubelet[3120]: E0428 01:08:06.267319 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 01:08:09.705943 sshd[5027]: Connection closed by 10.0.0.1 port 36898 Apr 28 01:08:09.832546 sshd-session[5006]: pam_unix(sshd:session): session 
closed for user core Apr 28 01:08:11.022851 systemd[1]: sshd@31-10.0.0.30:22-10.0.0.1:36898.service: Deactivated successfully. Apr 28 01:08:11.362834 systemd[1]: sshd@31-10.0.0.30:22-10.0.0.1:36898.service: Consumed 2.637s CPU time, 4.1M memory peak. Apr 28 01:08:12.346415 systemd[1]: session-33.scope: Deactivated successfully. Apr 28 01:08:12.394152 systemd[1]: session-33.scope: Consumed 15.657s CPU time, 17.8M memory peak. Apr 28 01:08:12.910663 systemd-logind[1614]: Session 33 logged out. Waiting for processes to exit. Apr 28 01:08:13.862467 systemd-logind[1614]: Removed session 33. Apr 28 01:08:16.733644 containerd[1638]: time="2026-04-28T01:08:16.624071947Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:08:17.314170 containerd[1638]: time="2026-04-28T01:08:16.730128850Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:08:17.809960 containerd[1638]: time="2026-04-28T01:08:16.943162129Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 28 01:08:18.655882 kubelet[3120]: E0428 01:08:16.738536 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:08:18.997167 containerd[1638]: time="2026-04-28T01:08:18.359188026Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 
28 01:08:18.995581 systemd[1]: Started sshd@32-10.0.0.30:22-10.0.0.1:45918.service - OpenSSH per-connection server daemon (10.0.0.1:45918). Apr 28 01:08:25.516898 kubelet[3120]: E0428 01:08:25.009172 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 01:08:27.084107 containerd[1638]: time="2026-04-28T01:08:27.062994021Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:08:27.235927 containerd[1638]: time="2026-04-28T01:08:27.235610185Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:08:27.487934 containerd[1638]: time="2026-04-28T01:08:27.239745008Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 28 01:08:27.678305 kubelet[3120]: E0428 01:08:27.676609 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:08:27.681909 containerd[1638]: time="2026-04-28T01:08:27.483007737Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 28 01:08:27.981386 kubelet[3120]: E0428 01:08:27.980876 3120 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice/cri-containerd-a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice/cri-containerd-307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc.scope\": RecentStats: unable to find data in memory cache]" Apr 28 01:08:30.299128 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 45918 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:08:30.451581 kubelet[3120]: E0428 01:08:30.299361 3120 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 28 01:08:30.443958 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:08:31.814612 systemd-logind[1614]: New session 34 of user core. Apr 28 01:08:32.439362 systemd[1]: Started session-34.scope - Session 34 of User core. 
Apr 28 01:08:35.517396 kubelet[3120]: E0428 01:08:35.517313 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="36.885s" Apr 28 01:08:35.555410 kubelet[3120]: E0428 01:08:35.555134 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:08:35.661338 containerd[1638]: time="2026-04-28T01:08:35.661051715Z" level=info msg="StopContainer for \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" with timeout 30 (s)" Apr 28 01:08:36.547712 containerd[1638]: time="2026-04-28T01:08:36.532066247Z" level=info msg="Stop container \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" with signal terminated" Apr 28 01:08:37.157960 kubelet[3120]: E0428 01:08:37.118147 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.531s" Apr 28 01:08:37.697781 containerd[1638]: time="2026-04-28T01:08:37.347979112Z" level=error msg="failed to delete task" error="context deadline exceeded" id=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc Apr 28 01:08:38.753145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc-rootfs.mount: Deactivated successfully. 
Apr 28 01:08:38.977917 containerd[1638]: time="2026-04-28T01:08:38.415091873Z" level=error msg="ttrpc: received message on inactive stream" stream=69 Apr 28 01:08:39.453394 containerd[1638]: time="2026-04-28T01:08:39.452681208Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:08:39.784525 containerd[1638]: time="2026-04-28T01:08:39.556994612Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:08:41.335500 sshd[5127]: Connection closed by 10.0.0.1 port 45918 Apr 28 01:08:41.360828 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Apr 28 01:08:41.715558 systemd[1]: sshd@32-10.0.0.30:22-10.0.0.1:45918.service: Deactivated successfully. Apr 28 01:08:41.836566 systemd[1]: sshd@32-10.0.0.30:22-10.0.0.1:45918.service: Consumed 3.222s CPU time, 4.4M memory peak. Apr 28 01:08:42.309066 systemd[1]: session-34.scope: Deactivated successfully. Apr 28 01:08:42.346842 containerd[1638]: time="2026-04-28T01:08:42.308167602Z" level=error msg="get state for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="context deadline exceeded" Apr 28 01:08:42.354633 systemd[1]: session-34.scope: Consumed 4.915s CPU time, 16.7M memory peak. 
Apr 28 01:08:42.379838 containerd[1638]: time="2026-04-28T01:08:42.361020262Z" level=warning msg="unknown status" status=0 Apr 28 01:08:42.512001 systemd-logind[1614]: Session 34 logged out. Waiting for processes to exit. Apr 28 01:08:42.695895 containerd[1638]: time="2026-04-28T01:08:42.549475602Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 28 01:08:42.718169 systemd-logind[1614]: Removed session 34. Apr 28 01:08:44.038143 kubelet[3120]: E0428 01:08:44.032207 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.752s" Apr 28 01:08:48.160075 containerd[1638]: time="2026-04-28T01:08:48.159615587Z" level=info msg="StopContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" with timeout 30 (s)" Apr 28 01:08:48.573003 systemd[1]: Started sshd@33-10.0.0.30:22-10.0.0.1:52160.service - OpenSSH per-connection server daemon (10.0.0.1:52160). Apr 28 01:08:49.304307 containerd[1638]: time="2026-04-28T01:08:49.206181344Z" level=info msg="Stop container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" with signal terminated" Apr 28 01:08:49.465172 containerd[1638]: time="2026-04-28T01:08:49.463506704Z" level=error msg="failed to delete task" error="context deadline exceeded" id=a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c Apr 28 01:08:49.465742 containerd[1638]: time="2026-04-28T01:08:49.465561066Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:08:49.466189 containerd[1638]: 
time="2026-04-28T01:08:49.465752925Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:08:52.344099 containerd[1638]: time="2026-04-28T01:08:52.339002128Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 28 01:08:53.808853 containerd[1638]: time="2026-04-28T01:08:53.756148020Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:08:54.228280 containerd[1638]: time="2026-04-28T01:08:53.805706788Z" level=warning msg="unknown status" status=0 Apr 28 01:08:56.950119 containerd[1638]: time="2026-04-28T01:08:56.785057283Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:08:57.642070 containerd[1638]: time="2026-04-28T01:08:56.949112786Z" level=warning msg="unknown status" status=0 Apr 28 01:08:57.696394 containerd[1638]: time="2026-04-28T01:08:57.660059788Z" level=info msg="container event discarded" container=2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a type=CONTAINER_STOPPED_EVENT Apr 28 01:08:58.676912 containerd[1638]: time="2026-04-28T01:08:58.600205102Z" level=info msg="container event discarded" container=e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82 type=CONTAINER_STOPPED_EVENT Apr 28 01:09:00.321376 containerd[1638]: time="2026-04-28T01:09:00.104084252Z" level=info msg="container event discarded" container=0bfcdb0a37d3f62d6d86ee7c1eafcd28d53fdc4b124373d470f4b26c5d209542 type=CONTAINER_DELETED_EVENT Apr 28 01:09:01.260193 containerd[1638]: time="2026-04-28T01:09:01.250157906Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" 
Apr 28 01:09:01.388727 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 52160 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:09:01.674510 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:09:01.988158 containerd[1638]: time="2026-04-28T01:09:01.834133489Z" level=warning msg="unknown status" status=0 Apr 28 01:09:04.170832 containerd[1638]: time="2026-04-28T01:09:04.057833032Z" level=info msg="container event discarded" container=d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b type=CONTAINER_CREATED_EVENT Apr 28 01:09:04.871112 systemd-logind[1614]: New session 35 of user core. Apr 28 01:09:05.550300 containerd[1638]: time="2026-04-28T01:09:05.193382791Z" level=error msg="failed to delete task" error="context deadline exceeded" id=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc Apr 28 01:09:06.059854 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 28 01:09:06.683082 containerd[1638]: time="2026-04-28T01:09:05.977772666Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:09:07.304851 kubelet[3120]: E0428 01:09:06.607080 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 01:09:09.065920 containerd[1638]: time="2026-04-28T01:09:08.957059768Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:09:10.382883 kubelet[3120]: E0428 01:09:10.380637 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="26.209s" Apr 28 01:09:10.989990 containerd[1638]: time="2026-04-28T01:09:10.966905586Z" level=info msg="container event discarded" container=0a3cb94c91645c00428bd7da14a0449fffb988671709d443a7f1179a66b82b18 type=CONTAINER_DELETED_EVENT Apr 28 01:09:13.597931 containerd[1638]: time="2026-04-28T01:09:13.195157909Z" level=info msg="Kill container \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\"" Apr 28 01:09:14.160140 containerd[1638]: time="2026-04-28T01:09:13.864020179Z" level=info msg="container event discarded" container=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc type=CONTAINER_CREATED_EVENT Apr 28 01:09:20.316993 containerd[1638]: 
time="2026-04-28T01:09:18.171469034Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:09:22.583065 containerd[1638]: time="2026-04-28T01:09:21.710663430Z" level=error msg="ttrpc: received message on inactive stream" stream=73 Apr 28 01:09:23.193530 containerd[1638]: time="2026-04-28T01:09:22.152600459Z" level=error msg="failed to drain init process 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 01:09:23.976674 kubelet[3120]: E0428 01:09:22.283018 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 01:09:24.883537 containerd[1638]: time="2026-04-28T01:09:24.083050930Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 28 01:09:25.529746 containerd[1638]: time="2026-04-28T01:09:24.969777681Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:09:26.710321 containerd[1638]: time="2026-04-28T01:09:26.289051856Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 28 01:09:27.360691 containerd[1638]: time="2026-04-28T01:09:27.356077393Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 28 
01:09:27.509073 containerd[1638]: time="2026-04-28T01:09:27.384723123Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 28 01:09:42.737086 containerd[1638]: time="2026-04-28T01:09:42.089087333Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 28 01:09:45.696309 kubelet[3120]: E0428 01:09:44.578370 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 01:09:47.097140 sshd[5227]: Connection closed by 10.0.0.1 port 52160 Apr 28 01:09:47.266181 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Apr 28 01:09:48.364208 systemd[1]: sshd@33-8201-10.0.0.30:22-10.0.0.1:52160.service: Deactivated successfully. Apr 28 01:09:48.964071 systemd[1]: sshd@33-8201-10.0.0.30:22-10.0.0.1:52160.service: Consumed 3.339s CPU time, 4.1M memory peak. Apr 28 01:09:49.271809 containerd[1638]: time="2026-04-28T01:09:48.958632147Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 28 01:09:50.040124 systemd[1]: session-35.scope: Deactivated successfully. Apr 28 01:09:50.283866 systemd[1]: session-35.scope: Consumed 18.482s CPU time, 17.2M memory peak. Apr 28 01:09:50.468867 containerd[1638]: time="2026-04-28T01:09:49.681967394Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:09:50.794850 systemd-logind[1614]: Session 35 logged out. Waiting for processes to exit. 
Apr 28 01:09:51.534406 containerd[1638]: time="2026-04-28T01:09:50.080089918Z" level=error msg="ttrpc: received message on inactive stream" stream=91 Apr 28 01:09:51.569684 containerd[1638]: time="2026-04-28T01:09:51.069963817Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:09:51.543163 systemd-logind[1614]: Removed session 35. Apr 28 01:09:51.926016 kubelet[3120]: E0428 01:09:51.894668 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:09:53.895908 systemd[1]: Started sshd@34-4102-10.0.0.30:22-10.0.0.1:54480.service - OpenSSH per-connection server daemon (10.0.0.1:54480). Apr 28 01:09:57.159722 containerd[1638]: time="2026-04-28T01:09:57.086128656Z" level=info msg="Kill container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\"" Apr 28 01:10:01.886610 containerd[1638]: time="2026-04-28T01:10:01.418904059Z" level=info msg="container event discarded" container=2123c0c810e33437ad48772fb52f3649439d2fd73353cbf527e2dd4e5e4e496a type=CONTAINER_DELETED_EVENT Apr 28 01:10:03.229610 containerd[1638]: time="2026-04-28T01:10:02.891195587Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 28 01:10:03.999758 containerd[1638]: time="2026-04-28T01:10:03.968698838Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context 
deadline exceeded" Apr 28 01:10:04.419675 containerd[1638]: time="2026-04-28T01:10:04.154725517Z" level=info msg="container event discarded" container=d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b type=CONTAINER_STARTED_EVENT Apr 28 01:10:04.865023 containerd[1638]: time="2026-04-28T01:10:04.465015206Z" level=info msg="container event discarded" container=e07d40a321b16af047cd22b828dacfef5fea20b28bc199460cc840f763742e82 type=CONTAINER_DELETED_EVENT Apr 28 01:10:04.865023 containerd[1638]: time="2026-04-28T01:10:04.471973033Z" level=info msg="container event discarded" container=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc type=CONTAINER_STARTED_EVENT Apr 28 01:10:04.865023 containerd[1638]: time="2026-04-28T01:10:04.289997490Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 28 01:10:07.001752 containerd[1638]: time="2026-04-28T01:10:06.920029199Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:10:12.069753 containerd[1638]: time="2026-04-28T01:10:12.066991833Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:10:12.393761 containerd[1638]: time="2026-04-28T01:10:12.315589098Z" level=warning msg="unknown status" status=0 Apr 28 01:10:13.455996 kubelet[3120]: E0428 01:10:13.445108 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 01:10:17.255952 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 54480 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:10:17.780829 
containerd[1638]: time="2026-04-28T01:10:17.606180893Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:10:17.780829 containerd[1638]: time="2026-04-28T01:10:17.697990660Z" level=warning msg="unknown status" status=0 Apr 28 01:10:17.737847 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:10:20.506176 systemd-logind[1614]: New session '36' of user 'core' with class 'user' and type 'tty'. Apr 28 01:10:20.903100 containerd[1638]: time="2026-04-28T01:10:20.892074955Z" level=error msg="ttrpc: received message on inactive stream" stream=97 Apr 28 01:10:20.903100 containerd[1638]: time="2026-04-28T01:10:20.892902813Z" level=error msg="ttrpc: received message on inactive stream" stream=95 Apr 28 01:10:21.325920 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 28 01:10:21.897800 containerd[1638]: time="2026-04-28T01:10:21.666028225Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:10:21.897800 containerd[1638]: time="2026-04-28T01:10:21.787513142Z" level=warning msg="unknown status" status=0 Apr 28 01:10:22.016714 containerd[1638]: time="2026-04-28T01:10:21.996075744Z" level=error msg="ttrpc: received message on inactive stream" stream=99 Apr 28 01:10:23.189676 containerd[1638]: time="2026-04-28T01:10:23.037034553Z" level=error msg="failed to drain init process 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 01:10:23.308832 containerd[1638]: time="2026-04-28T01:10:23.183843101Z" level=error msg="failed to delete task" error="context deadline exceeded" id=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc Apr 28 01:10:23.423047 containerd[1638]: time="2026-04-28T01:10:23.422584090Z" level=error msg="Failed to 
handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:10:25.371920 containerd[1638]: time="2026-04-28T01:10:25.091201478Z" level=error msg="ttrpc: received message on inactive stream" stream=101 Apr 28 01:10:37.105376 containerd[1638]: time="2026-04-28T01:10:37.102180388Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:10:41.966052 kubelet[3120]: E0428 01:10:40.768879 3120 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 01:10:46.216351 kubelet[3120]: I0428 01:10:44.882187 3120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 28 01:10:47.200181 containerd[1638]: time="2026-04-28T01:10:47.198686591Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 28 01:10:48.999694 containerd[1638]: time="2026-04-28T01:10:48.556077487Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 28 01:10:49.147024 containerd[1638]: time="2026-04-28T01:10:48.606338938Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 
exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:10:56.655583 containerd[1638]: time="2026-04-28T01:10:56.421067389Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:10:57.611054 kubelet[3120]: E0428 01:10:57.349921 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 28 01:11:02.670106 kubelet[3120]: E0428 01:11:02.658163 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m45.804s" Apr 28 01:11:05.661001 kubelet[3120]: E0428 01:11:05.659440 3120 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" Apr 28 01:11:07.367654 containerd[1638]: time="2026-04-28T01:11:06.964929746Z" level=error msg="StopContainer for \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" to be killed: wait container \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\": context canceled" Apr 28 01:11:07.768336 kubelet[3120]: E0428 01:11:07.367019 3120 kuberuntime_container.go:871] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" containerName="kube-controller-manager" containerID="containerd://a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" gracePeriod=30 Apr 28 01:11:07.768336 kubelet[3120]: E0428 01:11:07.367646 3120 kuberuntime_manager.go:1248] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c"} pod="kube-system/kube-controller-manager-localhost" Apr 28 01:11:07.768336 kubelet[3120]: E0428 01:11:07.368033 3120 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 28 01:11:08.014923 containerd[1638]: time="2026-04-28T01:11:07.367129132Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:11:08.896072 containerd[1638]: time="2026-04-28T01:11:07.054370292Z" level=error msg="ttrpc: received message on inactive stream" stream=107 Apr 28 01:11:11.644426 kubelet[3120]: I0428 01:11:11.639637 3120 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has 
prevented the request from succeeding" Apr 28 01:11:12.143860 sshd[5320]: Connection closed by 10.0.0.1 port 54480 Apr 28 01:11:11.906093 sshd-session[5283]: pam_unix(sshd:session): session closed for user core Apr 28 01:11:12.390420 kubelet[3120]: I0428 01:11:11.640992 3120 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:12.570412 kubelet[3120]: I0428 01:11:11.641315 3120 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:12.710070 kubelet[3120]: E0428 01:11:11.171721 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 28 01:11:13.168748 systemd[1]: sshd@34-4102-10.0.0.30:22-10.0.0.1:54480.service: Deactivated successfully. Apr 28 01:11:13.398638 systemd[1]: sshd@34-4102-10.0.0.30:22-10.0.0.1:54480.service: Consumed 5.236s CPU time, 4.3M memory peak. 
Apr 28 01:11:13.877157 kubelet[3120]: E0428 01:11:13.456871 3120 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feddee20-9702-4973-89cf-a0c25fa3413c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-28T01:09:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-28T01:09:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"250m\\\"},\\\"containerID\\\":\\\"containerd://9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.34.7\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"250m\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-28T00:56:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-localhost\": Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost/status\": http2: client connection lost" pod="kube-system/kube-apiserver-localhost" Apr 28 01:11:14.306313 systemd[1]: session-36.scope: Deactivated 
successfully. Apr 28 01:11:14.358555 systemd[1]: session-36.scope: Consumed 23.722s CPU time, 17.9M memory peak. Apr 28 01:11:15.184072 kubelet[3120]: I0428 01:11:15.093185 3120 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:15.903830 systemd-logind[1614]: Session 36 logged out. Waiting for processes to exit. Apr 28 01:11:16.462156 systemd-logind[1614]: Removed session 36. Apr 28 01:11:17.491130 containerd[1638]: time="2026-04-28T01:11:17.476699659Z" level=error msg="StopContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" to be killed: wait container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\": context canceled" Apr 28 01:11:18.342322 kubelet[3120]: I0428 01:11:17.040819 3120 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:18.544853 kubelet[3120]: I0428 01:11:17.190450 3120 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:19.418811 systemd[1]: Started sshd@35-12298-10.0.0.30:22-10.0.0.1:59442.service - OpenSSH per-connection server daemon (10.0.0.1:59442). 
Apr 28 01:11:20.257851 kubelet[3120]: E0428 01:11:17.380194 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:11:20.808102 kubelet[3120]: E0428 01:11:20.668119 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:11:22.374173 kubelet[3120]: E0428 01:11:22.267939 3120 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" Apr 28 01:11:22.374173 kubelet[3120]: E0428 01:11:22.366959 3120 kuberuntime_container.go:871] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" containerName="kube-scheduler" 
containerID="containerd://307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" gracePeriod=30 Apr 28 01:11:22.374173 kubelet[3120]: E0428 01:11:22.367481 3120 kuberuntime_manager.go:1248] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc"} pod="kube-system/kube-scheduler-localhost" Apr 28 01:11:22.374173 kubelet[3120]: E0428 01:11:22.367700 3120 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 01:11:23.972634 kubelet[3120]: I0428 01:11:21.911478 3120 reflector.go:571] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:24.303955 kubelet[3120]: I0428 01:11:15.517143 3120 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:31.753197 kubelet[3120]: I0428 01:11:25.719988 3120 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:11:36.516671 kubelet[3120]: E0428 01:11:32.484919 3120 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 28 01:11:37.559089 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 59442 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:11:38.974109 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:11:39.997095 containerd[1638]: time="2026-04-28T01:11:39.981268202Z" level=info msg="container event discarded" container=d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b type=CONTAINER_STOPPED_EVENT Apr 28 01:11:52.223772 systemd-logind[1614]: New session '37' of user 'core' with class 'user' and type 'tty'. Apr 28 01:11:54.156756 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 28 01:11:54.887893 containerd[1638]: time="2026-04-28T01:11:54.584784377Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:11:58.165992 containerd[1638]: time="2026-04-28T01:11:58.053092723Z" level=info msg="container event discarded" container=a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c type=CONTAINER_CREATED_EVENT Apr 28 01:11:59.045133 containerd[1638]: time="2026-04-28T01:11:58.165036186Z" level=error msg="get state for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="context deadline exceeded" Apr 28 01:11:59.224027 containerd[1638]: time="2026-04-28T01:11:59.136785162Z" level=warning msg="unknown status" status=0 Apr 28 01:11:59.668059 containerd[1638]: time="2026-04-28T01:11:59.048516116Z" level=info msg="container event discarded" container=a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c 
type=CONTAINER_STARTED_EVENT Apr 28 01:12:02.082666 kubelet[3120]: E0428 01:12:02.044459 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:12:02.784578 kubelet[3120]: E0428 01:12:01.162400 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:12:02.937103 kubelet[3120]: E0428 01:12:02.927206 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:12:04.701020 kubelet[3120]: E0428 01:12:04.643847 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Apr 28 01:12:04.862003 containerd[1638]: time="2026-04-28T01:12:04.861179256Z" level=error msg="ttrpc: received message on inactive stream" stream=95 Apr 28 01:12:05.019796 containerd[1638]: time="2026-04-28T01:12:04.859111679Z" level=error msg="get state for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="context deadline exceeded" Apr 28 01:12:05.095070 containerd[1638]: time="2026-04-28T01:12:05.033131797Z" level=warning msg="unknown 
status" status=0 Apr 28 01:12:05.191030 kubelet[3120]: E0428 01:12:04.766509 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:12:05.191030 kubelet[3120]: E0428 01:12:02.381947 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:12:05.191030 kubelet[3120]: E0428 01:12:03.766139 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:12:05.790282 containerd[1638]: time="2026-04-28T01:12:05.788496642Z" level=error msg="failed to delete task" error="context deadline exceeded" 
id=a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c Apr 28 01:12:05.855509 containerd[1638]: time="2026-04-28T01:12:05.791917861Z" level=error msg="failed to drain init process a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 01:12:06.294808 kubelet[3120]: E0428 01:12:04.511145 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:12:06.508070 containerd[1638]: time="2026-04-28T01:12:06.489094425Z" level=error msg="ttrpc: received message on inactive stream" stream=97 Apr 28 01:12:06.943635 containerd[1638]: time="2026-04-28T01:12:06.931007149Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:12:07.466914 kubelet[3120]: E0428 01:12:07.259147 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:12:13.051121 containerd[1638]: time="2026-04-28T01:12:12.955060143Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" 
id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:12:13.982803 kubelet[3120]: E0428 01:12:13.981682 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:12:14.813975 kubelet[3120]: E0428 01:12:14.169940 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 01:12:15.515083 kubelet[3120]: E0428 01:12:15.513999 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 01:12:22.974626 containerd[1638]: time="2026-04-28T01:12:22.960959897Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:12:23.760120 containerd[1638]: time="2026-04-28T01:12:23.676945112Z" level=error msg="ttrpc: received message on inactive stream" stream=119 Apr 28 
01:12:23.879622 containerd[1638]: time="2026-04-28T01:12:23.793040476Z" level=error msg="ttrpc: received message on inactive stream" stream=121 Apr 28 01:12:26.539744 sshd[5413]: Connection closed by 10.0.0.1 port 59442 Apr 28 01:12:26.588818 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Apr 28 01:12:28.067286 systemd[1]: sshd@35-12298-10.0.0.30:22-10.0.0.1:59442.service: Deactivated successfully. Apr 28 01:12:28.489062 systemd[1]: sshd@35-12298-10.0.0.30:22-10.0.0.1:59442.service: Consumed 5.585s CPU time, 4.3M memory peak. Apr 28 01:12:29.612100 kubelet[3120]: E0428 01:12:28.071006 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:12:30.158167 systemd[1]: session-37.scope: Deactivated successfully. Apr 28 01:12:30.468802 systemd[1]: session-37.scope: Consumed 14.475s CPU time, 17.8M memory peak. Apr 28 01:12:31.476088 systemd-logind[1614]: Session 37 logged out. Waiting for processes to exit. Apr 28 01:12:35.080786 kubelet[3120]: E0428 01:12:27.302103 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Apr 28 01:12:35.989304 systemd-logind[1614]: Removed session 37. Apr 28 01:12:36.711157 systemd[1]: Started sshd@36-12299-10.0.0.30:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580). 
Apr 28 01:13:05.677198 kubelet[3120]: E0428 01:13:05.669494 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:13:06.532306 kubelet[3120]: E0428 01:13:06.336967 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 28 01:13:09.654974 kubelet[3120]: E0428 01:12:51.073037 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:13:10.357850 kubelet[3120]: E0428 01:13:08.191560 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:13:15.896914 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:13:17.992703 sshd-session[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:13:25.118816 kubelet[3120]: E0428 01:13:12.922157 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:13:26.184071 systemd-logind[1614]: New session '38' of user 'core' with class 'user' and type 'tty'. Apr 28 01:13:29.098766 systemd[1]: Started session-38.scope - Session 38 of User core. 
Apr 28 01:13:31.932797 kubelet[3120]: E0428 01:13:27.200794 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 01:13:35.191138 kubelet[3120]: E0428 01:13:35.190128 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:13:35.195661 kubelet[3120]: E0428 01:13:35.195041 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:13:40.623168 kubelet[3120]: E0428 01:13:37.277624 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:13:41.945766 kubelet[3120]: E0428 01:13:36.214803 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:13:55.000945 kubelet[3120]: E0428 01:13:54.919893 3120 
status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:13:56.285763 kubelet[3120]: E0428 01:13:53.466184 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:12:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:12:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:12:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:12:26Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.30:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 01:14:05.350123 kubelet[3120]: E0428 01:13:57.996970 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 01:14:15.798995 kubelet[3120]: E0428 01:14:15.663895 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:14:17.691639 containerd[1638]: time="2026-04-28T01:14:17.581920908Z" level=info msg="TaskExit event 
container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:14:25.489935 kubelet[3120]: E0428 01:14:20.117917 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3m13.63s" Apr 28 01:14:31.472101 kubelet[3120]: E0428 01:14:31.470409 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 01:14:32.587913 kubelet[3120]: E0428 01:14:28.555000 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:14:40.198923 kubelet[3120]: E0428 01:14:28.910079 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:14:41.142633 containerd[1638]: time="2026-04-28T01:14:40.790929234Z" level=error msg="get state for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="context deadline exceeded" Apr 28 01:14:41.880089 containerd[1638]: time="2026-04-28T01:14:41.860986387Z" level=warning msg="unknown status" status=0 Apr 28 01:14:42.884871 kubelet[3120]: E0428 01:14:37.604167 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 01:14:46.980127 containerd[1638]: time="2026-04-28T01:14:45.744993848Z" level=error msg="get state for f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a" error="context deadline exceeded" Apr 28 01:14:48.012583 containerd[1638]: time="2026-04-28T01:14:45.771210125Z" level=error msg="ttrpc: received message on inactive stream" stream=93 Apr 28 01:14:48.839950 containerd[1638]: time="2026-04-28T01:14:47.685075609Z" level=warning msg="unknown status" status=0 Apr 28 01:14:57.428020 kubelet[3120]: E0428 01:14:57.403019 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:14:57.575763 containerd[1638]: time="2026-04-28T01:14:57.158821140Z" level=error msg="ttrpc: received message on inactive stream" stream=103 Apr 28 01:14:58.694046 containerd[1638]: time="2026-04-28T01:14:58.254206104Z" level=error msg="ttrpc: received message on inactive stream" stream=105 Apr 28 
01:15:00.371984 containerd[1638]: time="2026-04-28T01:14:59.011096658Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:15:01.197892 kubelet[3120]: E0428 01:14:58.400351 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 01:15:02.647637 sshd[5494]: Connection closed by 10.0.0.1 port 45580 Apr 28 01:15:02.850067 sshd-session[5463]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:05.149477 containerd[1638]: time="2026-04-28T01:15:04.126694231Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:15:05.862049 systemd[1]: sshd@36-12299-10.0.0.30:22-10.0.0.1:45580.service: Deactivated successfully. Apr 28 01:15:06.236994 systemd[1]: sshd@36-12299-10.0.0.30:22-10.0.0.1:45580.service: Consumed 11.028s CPU time, 4.2M memory peak. Apr 28 01:15:07.491065 systemd[1]: session-38.scope: Deactivated successfully. Apr 28 01:15:07.796905 systemd[1]: session-38.scope: Consumed 47.274s CPU time, 17.9M memory peak. Apr 28 01:15:09.213800 systemd-logind[1614]: Session 38 logged out. Waiting for processes to exit. 
Apr 28 01:15:11.993131 containerd[1638]: time="2026-04-28T01:15:11.361697672Z" level=error msg="ttrpc: received message on inactive stream" stream=123 Apr 28 01:15:12.522935 containerd[1638]: time="2026-04-28T01:15:11.992718971Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:15:12.609691 containerd[1638]: time="2026-04-28T01:15:12.604108426Z" level=warning msg="unknown status" status=0 Apr 28 01:15:14.659120 systemd[1]: Started sshd@37-4103-10.0.0.30:22-10.0.0.1:37148.service - OpenSSH per-connection server daemon (10.0.0.1:37148). Apr 28 01:15:16.085861 systemd-logind[1614]: Removed session 38. Apr 28 01:15:16.898810 kubelet[3120]: E0428 01:15:06.082029 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:15:19.498918 containerd[1638]: time="2026-04-28T01:15:19.496687736Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:15:19.498918 containerd[1638]: time="2026-04-28T01:15:19.497403369Z" level=warning msg="unknown status" status=0 Apr 28 01:15:20.304688 containerd[1638]: time="2026-04-28T01:15:20.149133174Z" level=error msg="failed to drain init process 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 01:15:20.694195 containerd[1638]: time="2026-04-28T01:15:20.644613897Z" level=error msg="ttrpc: received message on inactive stream" stream=129 Apr 28 01:15:21.256793 containerd[1638]: time="2026-04-28T01:15:20.662869687Z" level=error msg="failed to delete task" error="context deadline exceeded" 
id=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc Apr 28 01:15:21.272974 containerd[1638]: time="2026-04-28T01:15:21.265782867Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:15:26.805127 kubelet[3120]: E0428 01:15:26.715623 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:15:47.956585 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 37148 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:15:49.505361 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:54.984774 systemd-logind[1614]: New session '39' of user 'core' with class 'user' and type 'tty'. Apr 28 01:15:59.579889 systemd[1]: Started session-39.scope - Session 39 of User core. 
Apr 28 01:16:06.796138 containerd[1638]: time="2026-04-28T01:16:06.749954341Z" level=info msg="StopContainer for \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" with timeout 30 (s)" Apr 28 01:16:07.726831 kubelet[3120]: E0428 01:15:53.113371 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:16:13.000757 containerd[1638]: time="2026-04-28T01:16:12.916388642Z" level=info msg="Stop container \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" with signal terminated" Apr 28 01:16:29.467886 kubelet[3120]: E0428 01:16:29.453108 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 01:17:06.693539 kubelet[3120]: E0428 01:16:44.991826 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:17:23.805175 kubelet[3120]: E0428 01:17:23.798746 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 28 01:17:26.524053 kubelet[3120]: E0428 01:17:13.162105 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" 
event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:17:30.188630 kubelet[3120]: E0428 01:17:26.334798 3120 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 01:17:33.548011 kubelet[3120]: E0428 01:17:28.447740 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:17:35.998818 kubelet[3120]: E0428 01:17:31.663022 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:17:38.084190 kubelet[3120]: E0428 01:17:36.725197 3120 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="" Apr 28 01:17:39.559737 kubelet[3120]: E0428 01:17:37.455501 3120 
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:17:45.361305 kubelet[3120]: E0428 01:17:42.146570 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 01:17:46.377957 containerd[1638]: time="2026-04-28T01:17:43.451491623Z" level=error msg="StopContainer for \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" failed" error="rpc error: code = Unknown desc = failed to stop container \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\": context canceled" Apr 28 01:17:48.114968 kubelet[3120]: E0428 01:17:41.706033 3120 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16" Apr 28 01:17:50.206436 kubelet[3120]: E0428 01:17:50.188503 3120 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:17:51.654792 kubelet[3120]: E0428 01:17:51.637368 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:17:53.616648 containerd[1638]: time="2026-04-28T01:17:50.827031757Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 28 01:17:54.872921 kubelet[3120]: E0428 
01:17:53.148040 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:18:03.198181 kubelet[3120]: E0428 01:18:03.176680 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:18:13.665728 kubelet[3120]: E0428 01:17:59.642919 3120 kuberuntime_container.go:871] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-apiserver-localhost" podUID="cf2ebce56cde410c1f7401213757c4d8" containerName="kube-apiserver" containerID="containerd://9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16" gracePeriod=30 Apr 28 01:18:20.735005 kubelet[3120]: E0428 01:18:12.595841 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 01:18:26.249957 kubelet[3120]: E0428 01:18:24.975211 3120 kuberuntime_manager.go:1248] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-apiserver" containerID={"Type":"containerd","ID":"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16"} pod="kube-system/kube-apiserver-localhost" Apr 28 01:18:26.536871 kubelet[3120]: E0428 01:18:26.529715 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:18:28.885361 kubelet[3120]: E0428 01:18:28.876142 3120 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="cf2ebce56cde410c1f7401213757c4d8" Apr 28 01:18:35.475555 sshd[5573]: Connection closed by 10.0.0.1 port 37148 Apr 28 01:18:35.690764 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Apr 28 01:18:38.351983 systemd[1]: sshd@37-4103-10.0.0.30:22-10.0.0.1:37148.service: Deactivated successfully. Apr 28 01:18:38.763013 systemd[1]: sshd@37-4103-10.0.0.30:22-10.0.0.1:37148.service: Consumed 10.111s CPU time, 4.1M memory peak. Apr 28 01:18:40.689889 systemd[1]: session-39.scope: Deactivated successfully. Apr 28 01:18:41.063809 systemd[1]: session-39.scope: Consumed 1min 18.740s CPU time, 19.5M memory peak. Apr 28 01:18:43.319090 systemd-logind[1614]: Session 39 logged out. Waiting for processes to exit. Apr 28 01:18:47.581834 kubelet[3120]: E0428 01:18:47.575337 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:18:50.266871 kubelet[3120]: E0428 01:18:37.198163 3120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get rootFs stats: failed to get rootFs info: cannot find filesystem info for device \"/dev/vda9\"" Apr 28 01:18:57.239197 systemd[1]: Started sshd@38-12-10.0.0.30:22-10.0.0.1:49090.service - OpenSSH per-connection server daemon (10.0.0.1:49090). Apr 28 01:18:58.403185 systemd-logind[1614]: Removed session 39. 
Apr 28 01:19:06.147053 kubelet[3120]: E0428 01:19:02.981067 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:19:10.284835 kubelet[3120]: E0428 01:19:06.279622 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:19:17.194930 containerd[1638]: time="2026-04-28T01:19:17.172910863Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:19:34.682654 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 49090 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:19:38.025675 sshd-session[5664]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:19:42.070476 containerd[1638]: time="2026-04-28T01:19:42.069903050Z" level=error msg="Failed to handle backOff event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510} for a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 28 01:19:43.941704 containerd[1638]: time="2026-04-28T01:19:43.213450934Z" level=error msg="ttrpc: received message on inactive stream" stream=111 Apr 28 01:19:44.590139 containerd[1638]: time="2026-04-28T01:19:43.787727951Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:19:46.306707 containerd[1638]: time="2026-04-28T01:19:44.328588716Z" level=error msg="ttrpc: received message on inactive stream" stream=113 Apr 28 01:19:48.703909 systemd-logind[1614]: New session '40' of user 'core' with class 'user' and type 'tty'. Apr 28 01:19:51.052655 containerd[1638]: time="2026-04-28T01:19:51.041488514Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:19:52.038201 containerd[1638]: time="2026-04-28T01:19:51.116629709Z" level=warning msg="unknown status" status=0 Apr 28 01:19:52.308473 systemd[1]: Started session-40.scope - Session 40 of User core. 
Apr 28 01:19:53.911654 kubelet[3120]: E0428 01:19:41.784209 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:19:59.504873 kubelet[3120]: I0428 01:19:59.496913 3120 request.go:752] "Waited before sending request" delay="4.955575199s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177" Apr 28 01:20:14.577629 kubelet[3120]: E0428 01:20:06.225927 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:20:16.652676 kubelet[3120]: E0428 01:20:08.789722 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:20:20.080041 containerd[1638]: time="2026-04-28T01:20:19.476380675Z" level=error msg="get state for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="context deadline exceeded" Apr 28 01:20:20.080041 containerd[1638]: time="2026-04-28T01:20:20.060107300Z" level=warning msg="unknown status" status=0 Apr 28 01:20:20.953739 containerd[1638]: time="2026-04-28T01:20:20.418514648Z" level=error msg="ttrpc: received message on inactive stream" stream=135 Apr 28 01:20:20.953739 containerd[1638]: time="2026-04-28T01:20:20.952296506Z" level=error msg="failed to delete task" error="context deadline exceeded" 
id=307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc Apr 28 01:20:22.857766 containerd[1638]: time="2026-04-28T01:20:21.243747197Z" level=error msg="failed to drain init process 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 28 01:20:23.846845 containerd[1638]: time="2026-04-28T01:20:23.200069243Z" level=error msg="Failed to handle backOff event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364} for 307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 28 01:20:25.057712 containerd[1638]: time="2026-04-28T01:20:23.492208962Z" level=error msg="ttrpc: received message on inactive stream" stream=137 Apr 28 01:20:40.608165 containerd[1638]: time="2026-04-28T01:20:39.405730436Z" level=info msg="StopContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" with timeout 30 (s)" Apr 28 01:20:43.246830 containerd[1638]: time="2026-04-28T01:20:43.245522536Z" level=info msg="Skipping the sending of signal terminated to container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" because a prior stop with timeout>0 request already sent the signal" Apr 28 01:20:46.995902 kubelet[3120]: E0428 01:20:46.988651 3120 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="" Apr 28 01:21:03.883787 kubelet[3120]: E0428 01:21:03.877758 3120 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
containerID="307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" Apr 28 01:21:12.128772 kubelet[3120]: E0428 01:20:50.266092 3120 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:21:13.977049 containerd[1638]: time="2026-04-28T01:21:13.961060062Z" level=info msg="Kill container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\"" Apr 28 01:21:18.250568 kubelet[3120]: E0428 01:21:15.553743 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 01:21:23.403817 kubelet[3120]: E0428 01:21:10.842599 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:21:25.202864 containerd[1638]: time="2026-04-28T01:21:24.154716181Z" level=error msg="StopContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" failed" error="rpc error: code = Unknown desc = failed to kill container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\": context canceled" Apr 28 01:21:29.502767 kubelet[3120]: E0428 01:21:29.436888 3120 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="" Apr 28 01:21:31.006917 kubelet[3120]: E0428 01:21:16.527824 3120 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:21:33.910580 kubelet[3120]: E0428 01:21:15.517681 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to 
sync configmap cache: timed out waiting for the condition Apr 28 01:21:36.484605 kubelet[3120]: E0428 01:21:36.385710 3120 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:21:40.654786 kubelet[3120]: E0428 01:21:37.468764 3120 kuberuntime_image.go:108] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:21:44.166606 kubelet[3120]: E0428 01:20:40.570062 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:21:44.896523 kubelet[3120]: E0428 01:21:29.174753 3120 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="" Apr 28 01:21:49.712070 kubelet[3120]: E0428 01:21:13.393735 3120 kuberuntime_container.go:871] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" containerName="kube-scheduler" containerID="containerd://307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc" gracePeriod=30 Apr 28 01:21:50.871074 kubelet[3120]: E0428 01:21:47.175810 3120 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="" Apr 28 01:21:51.419202 kubelet[3120]: E0428 01:21:51.386907 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:21:52.084667 kubelet[3120]: E0428 01:21:44.460901 3120 kuberuntime_container.go:545] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:21:54.183189 kubelet[3120]: E0428 01:21:44.875579 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:21:57.699159 kubelet[3120]: E0428 01:21:52.708777 3120 kuberuntime_container.go:545] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:22:02.317511 kubelet[3120]: E0428 01:22:00.809284 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:21:51.651029677 +0000 UTC m=+1350.392228872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:22:03.604164 kubelet[3120]: I0428 01:21:50.667949 3120 image_gc_manager.go:230] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:22:03.977945 kubelet[3120]: I0428 01:22:00.847958 3120 image_gc_manager.go:222] "Failed to monitor images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:22:08.609424 kubelet[3120]: E0428 01:21:57.046177 3120 kuberuntime_manager.go:1248] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc"} pod="kube-system/kube-scheduler-localhost" Apr 28 01:22:19.875827 kubelet[3120]: E0428 01:22:10.085839 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:22:20.693598 kubelet[3120]: E0428 01:22:19.968399 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:22:20.376983944 +0000 UTC m=+1379.118183065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:22:26.098741 kubelet[3120]: E0428 01:22:21.163581 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:22:27.569388 kubelet[3120]: E0428 01:22:06.666527 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:22:37.912935 kubelet[3120]: E0428 01:22:32.152907 3120 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:22:40.029128 sshd[5692]: Connection closed by 10.0.0.1 port 49090 Apr 28 01:22:40.405843 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Apr 28 01:22:42.790409 systemd[1]: sshd@38-10.0.0.30:22-10.0.0.1:49090.service: Deactivated successfully. Apr 28 01:22:43.096959 systemd[1]: sshd@38-10.0.0.30:22-10.0.0.1:49090.service: Consumed 10.752s CPU time, 4.2M memory peak. Apr 28 01:22:44.305570 systemd[1]: session-40.scope: Deactivated successfully. Apr 28 01:22:44.454069 systemd[1]: session-40.scope: Consumed 1min 27.785s CPU time, 17.7M memory peak. Apr 28 01:22:45.737129 systemd-logind[1614]: Session 40 logged out. Waiting for processes to exit.
Apr 28 01:22:46.415561 containerd[1638]: time="2026-04-28T01:22:45.567974485Z" level=error msg="ttrpc: received message on inactive stream" stream=141 Apr 28 01:22:47.066829 kubelet[3120]: E0428 01:22:34.751656 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:22:47.339798 kubelet[3120]: E0428 01:22:32.880696 3120 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6" Apr 28 01:22:50.729708 kubelet[3120]: I0428 01:22:13.156980 3120 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-28T01:22:10Z","lastTransitionTime":"2026-04-28T01:22:10Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 8m17.018346425s ago; threshold is 3m0s]"} Apr 28 01:22:51.726777 systemd[1]: Started sshd@39-10.0.0.30:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082). Apr 28 01:22:52.692017 systemd-logind[1614]: Removed session 40.
Apr 28 01:22:58.976090 kubelet[3120]: E0428 01:22:40.546956 3120 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:02.766756 kubelet[3120]: E0428 01:23:02.758947 3120 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:03.910141 kubelet[3120]: E0428 01:22:54.273936 3120 projected.go:196] Error preparing data for projected volume kube-api-access-mtpbb for pod kube-system/kube-proxy-d52vp: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:04.331066 kubelet[3120]: E0428 01:23:04.258025 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:23:04.627167 kubelet[3120]: E0428 01:23:00.130534 3120 kubelet.go:1583] "Container garbage collection failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:23:07.374420 kubelet[3120]: E0428 01:23:07.373804 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:23:09.149412 kubelet[3120]: E0428 01:23:07.374179 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:23:10.356634 kubelet[3120]: E0428 01:23:10.355287 3120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:10.854651804 +0000 UTC m=+1429.595850923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:10.356634 kubelet[3120]: E0428 01:23:10.355454 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:11.355444864 +0000 UTC m=+1430.096643986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:10.356634 kubelet[3120]: E0428 01:23:10.355526 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:10.855520668 +0000 UTC m=+1429.596719790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtpbb" (UniqueName: "kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:10.883937 kubelet[3120]: E0428 01:23:10.609407 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 01:23:10.883937 kubelet[3120]: E0428 01:23:10.770086 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:23:15.002837 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:23:15.382457 kubelet[3120]: E0428 01:23:15.257106 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:15.382457 kubelet[3120]: E0428 01:23:15.268572 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:16.380155 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:23:17.381845 kubelet[3120]: E0428 01:23:17.368734 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:23:17.381845 kubelet[3120]: E0428 01:23:17.369344 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:23:17.381845 kubelet[3120]: E0428 01:23:14.873960 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:23:18.483658 kubelet[3120]: E0428 01:23:18.480823 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 
nodeName:}" failed. No retries permitted until 2026-04-28 01:23:18.38513192 +0000 UTC m=+1437.126331034 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:19.044004 containerd[1638]: time="2026-04-28T01:23:18.637029919Z" level=info msg="StopContainer for \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" with timeout 30 (s)" Apr 28 01:23:19.726135 systemd-logind[1614]: New session '41' of user 'core' with class 'user' and type 'tty'. Apr 28 01:23:19.823511 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 28 01:23:20.417927 containerd[1638]: time="2026-04-28T01:23:20.417461108Z" level=info msg="Skipping the sending of signal terminated to container \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" because a prior stop with timeout>0 request already sent the signal" Apr 28 01:23:20.495072 kubelet[3120]: E0428 01:23:20.487647 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:22:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:22:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:22:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-28T01:22:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-28T01:22:10Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not 
healthy: pleg was last seen active 8m17.018346425s ago; threshold is 3m0s]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.30:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 28 01:23:21.305136 kubelet[3120]: E0428 01:23:21.297406 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:23:21.511911 kubelet[3120]: E0428 01:23:21.511745 3120 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:21.632827 kubelet[3120]: E0428 01:23:21.627533 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:22.62690877 +0000 UTC m=+1441.368107894 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:21.942346 kubelet[3120]: E0428 01:23:21.624053 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:21.943757 kubelet[3120]: E0428 01:23:21.943716 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:21.954541 kubelet[3120]: E0428 01:23:21.949017 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:23.947420514 +0000 UTC m=+1442.688619648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:22.449344 kubelet[3120]: E0428 01:23:22.184592 3120 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:22.582930 kubelet[3120]: E0428 01:23:22.511282 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. 
No retries permitted until 2026-04-28 01:23:24.510518919 +0000 UTC m=+1443.251718063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:22.868268 kubelet[3120]: E0428 01:23:22.688876 3120 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:22.868268 kubelet[3120]: E0428 01:23:22.865196 3120 projected.go:196] Error preparing data for projected volume kube-api-access-mtpbb for pod kube-system/kube-proxy-d52vp: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:22.868268 kubelet[3120]: E0428 01:23:22.865519 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:23.865470212 +0000 UTC m=+1442.606669340 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtpbb" (UniqueName: "kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:24.280975 containerd[1638]: time="2026-04-28T01:23:23.793887466Z" level=info msg="StopContainer for \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" with timeout 30 (s)" Apr 28 01:23:26.858735 kubelet[3120]: E0428 01:23:25.684732 3120 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:27.584003 containerd[1638]: time="2026-04-28T01:23:26.739534833Z" level=info msg="Skipping the sending of signal terminated to container \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" because a prior stop with timeout>0 request already sent the signal" Apr 28 01:23:35.683632 kubelet[3120]: E0428 01:23:35.666694 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:35.748211576 +0000 UTC m=+1454.489410699 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:39.245988 kubelet[3120]: E0428 01:23:37.087513 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:23:41.667051 kubelet[3120]: E0428 01:23:41.656770 3120 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:44.277144 kubelet[3120]: E0428 01:23:44.266764 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:45.842013 kubelet[3120]: E0428 01:23:45.834776 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:23:48.603997 kubelet[3120]: E0428 01:23:47.261064 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:49.899478 kubelet[3120]: E0428 01:23:49.899059 3120 projected.go:196] Error preparing data for projected volume kube-api-access-mtpbb for pod kube-system/kube-proxy-d52vp: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:50.847112 
containerd[1638]: time="2026-04-28T01:23:50.554722550Z" level=info msg="Kill container \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\"" Apr 28 01:23:52.275107 kubelet[3120]: E0428 01:23:50.208907 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:23:53.194627 kubelet[3120]: E0428 01:23:51.665660 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:23:53.948335 kubelet[3120]: E0428 01:23:53.019592 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 01:23:54.091165 kubelet[3120]: E0428 01:23:53.020179 3120 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:54.091165 kubelet[3120]: E0428 01:23:53.101832 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:23:54.195998 kubelet[3120]: E0428 01:23:53.206933 3120 reflector.go:205] 
"Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:23:55.107304 kubelet[3120]: E0428 01:23:54.293281 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:57.571712823 +0000 UTC m=+1476.312911944 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:55.386799 kubelet[3120]: E0428 01:23:53.318037 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:23:55.386799 kubelet[3120]: E0428 01:23:53.767049 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:23:55.532120 kubelet[3120]: E0428 01:23:54.848725 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" podUID="cf2ebce56cde410c1f7401213757c4d8" 
pod="kube-system/kube-apiserver-localhost" Apr 28 01:23:56.212285 kubelet[3120]: E0428 01:23:55.421089 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 01:23:56.632797 kubelet[3120]: E0428 01:23:55.869896 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:23:57.104785819 +0000 UTC m=+1475.845984936 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mtpbb" (UniqueName: "kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:23:57.312432 containerd[1638]: time="2026-04-28T01:23:57.122091334Z" level=info msg="Kill container \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\"" Apr 28 01:23:58.167866 kubelet[3120]: E0428 01:23:58.162716 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 01:23:59.541735 kubelet[3120]: E0428 01:23:57.784729 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:01.667818175 +0000 UTC m=+1480.409017303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:02.179201 kubelet[3120]: E0428 01:24:02.159861 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:02.179201 kubelet[3120]: E0428 01:24:02.161164 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:02.179201 kubelet[3120]: E0428 01:24:02.175307 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:10.170682757 +0000 UTC m=+1488.911959519 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:04.372083 kubelet[3120]: E0428 01:24:02.518153 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 +0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:24:04.786885 sshd[5805]: Connection closed by 10.0.0.1 port 54082 Apr 28 01:24:04.701024 sshd-session[5775]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:06.225979 systemd[1]: sshd@39-10.0.0.30:22-10.0.0.1:54082.service: Deactivated successfully. Apr 28 01:24:06.351303 systemd[1]: sshd@39-10.0.0.30:22-10.0.0.1:54082.service: Consumed 6.791s CPU time, 4.1M memory peak. 
Apr 28 01:24:06.625171 kubelet[3120]: E0428 01:24:03.178000 3120 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:07.226647 systemd[1]: session-41.scope: Deactivated successfully. Apr 28 01:24:07.314477 systemd[1]: session-41.scope: Consumed 21.750s CPU time, 15.5M memory peak. Apr 28 01:24:07.963934 kubelet[3120]: E0428 01:24:07.810175 3120 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:08.087356 systemd-logind[1614]: Session 41 logged out. Waiting for processes to exit. Apr 28 01:24:08.654650 systemd-logind[1614]: Removed session 41. Apr 28 01:24:09.437807 kubelet[3120]: E0428 01:24:09.436959 3120 projected.go:196] Error preparing data for projected volume kube-api-access-mtpbb for pod kube-system/kube-proxy-d52vp: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:10.608432 kubelet[3120]: E0428 01:24:10.513014 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:13.416920769 +0000 UTC m=+1492.158119895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:10.993945 kubelet[3120]: E0428 01:24:10.692209 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. 
No retries permitted until 2026-04-28 01:24:14.691529287 +0000 UTC m=+1493.432728412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-mtpbb" (UniqueName: "kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:11.629122 kubelet[3120]: E0428 01:24:10.713035 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:24:12.392979 kubelet[3120]: E0428 01:24:11.022503 3120 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:12.966076 systemd[1]: cri-containerd-9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16.scope: Deactivated successfully. Apr 28 01:24:13.093802 systemd[1]: cri-containerd-9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16.scope: Consumed 23min 58.865s CPU time, 226.8M memory peak. Apr 28 01:24:13.289206 containerd[1638]: time="2026-04-28T01:24:12.864566131Z" level=info msg="received container exit event container_id:\"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" id:\"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" pid:2897 exit_status:137 exited_at:{seconds:1777339451 nanos:462874617}" Apr 28 01:24:13.913212 kubelet[3120]: E0428 01:24:13.912195 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9m14.379s" Apr 28 01:24:14.208798 systemd[1]: Started sshd@40-10.0.0.30:22-10.0.0.1:40388.service - OpenSSH per-connection server daemon (10.0.0.1:40388). 
Apr 28 01:24:14.909786 kubelet[3120]: E0428 01:24:14.902346 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": context deadline exceeded - error from a previous attempt: read tcp 10.0.0.30:56256->10.0.0.30:6443: read: connection reset by peer" Apr 28 01:24:15.363063 kubelet[3120]: E0428 01:24:15.192342 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:15.363063 kubelet[3120]: E0428 01:24:15.354914 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:15.363063 kubelet[3120]: E0428 01:24:15.355038 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:22.974650963 +0000 UTC m=+1501.715850085 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:15.530555 kubelet[3120]: E0428 01:24:15.223904 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="context deadline exceeded - error from a previous attempt: dial tcp 10.0.0.30:6443: connect: connection reset by peer" interval="7s" Apr 28 01:24:15.530555 kubelet[3120]: E0428 01:24:15.114141 3120 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:15.651683 kubelet[3120]: E0428 01:24:15.645141 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: dial tcp 10.0.0.30:6443: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:24:16.581944 kubelet[3120]: E0428 01:24:15.531123 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:24:16.961021 kubelet[3120]: E0428 01:24:16.916911 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.30:6443: connect: connection refused" podUID="cf2ebce56cde410c1f7401213757c4d8" 
pod="kube-system/kube-apiserver-localhost" Apr 28 01:24:17.431045 kubelet[3120]: E0428 01:24:17.430713 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.30:56264->10.0.0.30:6443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:24:17.652697 kubelet[3120]: E0428 01:24:17.439101 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:32.609414011 +0000 UTC m=+1511.350613201 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:17.891049 kubelet[3120]: E0428 01:24:17.879617 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:25.873771924 +0000 UTC m=+1504.614971049 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:18.360322 kubelet[3120]: E0428 01:24:17.663491 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.30:56250->10.0.0.30:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 01:24:18.371819 kubelet[3120]: E0428 01:24:18.361198 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.30:56278->10.0.0.30:6443: read: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:24:18.650094 kubelet[3120]: E0428 01:24:18.419592 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: write tcp 10.0.0.30:58000->10.0.0.30:6443: write: connection reset by peer" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 01:24:18.847481 kubelet[3120]: E0428 01:24:18.420081 3120 reflector.go:205] "Failed to watch" 
err="failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1177\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: dial tcp 10.0.0.30:6443: connect: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:24:19.006683 kubelet[3120]: E0428 01:24:18.511163 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1197\": dial tcp 10.0.0.30:6443: connect: connection refused - error from a previous attempt: dial tcp 10.0.0.30:6443: connect: connection reset by peer" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" Apr 28 01:24:19.023204 containerd[1638]: time="2026-04-28T01:24:18.998414571Z" level=info msg="StopContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" with timeout 30 (s)" Apr 28 01:24:19.068594 kubelet[3120]: E0428 01:24:18.512105 3120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18aa5ff715f618ea\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18aa5ff715f618ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:cf2ebce56cde410c1f7401213757c4d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:(combined from similar events): Liveness probe failed: Get \"https://10.0.0.30:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:08:37.781092586 +0000 UTC m=+556.522291707,LastTimestamp:2026-04-28 01:09:43.191062136 
+0000 UTC m=+621.932261254,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}" Apr 28 01:24:19.099076 kubelet[3120]: E0428 01:24:18.512383 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.30:6443: connect: connection refused" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:24:19.111272 containerd[1638]: time="2026-04-28T01:24:19.073723515Z" level=info msg="Skipping the sending of signal terminated to container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" because a prior stop with timeout>0 request already sent the signal" Apr 28 01:24:19.117905 kubelet[3120]: E0428 01:24:18.512637 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" Apr 28 01:24:19.200760 kubelet[3120]: E0428 01:24:18.553125 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:19.226156 kubelet[3120]: E0428 01:24:19.224309 3120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.30:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" Apr 28 01:24:19.226156 kubelet[3120]: E0428 01:24:19.224379 3120 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Apr 28 01:24:19.429864 kubelet[3120]: E0428 01:24:19.427343 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.109s" Apr 28 
01:24:19.676944 kubelet[3120]: E0428 01:24:19.675607 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:19.685763 kubelet[3120]: E0428 01:24:19.684994 3120 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:19.685763 kubelet[3120]: E0428 01:24:19.685025 3120 projected.go:196] Error preparing data for projected volume kube-api-access-mtpbb for pod kube-system/kube-proxy-d52vp: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:19.686034 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:24:19.686560 kubelet[3120]: E0428 01:24:19.686500 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:27.686476108 +0000 UTC m=+1506.427675232 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtpbb" (UniqueName: "kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:19.690849 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:24:20.064388 kubelet[3120]: E0428 01:24:20.057485 3120 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.30:6443: connect: connection refused" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:24:20.064388 kubelet[3120]: E0428 01:24:20.062973 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1213\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:24:20.149966 systemd-logind[1614]: New session '42' of user 'core' with class 'user' and type 'tty'. Apr 28 01:24:20.233107 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 28 01:24:20.477404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16-rootfs.mount: Deactivated successfully. 
Apr 28 01:24:20.514107 containerd[1638]: time="2026-04-28T01:24:20.513921847Z" level=info msg="StopContainer for \"9583e013b7af737c09ec0f5a95821eabdf38511a63754015720f2c75fbb31b16\" returns successfully" Apr 28 01:24:20.619775 kubelet[3120]: E0428 01:24:20.615146 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:20.902612 containerd[1638]: time="2026-04-28T01:24:20.893495336Z" level=info msg="CreateContainer within sandbox \"e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805\" for container name:\"kube-apiserver\" attempt:1" Apr 28 01:24:20.948680 kubelet[3120]: E0428 01:24:20.948306 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 01:24:21.405355 containerd[1638]: time="2026-04-28T01:24:21.400856573Z" level=info msg="Container 522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4: CDI devices from CRI Config.CDIDevices: []" Apr 28 01:24:21.713912 containerd[1638]: time="2026-04-28T01:24:21.700786555Z" level=info msg="CreateContainer within sandbox \"e1005760f423d43ce44d06a9d4b7e9ce0b0129a1949c0222355e980d83e1c805\" for name:\"kube-apiserver\" attempt:1 returns container id \"522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4\"" Apr 28 01:24:21.808956 containerd[1638]: time="2026-04-28T01:24:21.808782694Z" level=info msg="StartContainer for \"522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4\"" Apr 28 01:24:21.828633 containerd[1638]: time="2026-04-28T01:24:21.827708863Z" level=info msg="connecting to shim 
522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4" address="unix:///run/containerd/s/a6f6fe89b2fd9ed7e76b21a90d817ac6b4bb652aa72b24cd6c021d9b1372cd4c" protocol=ttrpc version=3 Apr 28 01:24:22.184038 kubelet[3120]: E0428 01:24:22.182107 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:24:22.300445 kubelet[3120]: E0428 01:24:22.300298 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1177\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:24:22.644505 kubelet[3120]: E0428 01:24:22.644472 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 28 01:24:22.718017 kubelet[3120]: E0428 01:24:22.717695 3120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="7s" Apr 28 01:24:22.740398 kubelet[3120]: E0428 01:24:22.740055 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1206\": dial tcp 
10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:24:23.173816 systemd[1]: Started cri-containerd-522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4.scope - libcontainer container 522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4. Apr 28 01:24:23.553289 kubelet[3120]: E0428 01:24:23.549581 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:24:23.862813 sshd[5923]: Connection closed by 10.0.0.1 port 40388 Apr 28 01:24:23.880539 sshd-session[5884]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:23.890384 systemd[1]: sshd@40-10.0.0.30:22-10.0.0.1:40388.service: Deactivated successfully. Apr 28 01:24:23.890726 systemd[1]: sshd@40-10.0.0.30:22-10.0.0.1:40388.service: Consumed 2.833s CPU time, 4.1M memory peak. Apr 28 01:24:24.224709 kubelet[3120]: E0428 01:24:24.224397 3120 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:24.249709 kubelet[3120]: E0428 01:24:24.248669 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:40.224958745 +0000 UTC m=+1518.966157860 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-flannel-cfg") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:24.254009 systemd[1]: session-42.scope: Deactivated successfully. Apr 28 01:24:24.254907 systemd[1]: session-42.scope: Consumed 2.845s CPU time, 15.8M memory peak. Apr 28 01:24:24.257538 systemd-logind[1614]: Session 42 logged out. Waiting for processes to exit. Apr 28 01:24:24.258739 systemd-logind[1614]: Removed session 42. Apr 28 01:24:24.965075 containerd[1638]: time="2026-04-28T01:24:24.964796231Z" level=info msg="StartContainer for \"522dee74516d35316df2be35144a34811bc3c9296f3f326c6ebac97842bf00f4\" returns successfully" Apr 28 01:24:25.029464 kubelet[3120]: E0428 01:24:25.028667 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.30:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1214\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 28 01:24:25.747388 kubelet[3120]: E0428 01:24:25.747049 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:26.853985 kubelet[3120]: E0428 01:24:26.853655 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:27.006909 kubelet[3120]: E0428 01:24:26.999180 3120 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:27.015864 kubelet[3120]: E0428 01:24:27.015055 3120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:43.012828558 +0000 UTC m=+1521.754027677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-proxy") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:27.866373 kubelet[3120]: E0428 01:24:27.865890 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:28.731154 kubelet[3120]: E0428 01:24:28.730379 3120 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:28.731154 kubelet[3120]: E0428 01:24:28.730902 3120 projected.go:196] Error preparing data for projected volume kube-api-access-mtpbb for pod kube-system/kube-proxy-d52vp: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:28.734983 kubelet[3120]: E0428 01:24:28.731761 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb podName:0119e170-e6c1-4e77-9131-085c2b9d7bc5 nodeName:}" failed. No retries permitted until 2026-04-28 01:24:44.731484999 +0000 UTC m=+1523.472684129 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtpbb" (UniqueName: "kubernetes.io/projected/0119e170-e6c1-4e77-9131-085c2b9d7bc5-kube-api-access-mtpbb") pod "kube-proxy-d52vp" (UID: "0119e170-e6c1-4e77-9131-085c2b9d7bc5") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:28.925662 systemd[1]: Started sshd@41-4104-10.0.0.30:22-10.0.0.1:54148.service - OpenSSH per-connection server daemon (10.0.0.1:54148). Apr 28 01:24:29.043985 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 54148 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:24:29.044905 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:24:29.092115 systemd-logind[1614]: New session '43' of user 'core' with class 'user' and type 'tty'. Apr 28 01:24:29.099522 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 28 01:24:30.933807 sshd[6000]: Connection closed by 10.0.0.1 port 54148 Apr 28 01:24:30.944362 sshd-session[5996]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:31.023868 systemd[1]: sshd@41-4104-10.0.0.30:22-10.0.0.1:54148.service: Deactivated successfully. 
Apr 28 01:24:31.071323 kubelet[3120]: E0428 01:24:31.024663 3120 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-localhost\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:24:31.071323 kubelet[3120]: E0428 01:24:31.024759 3120 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 28 01:24:31.121209 systemd[1]: session-43.scope: Deactivated successfully. Apr 28 01:24:31.138017 systemd-logind[1614]: Session 43 logged out. Waiting for processes to exit. Apr 28 01:24:31.234963 systemd-logind[1614]: Removed session 43. 
Apr 28 01:24:31.309473 kubelet[3120]: E0428 01:24:31.309137 3120 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-localhost\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="cf2ebce56cde410c1f7401213757c4d8" pod="kube-system/kube-apiserver-localhost" Apr 28 01:24:32.845162 kubelet[3120]: E0428 01:24:32.843510 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:33.685037 kubelet[3120]: E0428 01:24:33.683992 3120 projected.go:291] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:33.698460 kubelet[3120]: E0428 01:24:33.686813 3120 projected.go:196] Error preparing data for projected volume kube-api-access-vnx8j for pod kube-flannel/kube-flannel-ds-tpgdg: failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:33.702141 kubelet[3120]: E0428 01:24:33.700976 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j podName:61b03599-9c01-4d11-8ba6-0d4d43ff2bf4 nodeName:}" failed. No retries permitted until 2026-04-28 01:25:05.696897078 +0000 UTC m=+1544.438096198 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-vnx8j" (UniqueName: "kubernetes.io/projected/61b03599-9c01-4d11-8ba6-0d4d43ff2bf4-kube-api-access-vnx8j") pod "kube-flannel-ds-tpgdg" (UID: "61b03599-9c01-4d11-8ba6-0d4d43ff2bf4") : failed to sync configmap cache: timed out waiting for the condition Apr 28 01:24:36.192994 systemd[1]: Started sshd@42-8202-10.0.0.30:22-10.0.0.1:57718.service - OpenSSH per-connection server daemon (10.0.0.1:57718). 
Apr 28 01:24:36.687135 sshd[6052]: Accepted publickey for core from 10.0.0.1 port 57718 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:24:36.755924 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:24:37.174566 systemd-logind[1614]: New session '44' of user 'core' with class 'user' and type 'tty'. Apr 28 01:24:37.206443 systemd[1]: Started session-44.scope - Session 44 of User core. Apr 28 01:24:38.778293 sshd[6060]: Connection closed by 10.0.0.1 port 57718 Apr 28 01:24:38.776922 sshd-session[6052]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:38.794923 systemd[1]: sshd@42-8202-10.0.0.30:22-10.0.0.1:57718.service: Deactivated successfully. Apr 28 01:24:38.849131 systemd[1]: session-44.scope: Deactivated successfully. Apr 28 01:24:38.951694 systemd-logind[1614]: Session 44 logged out. Waiting for processes to exit. Apr 28 01:24:38.955083 systemd-logind[1614]: Removed session 44. Apr 28 01:24:42.415559 containerd[1638]: time="2026-04-28T01:24:42.414590534Z" level=info msg="TaskExit event container_id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" id:\"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" pid:4889 exit_status:1 exited_at:{seconds:1777338460 nanos:446013510}" Apr 28 01:24:42.781334 containerd[1638]: time="2026-04-28T01:24:42.776499681Z" level=info msg="StopContainer for \"a095478f240cbd6fb021b879525084d9c008460587333289a0c742e05b868b1c\" returns successfully" Apr 28 01:24:42.794589 kubelet[3120]: E0428 01:24:42.785841 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:43.044289 containerd[1638]: time="2026-04-28T01:24:43.040746797Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for container 
name:\"kube-controller-manager\" attempt:5" Apr 28 01:24:43.044626 kubelet[3120]: E0428 01:24:43.044181 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:43.104408 kubelet[3120]: I0428 01:24:43.099513 3120 scope.go:117] "RemoveContainer" containerID="d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b" Apr 28 01:24:43.138394 containerd[1638]: time="2026-04-28T01:24:43.137903451Z" level=info msg="RemoveContainer for \"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\"" Apr 28 01:24:43.145852 containerd[1638]: time="2026-04-28T01:24:43.145689699Z" level=info msg="Container d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8: CDI devices from CRI Config.CDIDevices: []" Apr 28 01:24:43.198052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106394794.mount: Deactivated successfully. Apr 28 01:24:43.205421 containerd[1638]: time="2026-04-28T01:24:43.205245331Z" level=info msg="RemoveContainer for \"d99ba52aef3e5e6e745953a55875c7ff1597d941380a59b0e4b2280ea02ced9b\" returns successfully" Apr 28 01:24:43.209294 containerd[1638]: time="2026-04-28T01:24:43.209207650Z" level=info msg="CreateContainer within sandbox \"f99c09fa9bcebfcc9ce36f1e690fb119057571c499a894190b3ac3be9c525e0a\" for name:\"kube-controller-manager\" attempt:5 returns container id \"d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8\"" Apr 28 01:24:43.216271 containerd[1638]: time="2026-04-28T01:24:43.216010636Z" level=info msg="StartContainer for \"d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8\"" Apr 28 01:24:43.243576 containerd[1638]: time="2026-04-28T01:24:43.239931803Z" level=info msg="connecting to shim d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8" address="unix:///run/containerd/s/aafd21b6e43b3c36323942c08fd3df2bb03ac8c2cdd619376b1243457cecf8d1" 
protocol=ttrpc version=3 Apr 28 01:24:43.333138 systemd[1]: Started cri-containerd-d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8.scope - libcontainer container d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8. Apr 28 01:24:43.715061 containerd[1638]: time="2026-04-28T01:24:43.714843578Z" level=info msg="StartContainer for \"d11312894f3ea680c1fb61fb5b0384728cc773c61a24a6be3ba49bbe9c4ac5e8\" returns successfully" Apr 28 01:24:43.897705 systemd[1]: Started sshd@43-14-10.0.0.30:22-10.0.0.1:38802.service - OpenSSH per-connection server daemon (10.0.0.1:38802). Apr 28 01:24:44.049270 sshd[6141]: Accepted publickey for core from 10.0.0.1 port 38802 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:24:44.049818 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:24:44.155929 systemd-logind[1614]: New session '45' of user 'core' with class 'user' and type 'tty'. Apr 28 01:24:44.255123 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 28 01:24:44.354624 kubelet[3120]: E0428 01:24:44.352921 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:44.355677 kubelet[3120]: E0428 01:24:44.355318 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:44.954791 sshd[6145]: Connection closed by 10.0.0.1 port 38802 Apr 28 01:24:44.955706 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:44.982490 systemd[1]: sshd@43-14-10.0.0.30:22-10.0.0.1:38802.service: Deactivated successfully. Apr 28 01:24:44.996378 systemd[1]: session-45.scope: Deactivated successfully. Apr 28 01:24:44.999764 systemd-logind[1614]: Session 45 logged out. Waiting for processes to exit. 
Apr 28 01:24:45.000786 systemd-logind[1614]: Removed session 45. Apr 28 01:24:45.406889 kubelet[3120]: E0428 01:24:45.406473 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:49.113671 containerd[1638]: time="2026-04-28T01:24:49.110408155Z" level=info msg="Kill container \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\"" Apr 28 01:24:50.155586 systemd[1]: Started sshd@44-4105-10.0.0.30:22-10.0.0.1:59904.service - OpenSSH per-connection server daemon (10.0.0.1:59904). Apr 28 01:24:50.311177 systemd[1]: Started systemd-sysupdate.service - Automatic System Update. Apr 28 01:24:50.408381 systemd-sysupdate[6191]: Discovering installed instances… Apr 28 01:24:50.409490 systemd-sysupdate[6191]: Discovering available instances… Apr 28 01:24:50.409523 systemd-sysupdate[6191]: Determining installed update sets… Apr 28 01:24:50.409526 systemd-sysupdate[6191]: Determining available update sets… Apr 28 01:24:50.409530 systemd-sysupdate[6191]: No update needed. Apr 28 01:24:50.415801 systemd[1]: systemd-sysupdate.service: Deactivated successfully. Apr 28 01:24:50.440752 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:24:50.442080 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:24:50.466401 systemd-logind[1614]: New session '46' of user 'core' with class 'user' and type 'tty'. Apr 28 01:24:50.482430 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 28 01:24:51.226987 sshd[6194]: Connection closed by 10.0.0.1 port 59904 Apr 28 01:24:51.239522 sshd-session[6188]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:51.255795 systemd[1]: sshd@44-4105-10.0.0.30:22-10.0.0.1:59904.service: Deactivated successfully. 
Apr 28 01:24:51.314817 systemd[1]: session-46.scope: Deactivated successfully. Apr 28 01:24:51.330353 systemd-logind[1614]: Session 46 logged out. Waiting for processes to exit. Apr 28 01:24:51.410885 systemd-logind[1614]: Removed session 46. Apr 28 01:24:53.776370 kubelet[3120]: E0428 01:24:53.775846 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:54.796305 kubelet[3120]: E0428 01:24:54.795998 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:24:56.298392 systemd[1]: Started sshd@45-4106-10.0.0.30:22-10.0.0.1:59906.service - OpenSSH per-connection server daemon (10.0.0.1:59906). Apr 28 01:24:56.503488 sshd[6234]: Accepted publickey for core from 10.0.0.1 port 59906 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:24:56.505646 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:24:56.515178 systemd-logind[1614]: New session '47' of user 'core' with class 'user' and type 'tty'. Apr 28 01:24:56.531121 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 28 01:24:56.891625 sshd[6238]: Connection closed by 10.0.0.1 port 59906 Apr 28 01:24:56.893497 sshd-session[6234]: pam_unix(sshd:session): session closed for user core Apr 28 01:24:56.932442 systemd[1]: sshd@45-4106-10.0.0.30:22-10.0.0.1:59906.service: Deactivated successfully. Apr 28 01:24:56.992684 systemd[1]: session-47.scope: Deactivated successfully. Apr 28 01:24:57.028261 systemd-logind[1614]: Session 47 logged out. Waiting for processes to exit. Apr 28 01:24:57.029890 systemd-logind[1614]: Removed session 47. 
Apr 28 01:25:01.935665 systemd[1]: Started sshd@46-15-10.0.0.30:22-10.0.0.1:52250.service - OpenSSH per-connection server daemon (10.0.0.1:52250). Apr 28 01:25:02.030276 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 52250 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:02.031249 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:02.045643 systemd-logind[1614]: New session '48' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:02.050421 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 28 01:25:02.258467 sshd[6275]: Connection closed by 10.0.0.1 port 52250 Apr 28 01:25:02.259200 sshd-session[6271]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:02.283996 systemd[1]: sshd@46-15-10.0.0.30:22-10.0.0.1:52250.service: Deactivated successfully. Apr 28 01:25:02.317917 systemd[1]: session-48.scope: Deactivated successfully. Apr 28 01:25:02.324243 systemd-logind[1614]: Session 48 logged out. Waiting for processes to exit. Apr 28 01:25:02.325682 systemd-logind[1614]: Removed session 48. Apr 28 01:25:07.396571 systemd[1]: Started sshd@47-4107-10.0.0.30:22-10.0.0.1:52252.service - OpenSSH per-connection server daemon (10.0.0.1:52252). Apr 28 01:25:07.496584 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 52252 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:07.497642 sshd-session[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:07.646635 systemd-logind[1614]: New session '49' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:07.660611 systemd[1]: Started session-49.scope - Session 49 of User core. 
Apr 28 01:25:08.211999 sshd[6313]: Connection closed by 10.0.0.1 port 52252 Apr 28 01:25:08.213233 sshd-session[6309]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:08.223108 systemd[1]: sshd@47-4107-10.0.0.30:22-10.0.0.1:52252.service: Deactivated successfully. Apr 28 01:25:08.243997 systemd[1]: session-49.scope: Deactivated successfully. Apr 28 01:25:08.250932 systemd-logind[1614]: Session 49 logged out. Waiting for processes to exit. Apr 28 01:25:08.252545 systemd-logind[1614]: Removed session 49. Apr 28 01:25:13.274147 systemd[1]: Started sshd@48-16-10.0.0.30:22-10.0.0.1:48612.service - OpenSSH per-connection server daemon (10.0.0.1:48612). Apr 28 01:25:13.706511 sshd[6362]: Accepted publickey for core from 10.0.0.1 port 48612 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:13.714491 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:13.831878 systemd-logind[1614]: New session '50' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:13.944470 systemd[1]: Started session-50.scope - Session 50 of User core. Apr 28 01:25:16.535818 sshd[6366]: Connection closed by 10.0.0.1 port 48612 Apr 28 01:25:16.539170 sshd-session[6362]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:16.707487 systemd[1]: sshd@48-16-10.0.0.30:22-10.0.0.1:48612.service: Deactivated successfully. Apr 28 01:25:16.837929 systemd[1]: session-50.scope: Deactivated successfully. Apr 28 01:25:16.846953 systemd[1]: session-50.scope: Consumed 1.848s CPU time, 15.8M memory peak. Apr 28 01:25:16.852245 systemd-logind[1614]: Session 50 logged out. Waiting for processes to exit. Apr 28 01:25:16.860930 systemd-logind[1614]: Removed session 50. Apr 28 01:25:21.751063 systemd[1]: Started sshd@49-12301-10.0.0.30:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384). 
Apr 28 01:25:22.051892 sshd[6399]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:22.077249 sshd-session[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:22.203285 systemd-logind[1614]: New session '51' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:22.256846 systemd[1]: Started session-51.scope - Session 51 of User core. Apr 28 01:25:24.213621 kubelet[3120]: E0428 01:25:24.206443 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:24.417806 containerd[1638]: time="2026-04-28T01:25:24.415815138Z" level=info msg="TaskExit event container_id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" id:\"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" pid:4482 exit_status:1 exited_at:{seconds:1777338468 nanos:658378364}" Apr 28 01:25:24.993861 sshd[6409]: Connection closed by 10.0.0.1 port 35384 Apr 28 01:25:24.998608 sshd-session[6399]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:25.149889 systemd[1]: sshd@49-12301-10.0.0.30:22-10.0.0.1:35384.service: Deactivated successfully. Apr 28 01:25:25.209054 systemd[1]: session-51.scope: Deactivated successfully. Apr 28 01:25:25.215456 systemd[1]: session-51.scope: Consumed 1.893s CPU time, 18M memory peak. Apr 28 01:25:25.251295 containerd[1638]: time="2026-04-28T01:25:25.247950028Z" level=info msg="StopContainer for \"307693d5f6b27635f480b3af7e65ec1b92e0753b3f946c102f39b7b3864b2cdc\" returns successfully" Apr 28 01:25:25.309583 systemd-logind[1614]: Session 51 logged out. Waiting for processes to exit. 
Apr 28 01:25:25.393786 kubelet[3120]: E0428 01:25:25.316368 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:25.445901 systemd-logind[1614]: Removed session 51. Apr 28 01:25:25.756697 containerd[1638]: time="2026-04-28T01:25:25.756133840Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for container name:\"kube-scheduler\" attempt:3" Apr 28 01:25:25.902351 containerd[1638]: time="2026-04-28T01:25:25.901578322Z" level=info msg="Container 74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e: CDI devices from CRI Config.CDIDevices: []" Apr 28 01:25:26.400737 containerd[1638]: time="2026-04-28T01:25:26.400314643Z" level=info msg="CreateContainer within sandbox \"e82fdc1d5f53a8fce152ad99c68af9170281cfec471286d1bb09abdd165edcda\" for name:\"kube-scheduler\" attempt:3 returns container id \"74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e\"" Apr 28 01:25:26.431872 containerd[1638]: time="2026-04-28T01:25:26.431469912Z" level=info msg="StartContainer for \"74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e\"" Apr 28 01:25:26.457816 containerd[1638]: time="2026-04-28T01:25:26.457306700Z" level=info msg="connecting to shim 74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e" address="unix:///run/containerd/s/87324bb63ef3a4130ae0dbb17ad0d3ce89ecf0940cd570753f29942f5d39ca08" protocol=ttrpc version=3 Apr 28 01:25:27.851001 systemd[1]: Started cri-containerd-74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e.scope - libcontainer container 74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e. Apr 28 01:25:30.206715 systemd[1]: Started sshd@50-17-10.0.0.30:22-10.0.0.1:48276.service - OpenSSH per-connection server daemon (10.0.0.1:48276). 
Apr 28 01:25:30.794239 containerd[1638]: time="2026-04-28T01:25:30.783166577Z" level=info msg="StartContainer for \"74c4b95ed3646b2cca4ec999d8d9d1b17f76db48c0cf9c31421d0beef3ec771e\" returns successfully" Apr 28 01:25:31.526801 kubelet[3120]: E0428 01:25:31.521513 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.635s" Apr 28 01:25:32.166361 sshd[6486]: Accepted publickey for core from 10.0.0.1 port 48276 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:32.214439 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:33.039393 systemd-logind[1614]: New session '52' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:33.045666 kubelet[3120]: E0428 01:25:33.045587 3120 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.173s" Apr 28 01:25:33.073594 systemd[1]: Started session-52.scope - Session 52 of User core. 
Apr 28 01:25:33.279539 kubelet[3120]: E0428 01:25:33.276545 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:35.458448 kubelet[3120]: E0428 01:25:35.425782 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:37.059194 kubelet[3120]: E0428 01:25:37.057133 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:37.194106 sshd[6509]: Connection closed by 10.0.0.1 port 48276 Apr 28 01:25:37.220761 sshd-session[6486]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:37.524829 systemd[1]: sshd@50-17-10.0.0.30:22-10.0.0.1:48276.service: Deactivated successfully. Apr 28 01:25:38.010140 systemd[1]: session-52.scope: Deactivated successfully. Apr 28 01:25:38.023767 systemd[1]: session-52.scope: Consumed 2.236s CPU time, 16.8M memory peak. Apr 28 01:25:38.232036 systemd-logind[1614]: Session 52 logged out. Waiting for processes to exit. Apr 28 01:25:38.369192 systemd-logind[1614]: Removed session 52. Apr 28 01:25:38.798640 kubelet[3120]: E0428 01:25:38.798478 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:42.212747 systemd[1]: Started sshd@51-12302-10.0.0.30:22-10.0.0.1:60398.service - OpenSSH per-connection server daemon (10.0.0.1:60398). 
Apr 28 01:25:42.588135 sshd[6555]: Accepted publickey for core from 10.0.0.1 port 60398 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:42.597682 sshd-session[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:42.649591 systemd-logind[1614]: New session '53' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:42.704784 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 28 01:25:43.112038 kubelet[3120]: E0428 01:25:43.111643 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:43.495287 sshd[6559]: Connection closed by 10.0.0.1 port 60398 Apr 28 01:25:43.496276 sshd-session[6555]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:43.508196 systemd[1]: sshd@51-12302-10.0.0.30:22-10.0.0.1:60398.service: Deactivated successfully. Apr 28 01:25:43.511435 systemd[1]: session-53.scope: Deactivated successfully. Apr 28 01:25:43.526644 systemd-logind[1614]: Session 53 logged out. Waiting for processes to exit. Apr 28 01:25:43.547553 systemd-logind[1614]: Removed session 53. Apr 28 01:25:44.187386 kubelet[3120]: E0428 01:25:44.187069 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:45.254352 kubelet[3120]: E0428 01:25:45.253368 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:48.655653 systemd[1]: Started sshd@52-12303-10.0.0.30:22-10.0.0.1:60402.service - OpenSSH per-connection server daemon (10.0.0.1:60402). 
Apr 28 01:25:48.832517 sshd[6593]: Accepted publickey for core from 10.0.0.1 port 60402 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:48.836988 sshd-session[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:48.844028 systemd-logind[1614]: New session '54' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:48.851447 systemd[1]: Started session-54.scope - Session 54 of User core. Apr 28 01:25:48.976649 sshd[6597]: Connection closed by 10.0.0.1 port 60402 Apr 28 01:25:48.977186 sshd-session[6593]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:48.986315 systemd[1]: sshd@52-12303-10.0.0.30:22-10.0.0.1:60402.service: Deactivated successfully. Apr 28 01:25:48.989881 systemd[1]: session-54.scope: Deactivated successfully. Apr 28 01:25:49.014893 systemd-logind[1614]: Session 54 logged out. Waiting for processes to exit. Apr 28 01:25:49.027376 systemd-logind[1614]: Removed session 54. Apr 28 01:25:49.926625 kubelet[3120]: E0428 01:25:49.926150 3120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:25:54.197593 systemd[1]: Started sshd@53-18-10.0.0.30:22-10.0.0.1:49034.service - OpenSSH per-connection server daemon (10.0.0.1:49034). Apr 28 01:25:54.548753 sshd[6631]: Accepted publickey for core from 10.0.0.1 port 49034 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:25:54.577949 sshd-session[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:25:54.735740 systemd-logind[1614]: New session '55' of user 'core' with class 'user' and type 'tty'. Apr 28 01:25:54.757912 systemd[1]: Started session-55.scope - Session 55 of User core. 
Apr 28 01:25:55.226847 sshd[6635]: Connection closed by 10.0.0.1 port 49034 Apr 28 01:25:55.228631 sshd-session[6631]: pam_unix(sshd:session): session closed for user core Apr 28 01:25:55.236913 systemd[1]: sshd@53-18-10.0.0.30:22-10.0.0.1:49034.service: Deactivated successfully. Apr 28 01:25:55.241071 systemd[1]: session-55.scope: Deactivated successfully. Apr 28 01:25:55.243266 systemd-logind[1614]: Session 55 logged out. Waiting for processes to exit. Apr 28 01:25:55.244316 systemd-logind[1614]: Removed session 55. Apr 28 01:26:00.273460 systemd[1]: Started sshd@54-8203-10.0.0.30:22-10.0.0.1:45530.service - OpenSSH per-connection server daemon (10.0.0.1:45530). Apr 28 01:26:00.445124 sshd[6672]: Accepted publickey for core from 10.0.0.1 port 45530 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:26:00.460022 sshd-session[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:26:00.491862 systemd-logind[1614]: New session '56' of user 'core' with class 'user' and type 'tty'. Apr 28 01:26:00.524407 systemd[1]: Started session-56.scope - Session 56 of User core. Apr 28 01:26:01.007918 sshd[6682]: Connection closed by 10.0.0.1 port 45530 Apr 28 01:26:01.009473 sshd-session[6672]: pam_unix(sshd:session): session closed for user core Apr 28 01:26:01.032598 systemd[1]: sshd@54-8203-10.0.0.30:22-10.0.0.1:45530.service: Deactivated successfully. Apr 28 01:26:01.037354 systemd[1]: session-56.scope: Deactivated successfully. Apr 28 01:26:01.079950 systemd-logind[1614]: Session 56 logged out. Waiting for processes to exit. Apr 28 01:26:01.104143 systemd-logind[1614]: Removed session 56. Apr 28 01:26:06.115032 systemd[1]: Started sshd@55-19-10.0.0.30:22-10.0.0.1:45538.service - OpenSSH per-connection server daemon (10.0.0.1:45538). 
Apr 28 01:26:06.292253 sshd[6726]: Accepted publickey for core from 10.0.0.1 port 45538 ssh2: RSA SHA256:H+Vux/uLcwSNGfLerJ6bpcTovGn/hDI0W9YvrkmMHk4 Apr 28 01:26:06.302830 sshd-session[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:26:06.323066 systemd-logind[1614]: New session '57' of user 'core' with class 'user' and type 'tty'. Apr 28 01:26:06.331420 systemd[1]: Started session-57.scope - Session 57 of User core. Apr 28 01:26:06.508205 sshd[6730]: Connection closed by 10.0.0.1 port 45538 Apr 28 01:26:06.508785 sshd-session[6726]: pam_unix(sshd:session): session closed for user core Apr 28 01:26:06.513532 systemd[1]: sshd@55-19-10.0.0.30:22-10.0.0.1:45538.service: Deactivated successfully. Apr 28 01:26:06.515639 systemd[1]: session-57.scope: Deactivated successfully. Apr 28 01:26:06.516480 systemd-logind[1614]: Session 57 logged out. Waiting for processes to exit. Apr 28 01:26:06.524568 systemd-logind[1614]: Removed session 57.