Apr 20 17:24:50.704934 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 17:24:50.705283 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 17:24:50.705299 kernel: BIOS-provided physical RAM map:
Apr 20 17:24:50.705348 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 20 17:24:50.705361 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 20 17:24:50.705405 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 20 17:24:50.705449 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 20 17:24:50.705559 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Apr 20 17:24:50.705568 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 20 17:24:50.705615 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 20 17:24:50.705623 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 20 17:24:50.705698 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 20 17:24:50.705740 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 20 17:24:50.705783 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 20 17:24:50.705904 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 20 17:24:50.705912 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 20 17:24:50.705920 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 17:24:50.705928 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 17:24:50.705935 kernel: NX (Execute Disable) protection: active
Apr 20 17:24:50.705943 kernel: APIC: Static calls initialized
Apr 20 17:24:50.705986 kernel: e820: update [mem 0x9a142018-0x9a14bc57] usable ==> usable
Apr 20 17:24:50.706060 kernel: e820: update [mem 0x9a105018-0x9a141e57] usable ==> usable
Apr 20 17:24:50.706103 kernel: extended physical RAM map:
Apr 20 17:24:50.706111 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 20 17:24:50.706120 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 20 17:24:50.706128 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 20 17:24:50.706139 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 20 17:24:50.706149 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a105017] usable
Apr 20 17:24:50.706157 kernel: reserve setup_data: [mem 0x000000009a105018-0x000000009a141e57] usable
Apr 20 17:24:50.706164 kernel: reserve setup_data: [mem 0x000000009a141e58-0x000000009a142017] usable
Apr 20 17:24:50.706175 kernel: reserve setup_data: [mem 0x000000009a142018-0x000000009a14bc57] usable
Apr 20 17:24:50.706262 kernel: reserve setup_data: [mem 0x000000009a14bc58-0x000000009b8ecfff] usable
Apr 20 17:24:50.706315 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 20 17:24:50.706327 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 20 17:24:50.706335 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 20 17:24:50.706342 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 20 17:24:50.706350 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 20 17:24:50.706357 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 20 17:24:50.706368 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 20 17:24:50.706376 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 20 17:24:50.706460 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 17:24:50.706507 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 17:24:50.706519 kernel: efi: EFI v2.7 by EDK II
Apr 20 17:24:50.706528 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1b4018 RNG=0x9bb73018
Apr 20 17:24:50.706536 kernel: random: crng init done
Apr 20 17:24:50.706544 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Apr 20 17:24:50.706590 kernel: secureboot: Secure boot enabled
Apr 20 17:24:50.706598 kernel: SMBIOS 2.8 present.
Apr 20 17:24:50.706606 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 20 17:24:50.706614 kernel: DMI: Memory slots populated: 1/1
Apr 20 17:24:50.706624 kernel: Hypervisor detected: KVM
Apr 20 17:24:50.706635 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 20 17:24:50.706643 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 17:24:50.706651 kernel: kvm-clock: using sched offset of 26750180840 cycles
Apr 20 17:24:50.706661 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 17:24:50.706670 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 17:24:50.706716 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 17:24:50.706760 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 17:24:50.706770 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 20 17:24:50.706780 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 20 17:24:50.706874 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 17:24:50.706884 kernel: Using GB pages for direct mapping
Apr 20 17:24:50.706929 kernel: ACPI: Early table checksum verification disabled
Apr 20 17:24:50.706984 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Apr 20 17:24:50.706993 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 20 17:24:50.707002 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 17:24:50.707054 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 17:24:50.707067 kernel: ACPI: FACS 0x000000009BBDD000 000040
Apr 20 17:24:50.707076 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 17:24:50.707085 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 17:24:50.707130 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 17:24:50.707140 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 17:24:50.707149 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 20 17:24:50.707158 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Apr 20 17:24:50.707166 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Apr 20 17:24:50.707176 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Apr 20 17:24:50.707185 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Apr 20 17:24:50.707266 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Apr 20 17:24:50.707275 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Apr 20 17:24:50.707286 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Apr 20 17:24:50.707332 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Apr 20 17:24:50.707341 kernel: No NUMA configuration found
Apr 20 17:24:50.707349 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Apr 20 17:24:50.707358 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Apr 20 17:24:50.707404 kernel: Zone ranges:
Apr 20 17:24:50.707414 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 17:24:50.707424 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Apr 20 17:24:50.707433 kernel: Normal empty
Apr 20 17:24:50.707443 kernel: Device empty
Apr 20 17:24:50.707451 kernel: Movable zone start for each node
Apr 20 17:24:50.707459 kernel: Early memory node ranges
Apr 20 17:24:50.707467 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Apr 20 17:24:50.707518 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Apr 20 17:24:50.707528 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Apr 20 17:24:50.707538 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Apr 20 17:24:50.707547 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Apr 20 17:24:50.708326 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Apr 20 17:24:50.708345 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 17:24:50.708355 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Apr 20 17:24:50.708416 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 20 17:24:50.708426 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 20 17:24:50.708436 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 20 17:24:50.708445 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Apr 20 17:24:50.708455 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 17:24:50.708504 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 17:24:50.708515 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 17:24:50.708570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 17:24:50.708581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 17:24:50.708595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 17:24:50.708605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 17:24:50.708615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 17:24:50.708624 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 17:24:50.708634 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 17:24:50.708691 kernel: TSC deadline timer available
Apr 20 17:24:50.708702 kernel: CPU topo: Max. logical packages: 1
Apr 20 17:24:50.708712 kernel: CPU topo: Max. logical dies: 1
Apr 20 17:24:50.708721 kernel: CPU topo: Max. dies per package: 1
Apr 20 17:24:50.708774 kernel: CPU topo: Max. threads per core: 1
Apr 20 17:24:50.709280 kernel: CPU topo: Num. cores per package: 4
Apr 20 17:24:50.709338 kernel: CPU topo: Num. threads per package: 4
Apr 20 17:24:50.709348 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 17:24:50.709393 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 17:24:50.709403 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 17:24:50.709453 kernel: kvm-guest: setup PV sched yield
Apr 20 17:24:50.709462 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 20 17:24:50.709471 kernel: Booting paravirtualized kernel on KVM
Apr 20 17:24:50.709481 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 17:24:50.709529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 17:24:50.709539 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 17:24:50.709550 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 17:24:50.709561 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 17:24:50.709571 kernel: kvm-guest: PV spinlocks enabled
Apr 20 17:24:50.709580 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 17:24:50.709593 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 17:24:50.709644 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 17:24:50.709654 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 17:24:50.709664 kernel: Fallback order for Node 0: 0
Apr 20 17:24:50.709675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Apr 20 17:24:50.709686 kernel: Policy zone: DMA32
Apr 20 17:24:50.709697 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 17:24:50.709744 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 17:24:50.709754 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 17:24:50.709763 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 17:24:50.709773 kernel: Dynamic Preempt: voluntary
Apr 20 17:24:50.709782 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 17:24:50.710729 kernel: rcu: RCU event tracing is enabled.
Apr 20 17:24:50.710743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 17:24:50.710869 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 17:24:50.710880 kernel: Rude variant of Tasks RCU enabled.
Apr 20 17:24:50.710890 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 17:24:50.710936 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 17:24:50.710947 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 17:24:50.710957 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 17:24:50.710966 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 17:24:50.710975 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 17:24:50.711030 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 17:24:50.711040 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 17:24:50.711050 kernel: Console: colour dummy device 80x25
Apr 20 17:24:50.711061 kernel: printk: legacy console [ttyS0] enabled
Apr 20 17:24:50.711072 kernel: ACPI: Core revision 20240827
Apr 20 17:24:50.711082 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 17:24:50.711093 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 17:24:50.711148 kernel: x2apic enabled
Apr 20 17:24:50.711158 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 17:24:50.711169 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 17:24:50.711178 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 17:24:50.711189 kernel: kvm-guest: setup PV IPIs
Apr 20 17:24:50.711199 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 17:24:50.711253 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 17:24:50.711309 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 17:24:50.711321 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 17:24:50.711332 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 17:24:50.711342 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 17:24:50.711353 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 17:24:50.711364 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 17:24:50.711374 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 17:24:50.711466 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 17:24:50.711478 kernel: RETBleed: Vulnerable
Apr 20 17:24:50.711487 kernel: Speculative Store Bypass: Vulnerable
Apr 20 17:24:50.711496 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 17:24:50.711506 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 17:24:50.711515 kernel: active return thunk: its_return_thunk
Apr 20 17:24:50.711523 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 17:24:50.711573 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 17:24:50.711583 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 17:24:50.711592 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 17:24:50.711601 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 17:24:50.711612 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 17:24:50.711621 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 17:24:50.711630 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 17:24:50.711680 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 17:24:50.711691 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 17:24:50.711702 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 17:24:50.711713 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 17:24:50.711724 kernel: Freeing SMP alternatives memory: 32K
Apr 20 17:24:50.711734 kernel: pid_max: default: 32768 minimum: 301
Apr 20 17:24:50.711743 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 17:24:50.711904 kernel: landlock: Up and running.
Apr 20 17:24:50.711917 kernel: SELinux: Initializing.
Apr 20 17:24:50.711927 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 17:24:50.711938 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 17:24:50.711949 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 17:24:50.711960 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 17:24:50.711970 kernel: signal: max sigframe size: 3632
Apr 20 17:24:50.712025 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 17:24:50.712036 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 17:24:50.712045 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 17:24:50.712055 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 17:24:50.712065 kernel: smp: Bringing up secondary CPUs ...
Apr 20 17:24:50.712073 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 17:24:50.712081 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 17:24:50.712127 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 17:24:50.712137 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 17:24:50.712148 kernel: Memory: 2381832K/2552216K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 164492K reserved, 0K cma-reserved)
Apr 20 17:24:50.712158 kernel: devtmpfs: initialized
Apr 20 17:24:50.712167 kernel: x86/mm: Memory block size: 128MB
Apr 20 17:24:50.712177 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Apr 20 17:24:50.712187 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Apr 20 17:24:50.712283 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 17:24:50.712295 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 17:24:50.712304 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 17:24:50.712313 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 17:24:50.712323 kernel: audit: initializing netlink subsys (disabled)
Apr 20 17:24:50.712333 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 17:24:50.712343 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 17:24:50.712402 kernel: audit: type=2000 audit(1776705866.903:1): state=initialized audit_enabled=0 res=1
Apr 20 17:24:50.712413 kernel: cpuidle: using governor menu
Apr 20 17:24:50.712424 kernel: efi: Freeing EFI boot services memory: 42800K
Apr 20 17:24:50.712435 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 17:24:50.712445 kernel: dca service started, version 1.12.1
Apr 20 17:24:50.712455 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 20 17:24:50.712464 kernel: PCI: Using configuration type 1 for base access
Apr 20 17:24:50.712518 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 17:24:50.712528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 17:24:50.712538 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 17:24:50.712549 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 17:24:50.712559 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 17:24:50.712567 kernel: ACPI: Added _OSI(Module Device)
Apr 20 17:24:50.712578 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 17:24:50.712628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 17:24:50.712639 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 17:24:50.712648 kernel: ACPI: Interpreter enabled
Apr 20 17:24:50.712657 kernel: ACPI: PM: (supports S0 S5)
Apr 20 17:24:50.712667 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 17:24:50.712677 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 17:24:50.712686 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 17:24:50.712696 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 17:24:50.712744 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 17:24:50.713093 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 17:24:50.713380 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 17:24:50.715070 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 17:24:50.715091 kernel: PCI host bridge to bus 0000:00
Apr 20 17:24:50.715431 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 17:24:50.715580 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 17:24:50.715723 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 17:24:50.717366 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 20 17:24:50.717507 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 20 17:24:50.717680 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 20 17:24:50.719991 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 17:24:50.720546 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 17:24:50.720707 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 17:24:50.722324 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 20 17:24:50.722470 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 20 17:24:50.722682 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 20 17:24:50.722944 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 17:24:50.723078 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0xe0 took 13671 usecs
Apr 20 17:24:50.723348 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 17:24:50.723470 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 20 17:24:50.723594 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 20 17:24:50.723772 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 20 17:24:50.726103 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 17:24:50.726398 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 20 17:24:50.726553 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 20 17:24:50.726693 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 20 17:24:50.733194 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 17:24:50.733397 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 20 17:24:50.733517 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 20 17:24:50.733629 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 20 17:24:50.733740 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 20 17:24:50.734031 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 17:24:50.734267 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 17:24:50.734384 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 17578 usecs
Apr 20 17:24:50.734505 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 17:24:50.734618 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 20 17:24:50.734728 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 20 17:24:50.734946 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 17:24:50.735116 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 20 17:24:50.735127 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 17:24:50.735138 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 17:24:50.735147 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 17:24:50.735156 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 17:24:50.735165 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 17:24:50.739470 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 17:24:50.739491 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 17:24:50.739500 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 17:24:50.739583 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 17:24:50.739593 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 17:24:50.739602 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 17:24:50.739612 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 17:24:50.739925 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 17:24:50.739938 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 17:24:50.739950 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 17:24:50.739960 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 17:24:50.739971 kernel: iommu: Default domain type: Translated
Apr 20 17:24:50.739983 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 17:24:50.739994 kernel: efivars: Registered efivars operations
Apr 20 17:24:50.740005 kernel: PCI: Using ACPI for IRQ routing
Apr 20 17:24:50.740056 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 17:24:50.740067 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Apr 20 17:24:50.740076 kernel: e820: reserve RAM buffer [mem 0x9a105018-0x9bffffff]
Apr 20 17:24:50.740085 kernel: e820: reserve RAM buffer [mem 0x9a142018-0x9bffffff]
Apr 20 17:24:50.740094 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Apr 20 17:24:50.740103 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Apr 20 17:24:50.740441 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 17:24:50.740665 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 17:24:50.740914 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 17:24:50.740935 kernel: vgaarb: loaded
Apr 20 17:24:50.740947 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 17:24:50.740958 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 17:24:50.740970 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 17:24:50.740982 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 17:24:50.741050 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 17:24:50.741063 kernel: pnp: PnP ACPI init
Apr 20 17:24:50.743141 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 20 17:24:50.743171 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 17:24:50.743184 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 17:24:50.743195 kernel: NET: Registered PF_INET protocol family
Apr 20 17:24:50.743266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 17:24:50.743278 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 17:24:50.743290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 17:24:50.743302 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 17:24:50.743314 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 17:24:50.743327 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 17:24:50.743339 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 17:24:50.743387 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 17:24:50.743399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 17:24:50.743411 kernel: NET: Registered PF_XDP protocol family
Apr 20 17:24:50.743591 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 20 17:24:50.743762 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 20 17:24:50.744094 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 17:24:50.744277 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 17:24:50.744458 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 17:24:50.744579 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 20 17:24:50.744700 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 20 17:24:50.748150 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 20 17:24:50.748297 kernel: PCI: CLS 0 bytes, default 64
Apr 20 17:24:50.748312 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 17:24:50.749505 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 17:24:50.749522 kernel: Initialise system trusted keyrings
Apr 20 17:24:50.749533 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 17:24:50.749579 kernel: Key type asymmetric registered
Apr 20 17:24:50.749589 kernel: Asymmetric key parser 'x509' registered
Apr 20 17:24:50.752555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 17:24:50.752619 kernel: io scheduler mq-deadline registered
Apr 20 17:24:50.752673 kernel: io scheduler kyber registered
Apr 20 17:24:50.752685 kernel: io scheduler bfq registered
Apr 20 17:24:50.752695 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 17:24:50.752709 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 17:24:50.752722 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 17:24:50.752734 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 17:24:50.752746 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 17:24:50.752759 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 17:24:50.752897 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 17:24:50.752909 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 17:24:50.752921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 17:24:50.753281 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 17:24:50.753306 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 20 17:24:50.753451 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 17:24:50.753665 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T17:24:38 UTC (1776705878)
Apr 20 17:24:50.753910 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 20 17:24:50.753928 kernel: intel_pstate: CPU model not supported
Apr 20 17:24:50.753941 kernel: efifb: probing for efifb
Apr 20 17:24:50.753954 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 20 17:24:50.753966 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 20 17:24:50.753979 kernel: efifb: scrolling: redraw
Apr 20 17:24:50.754046 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 20 17:24:50.754059 kernel: Console: switching to colour frame buffer device 160x50
Apr 20 17:24:50.754071 kernel: fb0: EFI VGA frame buffer device
Apr 20 17:24:50.754084 kernel: pstore: Using crash dump compression: deflate
Apr 20 17:24:50.754140 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 20 17:24:50.754153 kernel: NET: Registered PF_INET6 protocol family
Apr 20 17:24:50.754165 kernel: Segment Routing with IPv6
Apr 20 17:24:50.754178 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 17:24:50.754190 kernel: NET: Registered PF_PACKET protocol family
Apr 20 17:24:50.754202 kernel: Key type dns_resolver registered
Apr 20 17:24:50.754259 kernel: IPI shorthand broadcast: enabled
Apr 20 17:24:50.754270 kernel: sched_clock: Marking stable (9559076480, 3844432456)->(15469392068, -2065883132)
Apr 20 17:24:50.756546 kernel: registered taskstats version 1
Apr 20 17:24:50.756556 kernel: Loading compiled-in X.509 certificates
Apr 20 17:24:50.756566 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
Apr 20 17:24:50.756576 kernel: Demotion targets for Node 0: null
Apr 20 17:24:50.756585 kernel: Key type .fscrypt registered
Apr 20 17:24:50.756594 kernel: Key type fscrypt-provisioning registered
Apr 20 17:24:50.756604 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 20 17:24:50.756656 kernel: ima: Allocated hash algorithm: sha1 Apr 20 17:24:50.756666 kernel: ima: No architecture policies found Apr 20 17:24:50.756741 kernel: clk: Disabling unused clocks Apr 20 17:24:50.756751 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 20 17:24:50.756761 kernel: Write protecting the kernel read-only data: 47104k Apr 20 17:24:50.756771 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 20 17:24:50.756885 kernel: Run /init as init process Apr 20 17:24:50.756937 kernel: with arguments: Apr 20 17:24:50.756949 kernel: /init Apr 20 17:24:50.757038 kernel: with environment: Apr 20 17:24:50.757049 kernel: HOME=/ Apr 20 17:24:50.757060 kernel: TERM=linux Apr 20 17:24:50.757071 kernel: SCSI subsystem initialized Apr 20 17:24:50.757083 kernel: libata version 3.00 loaded. Apr 20 17:24:50.757585 kernel: ahci 0000:00:1f.2: version 3.0 Apr 20 17:24:50.757609 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 20 17:24:50.757744 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 20 17:24:50.757953 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 20 17:24:50.758087 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 20 17:24:50.758384 kernel: scsi host0: ahci Apr 20 17:24:50.758572 kernel: scsi host1: ahci Apr 20 17:24:50.758734 kernel: scsi host2: ahci Apr 20 17:24:50.758963 kernel: scsi host3: ahci Apr 20 17:24:50.759167 kernel: scsi host4: ahci Apr 20 17:24:50.759349 kernel: scsi host5: ahci Apr 20 17:24:50.759441 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 20 17:24:50.759451 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 20 17:24:50.759460 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 20 17:24:50.759470 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 20 17:24:50.759479 kernel: 
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 20 17:24:50.759489 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 20 17:24:50.759498 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 20 17:24:50.759541 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 20 17:24:50.759551 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 20 17:24:50.759560 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 17:24:50.759570 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 20 17:24:50.759579 kernel: ata3.00: applying bridge limits Apr 20 17:24:50.759588 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 20 17:24:50.759597 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 20 17:24:50.759640 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 20 17:24:50.759649 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 17:24:50.759659 kernel: ata3.00: configured for UDMA/100 Apr 20 17:24:50.760012 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 20 17:24:50.760142 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 20 17:24:50.761447 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 20 17:24:50.761531 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 20 17:24:50.761701 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 20 17:24:50.761719 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 20 17:24:50.761731 kernel: GPT:16515071 != 27000831 Apr 20 17:24:50.761742 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 20 17:24:50.762300 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 20 17:24:50.762320 kernel: GPT:16515071 != 27000831 Apr 20 17:24:50.762377 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 20 17:24:50.762387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 20 17:24:50.762398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 20 17:24:50.762408 kernel: device-mapper: uevent: version 1.0.3 Apr 20 17:24:50.762418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 20 17:24:50.762428 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 20 17:24:50.762479 kernel: raid6: avx512x4 gen() 2831 MB/s Apr 20 17:24:50.762491 kernel: raid6: avx512x2 gen() 21681 MB/s Apr 20 17:24:50.762500 kernel: raid6: avx512x1 gen() 19439 MB/s Apr 20 17:24:50.762509 kernel: raid6: avx2x4 gen() 15467 MB/s Apr 20 17:24:50.762520 kernel: raid6: avx2x2 gen() 16943 MB/s Apr 20 17:24:50.762529 kernel: raid6: avx2x1 gen() 11206 MB/s Apr 20 17:24:50.762538 kernel: raid6: using algorithm avx512x2 gen() 21681 MB/s Apr 20 17:24:50.762548 kernel: raid6: .... xor() 13980 MB/s, rmw enabled Apr 20 17:24:50.762600 kernel: raid6: using avx512x2 recovery algorithm Apr 20 17:24:50.762612 kernel: xor: automatically using best checksumming function avx Apr 20 17:24:50.762622 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 20 17:24:50.762633 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181) Apr 20 17:24:50.762645 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f Apr 20 17:24:50.762657 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 20 17:24:50.762667 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 20 17:24:50.762724 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 20 17:24:50.762735 kernel: loop: module loaded Apr 20 17:24:50.762746 kernel: loop0: detected capacity change from 0 to 106960 Apr 20 17:24:50.762756 kernel: squashfs: 
version 4.0 (2009/01/31) Phillip Lougher Apr 20 17:24:50.762767 kernel: hrtimer: interrupt took 12142574 ns Apr 20 17:24:50.762781 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Apr 20 17:24:50.762917 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 20 17:24:50.762931 systemd[1]: Successfully made /usr/ read-only. Apr 20 17:24:50.762944 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 17:24:50.762957 systemd[1]: Detected virtualization kvm. Apr 20 17:24:50.762969 systemd[1]: Detected architecture x86-64. Apr 20 17:24:50.762981 systemd[1]: Running in initrd. Apr 20 17:24:50.763030 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 17:24:50.763043 systemd[1]: No hostname configured, using default hostname. Apr 20 17:24:50.763056 systemd[1]: Hostname set to . Apr 20 17:24:50.763068 systemd[1]: Queued start job for default target initrd.target. Apr 20 17:24:50.763081 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 20 17:24:50.763094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 17:24:50.763108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 17:24:50.763156 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 20 17:24:50.763169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 20 17:24:50.763181 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 20 17:24:50.763194 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 20 17:24:50.763247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 17:24:50.763297 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 17:24:50.763309 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 20 17:24:50.763322 systemd[1]: Reached target paths.target - Path Units. Apr 20 17:24:50.763334 systemd[1]: Reached target slices.target - Slice Units. Apr 20 17:24:50.763347 systemd[1]: Reached target swap.target - Swaps. Apr 20 17:24:50.763360 systemd[1]: Reached target timers.target - Timer Units. Apr 20 17:24:50.763372 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 17:24:50.763419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 17:24:50.763432 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 17:24:50.763445 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 20 17:24:50.763457 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 20 17:24:50.763469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 17:24:50.763481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 17:24:50.763493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 20 17:24:50.763541 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 17:24:50.763553 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 20 17:24:50.763565 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Apr 20 17:24:50.763575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 17:24:50.763587 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 20 17:24:50.763597 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 20 17:24:50.763607 systemd[1]: Starting systemd-fsck-usr.service... Apr 20 17:24:50.763660 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 17:24:50.763673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 17:24:50.763686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 17:24:50.763736 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 20 17:24:50.763749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 17:24:50.763762 systemd[1]: Finished systemd-fsck-usr.service. Apr 20 17:24:50.763922 systemd-journald[320]: Collecting audit messages is enabled. Apr 20 17:24:50.763990 kernel: audit: type=1130 audit(1776705890.691:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:50.764005 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 17:24:50.764021 systemd-journald[320]: Journal started Apr 20 17:24:50.764047 systemd-journald[320]: Runtime Journal (/run/log/journal/7e9674e34d464ebf866d1273883c3b53) is 5.9M, max 47.8M, 41.8M free. Apr 20 17:24:50.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:50.775000 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 17:24:50.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:50.794170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 17:24:50.823174 kernel: audit: type=1130 audit(1776705890.774:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:50.828191 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 17:24:50.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:50.876530 kernel: audit: type=1130 audit(1776705890.837:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:50.894331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 20 17:24:50.928370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 17:24:50.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:50.998942 kernel: audit: type=1130 audit(1776705890.947:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:50.999278 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 20 17:24:51.008057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 17:24:51.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.040750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 17:24:51.054263 kernel: audit: type=1130 audit(1776705891.040:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.104438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 17:24:51.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.124154 kernel: audit: type=1130 audit(1776705891.103:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.209773 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 17:24:51.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.250983 kernel: audit: type=1130 audit(1776705891.221:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 20 17:24:51.250063 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 20 17:24:51.278164 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 20 17:24:51.289357 kernel: Bridge firewalling registered Apr 20 17:24:51.289650 systemd-modules-load[325]: Inserted module 'br_netfilter' Apr 20 17:24:51.307557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 17:24:51.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.361898 dracut-cmdline[353]: dracut-109 Apr 20 17:24:51.370445 kernel: audit: type=1130 audit(1776705891.316:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.335530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 20 17:24:51.386766 dracut-cmdline[353]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a Apr 20 17:24:51.435143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 17:24:51.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.479951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 20 17:24:51.472000 audit: BPF prog-id=5 op=LOAD Apr 20 17:24:51.527592 kernel: audit: type=1130 audit(1776705891.458:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:51.527623 kernel: audit: type=1334 audit(1776705891.472:11): prog-id=5 op=LOAD Apr 20 17:24:51.924641 systemd-resolved[377]: Positive Trust Anchors: Apr 20 17:24:51.924655 systemd-resolved[377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 17:24:51.924658 systemd-resolved[377]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 17:24:51.924692 systemd-resolved[377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 17:24:52.245926 systemd-resolved[377]: Defaulting to hostname 'linux'. Apr 20 17:24:52.279088 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 17:24:52.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:52.288768 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 17:24:52.768311 kernel: Loading iSCSI transport class v2.0-870. 
Apr 20 17:24:52.824178 kernel: iscsi: registered transport (tcp) Apr 20 17:24:52.991366 kernel: iscsi: registered transport (qla4xxx) Apr 20 17:24:52.996682 kernel: QLogic iSCSI HBA Driver Apr 20 17:24:53.284692 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 17:24:53.407141 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 17:24:53.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:53.434944 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 17:24:53.841196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 20 17:24:53.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:53.876597 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 20 17:24:53.892553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 20 17:24:54.086569 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 20 17:24:54.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:54.119000 audit: BPF prog-id=6 op=LOAD Apr 20 17:24:54.119000 audit: BPF prog-id=7 op=LOAD Apr 20 17:24:54.123567 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 17:24:54.371028 systemd-udevd[591]: Using default interface naming scheme 'v258'. 
Apr 20 17:24:54.511428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 17:24:54.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:54.562182 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 20 17:24:54.826600 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Apr 20 17:24:55.021698 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 17:24:55.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:55.055000 audit: BPF prog-id=8 op=LOAD Apr 20 17:24:55.063711 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 17:24:55.332340 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 17:24:55.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:55.346552 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 17:24:55.537963 systemd-networkd[724]: lo: Link UP Apr 20 17:24:55.538006 systemd-networkd[724]: lo: Gained carrier Apr 20 17:24:55.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:55.546433 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 17:24:55.581388 systemd[1]: Reached target network.target - Network. 
Apr 20 17:24:55.922128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 17:24:55.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:55.981128 kernel: kauditd_printk_skb: 11 callbacks suppressed Apr 20 17:24:55.981228 kernel: audit: type=1130 audit(1776705895.956:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:55.997699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 20 17:24:56.717485 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 20 17:24:56.757986 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 20 17:24:56.787356 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 20 17:24:56.818143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 17:24:56.842190 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 20 17:24:56.906887 kernel: cryptd: max_cpu_qlen set to 1000 Apr 20 17:24:56.997990 disk-uuid[779]: Primary Header is updated. Apr 20 17:24:56.997990 disk-uuid[779]: Secondary Entries is updated. Apr 20 17:24:56.997990 disk-uuid[779]: Secondary Header is updated. Apr 20 17:24:57.068923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 17:24:57.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:57.131028 kernel: audit: type=1131 audit(1776705897.080:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:57.069086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 17:24:57.082035 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 17:24:57.126442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 17:24:57.269220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 17:24:57.269607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 17:24:57.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:57.332415 kernel: audit: type=1130 audit(1776705897.308:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:57.366233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 17:24:57.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:57.405924 kernel: audit: type=1131 audit(1776705897.308:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:57.623491 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 20 17:24:57.641518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 17:24:57.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:57.684665 kernel: audit: type=1130 audit(1776705897.639:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:57.684701 kernel: AES CTR mode by8 optimization enabled Apr 20 17:24:57.907143 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 17:24:57.907153 systemd-networkd[724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 17:24:57.968567 systemd-networkd[724]: eth0: Link UP Apr 20 17:24:57.983654 systemd-networkd[724]: eth0: Gained carrier Apr 20 17:24:57.983676 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 17:24:58.075702 systemd-networkd[724]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 17:24:58.300456 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 20 17:24:58.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:58.367733 kernel: audit: type=1130 audit(1776705898.325:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:58.379438 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 17:24:58.393209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 17:24:58.423638 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 20 17:24:58.458552 disk-uuid[781]: Warning: The kernel is still using the old partition table. Apr 20 17:24:58.458552 disk-uuid[781]: The new table will be used at the next reboot or after you Apr 20 17:24:58.458552 disk-uuid[781]: run partprobe(8) or kpartx(8) Apr 20 17:24:58.458552 disk-uuid[781]: The operation has completed successfully. Apr 20 17:24:58.483032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 20 17:24:58.609600 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 20 17:24:58.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:58.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:58.612092 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 20 17:24:58.726308 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 20 17:24:58.742571 kernel: audit: type=1130 audit(1776705898.631:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:58.742607 kernel: audit: type=1131 audit(1776705898.635:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:58.768966 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 20 17:24:58.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:58.816075 kernel: audit: type=1130 audit(1776705898.781:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:59.084446 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (901) Apr 20 17:24:59.091359 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 17:24:59.101970 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 17:24:59.147149 kernel: BTRFS info (device vda6): turning on async discard Apr 20 17:24:59.147938 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 17:24:59.206782 systemd-networkd[724]: eth0: Gained IPv6LL Apr 20 17:24:59.304501 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 17:24:59.362549 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 20 17:24:59.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:24:59.456414 kernel: audit: type=1130 audit(1776705899.384:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:24:59.500152 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 20 17:25:01.241443 ignition[920]: Ignition 2.24.0 Apr 20 17:25:01.241550 ignition[920]: Stage: fetch-offline Apr 20 17:25:01.244747 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 20 17:25:01.244766 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 17:25:01.246605 ignition[920]: parsed url from cmdline: "" Apr 20 17:25:01.246610 ignition[920]: no config URL provided Apr 20 17:25:01.247153 ignition[920]: reading system config file "/usr/lib/ignition/user.ign" Apr 20 17:25:01.247174 ignition[920]: no config at "/usr/lib/ignition/user.ign" Apr 20 17:25:01.247357 ignition[920]: op(1): [started] loading QEMU firmware config module Apr 20 17:25:01.247362 ignition[920]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 20 17:25:01.392545 ignition[920]: op(1): [finished] loading QEMU firmware config module Apr 20 17:25:01.392615 ignition[920]: QEMU firmware config was not found. Ignoring... Apr 20 17:25:01.584276 ignition[920]: parsing config with SHA512: 8fe7e17c322faf5dec0a82aac9df5a5bfd34dece0348db88b66998b6afd03f44ac1646eae557e7b611ca31e847122b0c75f8a888a4c2464965972869e1a9c61a Apr 20 17:25:01.856757 unknown[920]: fetched base config from "system" Apr 20 17:25:01.856772 unknown[920]: fetched user config from "qemu" Apr 20 17:25:01.874088 ignition[920]: fetch-offline: fetch-offline passed Apr 20 17:25:01.875772 ignition[920]: Ignition finished successfully Apr 20 17:25:01.903424 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 20 17:25:01.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:01.929532 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 20 17:25:01.939660 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 20 17:25:01.943773 kernel: audit: type=1130 audit(1776705901.916:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:02.776060 ignition[931]: Ignition 2.24.0 Apr 20 17:25:02.776099 ignition[931]: Stage: kargs Apr 20 17:25:02.776616 ignition[931]: no configs at "/usr/lib/ignition/base.d" Apr 20 17:25:02.776625 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 17:25:02.798583 ignition[931]: kargs: kargs passed Apr 20 17:25:02.851089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 20 17:25:02.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:02.798883 ignition[931]: Ignition finished successfully Apr 20 17:25:02.925986 kernel: audit: type=1130 audit(1776705902.885:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:02.985703 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 20 17:25:03.409474 ignition[939]: Ignition 2.24.0 Apr 20 17:25:03.409527 ignition[939]: Stage: disks Apr 20 17:25:03.409713 ignition[939]: no configs at "/usr/lib/ignition/base.d" Apr 20 17:25:03.409721 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 17:25:03.592683 ignition[939]: disks: disks passed Apr 20 17:25:03.593219 ignition[939]: Ignition finished successfully Apr 20 17:25:03.610715 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 20 17:25:03.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:03.739768 kernel: audit: type=1130 audit(1776705903.700:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:03.839070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 20 17:25:03.877385 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 20 17:25:03.907122 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 17:25:03.971555 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 17:25:03.995787 systemd[1]: Reached target basic.target - Basic System. Apr 20 17:25:04.114220 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 20 17:25:04.548646 systemd-fsck[949]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 20 17:25:04.661201 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 20 17:25:04.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:25:04.734161 kernel: audit: type=1130 audit(1776705904.685:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:04.790440 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 20 17:25:07.777604 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none. Apr 20 17:25:08.439579 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 20 17:25:08.753505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 20 17:25:09.968648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 17:25:10.113161 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 20 17:25:10.174103 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 20 17:25:10.193144 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 20 17:25:10.213915 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 17:25:10.497679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 20 17:25:10.708709 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (958) Apr 20 17:25:10.795768 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 17:25:10.806029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 17:25:10.807454 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 20 17:25:10.919045 kernel: BTRFS info (device vda6): turning on async discard Apr 20 17:25:10.919194 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 17:25:11.113321 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 20 17:25:33.100998 kernel: loop1: detected capacity change from 0 to 43472 Apr 20 17:25:33.218635 kernel: loop1: p1 p2 p3 Apr 20 17:25:34.002990 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:25:34.003789 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:25:34.044210 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:25:34.089531 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:25:34.089555 systemd-confext[1049]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 17:25:34.186980 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:25:38.801571 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 20 17:25:40.313430 kernel: loop2: detected capacity change from 0 to 43472 Apr 20 17:25:40.348298 kernel: loop2: p1 p2 p3 Apr 20 17:25:40.901652 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:25:40.901937 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:25:40.903378 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:25:40.909189 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:25:40.919228 (sd-merge)[1063]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 17:25:40.953140 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:25:43.893374 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. 
Apr 20 17:25:44.065782 (sd-merge)[1063]: Using extensions '00-flatcar-default.raw'. Apr 20 17:25:44.417276 (sd-merge)[1063]: Merged extensions into '/sysroot/etc'. Apr 20 17:25:45.117218 initrd-setup-root[1070]: /etc 00-flatcar-default Mon 2026-04-20 17:24:51 UTC Apr 20 17:25:45.320001 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 20 17:25:45.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:45.446456 kernel: audit: type=1130 audit(1776705945.391:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:45.951181 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 20 17:25:46.069332 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 20 17:25:46.507266 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 17:25:46.516596 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 20 17:25:46.709116 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 20 17:25:46.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:46.764237 kernel: audit: type=1130 audit(1776705946.720:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:25:49.424162 ignition[1080]: INFO : Ignition 2.24.0 Apr 20 17:25:49.459322 ignition[1080]: INFO : Stage: mount Apr 20 17:25:49.468412 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 17:25:49.491481 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 17:25:50.702131 ignition[1080]: INFO : mount: mount passed Apr 20 17:25:50.718680 ignition[1080]: INFO : Ignition finished successfully Apr 20 17:25:50.834748 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 20 17:25:50.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:50.876391 kernel: audit: type=1130 audit(1776705950.857:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:25:51.194564 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 20 17:25:52.990740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 17:25:53.993136 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1092) Apr 20 17:25:53.994216 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 17:25:54.008907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 17:25:54.046241 kernel: BTRFS info (device vda6): turning on async discard Apr 20 17:25:54.047352 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 17:25:54.976360 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 20 17:26:05.941387 ignition[1109]: INFO : Ignition 2.24.0 Apr 20 17:26:05.996482 ignition[1109]: INFO : Stage: files Apr 20 17:26:06.226076 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 17:26:06.245158 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 17:26:07.393101 ignition[1109]: DEBUG : files: compiled without relabeling support, skipping Apr 20 17:26:07.605177 ignition[1109]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 20 17:26:07.626211 ignition[1109]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 20 17:26:08.001556 ignition[1109]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 20 17:26:08.059156 ignition[1109]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 20 17:26:08.263362 unknown[1109]: wrote ssh authorized keys file for user: core Apr 20 17:26:08.293585 ignition[1109]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 20 17:26:08.450901 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 17:26:08.549782 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 20 17:26:18.150763 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz": http: server closed idle connection Apr 20 17:26:18.362145 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #2 Apr 20 17:26:19.376303 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 20 17:26:24.183228 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 17:26:24.183228 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 20 17:26:24.248098 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 20 17:26:24.248098 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 20 17:26:24.248098 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 20 17:26:24.248098 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 17:26:24.365716 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 20 17:26:26.212249 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 20 17:26:48.382465 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 20 17:26:48.382465 ignition[1109]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 20 17:26:48.398206 ignition[1109]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 20 17:26:48.406518 ignition[1109]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 20 
17:26:48.801715 ignition[1109]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 17:26:48.927093 ignition[1109]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 17:26:48.948202 ignition[1109]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 20 17:26:48.948202 ignition[1109]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 20 17:26:48.948202 ignition[1109]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 20 17:26:48.996328 ignition[1109]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 20 17:26:48.996328 ignition[1109]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 20 17:26:48.996328 ignition[1109]: INFO : files: files passed Apr 20 17:26:48.996328 ignition[1109]: INFO : Ignition finished successfully Apr 20 17:26:49.037184 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 20 17:26:49.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:49.052634 kernel: audit: type=1130 audit(1776706009.043:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:49.068438 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 20 17:26:49.078586 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 20 17:26:49.104978 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 20 17:26:49.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:49.105267 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 20 17:26:49.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:49.137579 kernel: audit: type=1130 audit(1776706009.119:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:49.137637 kernel: audit: type=1131 audit(1776706009.120:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 17:26:49.167600 initrd-setup-root-after-ignition[1141]: grep: /sysroot/oem/oem-release: No such file or directory Apr 20 17:26:49.175616 initrd-setup-root-after-ignition[1143]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 17:26:49.175616 initrd-setup-root-after-ignition[1143]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 20 17:26:49.188632 initrd-setup-root-after-ignition[1147]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 17:26:49.227017 kernel: loop3: detected capacity change from 0 to 43472 Apr 20 17:26:49.234931 kernel: loop3: p1 p2 p3 Apr 20 17:26:49.297958 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:49.298032 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:26:49.298086 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:26:49.299534 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:26:49.301217 systemd-confext[1149]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument Apr 20 17:26:49.328328 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:49.699364 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. 
Apr 20 17:26:49.786945 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 17:26:49.793909 kernel: loop4: p1 p2 p3 Apr 20 17:26:50.421951 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:50.422249 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:26:50.422339 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:26:50.426100 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:26:50.438169 (sd-merge)[1161]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Apr 20 17:26:50.506511 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:51.093202 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 20 17:26:51.133636 (sd-merge)[1161]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 17:26:51.208601 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Apr 20 17:26:51.252608 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 17:26:51.272374 kernel: loop4: p1 p2 p3 Apr 20 17:26:51.721900 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:51.722233 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:26:51.722251 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:26:51.727941 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:26:51.729185 systemd-sysext[1169]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 17:26:51.788990 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:52.558903 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 17:26:52.834066 kernel: loop5: detected capacity change from 0 to 217752 Apr 20 17:26:53.246907 kernel: loop6: detected capacity change from 0 to 378016 Apr 20 17:26:53.276249 kernel: loop6: p1 p2 p3 Apr 20 17:26:53.470401 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:53.470665 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:26:53.483609 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:26:53.485180 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:26:53.488782 systemd-sysext[1169]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument Apr 20 17:26:53.532137 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:54.352423 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 20 17:26:54.608305 kernel: loop7: detected capacity change from 0 to 178200 Apr 20 17:26:54.620339 kernel: loop7: p1 p2 p3 Apr 20 17:26:54.762736 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:54.763080 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:26:54.763095 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:26:54.765886 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:26:54.771931 (sd-merge)[1187]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 17:26:54.813994 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:55.264533 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 17:26:55.306140 kernel: loop1: detected capacity change from 0 to 217752 Apr 20 17:26:55.475188 kernel: loop3: detected capacity change from 0 to 378016 Apr 20 17:26:55.513864 kernel: loop3: p1 p2 p3 Apr 20 17:26:55.762634 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:55.764038 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 17:26:55.764102 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Apr 20 17:26:55.764920 kernel: device-mapper: ioctl: error adding target to table Apr 20 17:26:55.767025 (sd-merge)[1187]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:3) failed: Invalid argument Apr 20 17:26:55.797865 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 17:26:56.608940 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Apr 20 17:26:56.613112 (sd-merge)[1187]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.35.1-x86-64.raw'. Apr 20 17:26:56.626132 (sd-merge)[1187]: Merged extensions into '/sysroot/usr'. Apr 20 17:26:56.675414 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 17:26:56.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:56.696997 kernel: audit: type=1130 audit(1776706016.682:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 17:26:56.684320 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 20 17:26:56.736786 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Apr 20 17:26:56.955564 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 20 17:26:56.955780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 20 17:26:56.975046 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies.
Apr 20 17:26:56.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:56.976649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 20 17:26:56.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:56.998786 kernel: audit: type=1130 audit(1776706016.973:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:56.998857 kernel: audit: type=1131 audit(1776706016.974:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:57.006026 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 20 17:26:57.032147 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 20 17:26:57.151851 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 20 17:26:57.783612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 17:26:57.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:57.804080 kernel: audit: type=1130 audit(1776706017.794:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:57.812586 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 20 17:26:58.087948 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 20 17:26:58.119519 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 17:26:58.155207 systemd[1]: Stopped target timers.target - Timer Units.
Apr 20 17:26:58.183360 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 20 17:26:58.188630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 17:26:58.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.205096 kernel: audit: type=1131 audit(1776706018.188:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.189343 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 20 17:26:58.207526 systemd[1]: Stopped target basic.target - Basic System.
Apr 20 17:26:58.209469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 20 17:26:58.227029 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 17:26:58.265569 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 20 17:26:58.273552 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 17:26:58.278923 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 20 17:26:58.290152 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 17:26:58.305382 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 20 17:26:58.305783 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 20 17:26:58.317832 systemd[1]: Stopped target swap.target - Swaps.
Apr 20 17:26:58.325174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 20 17:26:58.330940 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 17:26:58.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.358353 kernel: audit: type=1131 audit(1776706018.346:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.347687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 20 17:26:58.366177 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 17:26:58.386088 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 20 17:26:58.387009 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 17:26:58.398985 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 20 17:26:58.399333 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 20 17:26:58.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.416903 kernel: audit: type=1131 audit(1776706018.401:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.402982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 20 17:26:58.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.434996 kernel: audit: type=1131 audit(1776706018.419:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.405378 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 17:26:58.420444 systemd[1]: Stopped target paths.target - Path Units.
Apr 20 17:26:58.476012 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 20 17:26:58.478486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 17:26:58.491967 systemd[1]: Stopped target slices.target - Slice Units.
Apr 20 17:26:58.496668 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 20 17:26:58.504391 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 20 17:26:58.517927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 17:26:58.533533 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 20 17:26:58.533721 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 17:26:58.541554 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Apr 20 17:26:58.542530 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 17:26:58.553388 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 20 17:26:58.553635 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 17:26:58.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.581697 systemd[1]: initrd-setup-root-after-ignition.service: Consumed 1.640s CPU time.
Apr 20 17:26:58.584477 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 20 17:26:58.585348 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 20 17:26:58.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.610057 kernel: audit: type=1131 audit(1776706018.577:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.595607 systemd[1]: ignition-files.service: Consumed 42.973s CPU time.
Apr 20 17:26:58.614245 kernel: audit: type=1131 audit(1776706018.594:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.619200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 20 17:26:58.631029 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 20 17:26:58.639390 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 20 17:26:58.644899 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 17:26:58.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.652425 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 20 17:26:58.666692 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 17:26:58.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.683600 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 20 17:26:58.698578 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 17:26:58.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.761266 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 20 17:26:58.784687 ignition[1216]: INFO : Ignition 2.24.0
Apr 20 17:26:58.784687 ignition[1216]: INFO : Stage: umount
Apr 20 17:26:58.784687 ignition[1216]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 17:26:58.784687 ignition[1216]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 17:26:58.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.788570 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 20 17:26:58.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:58.898585 ignition[1216]: INFO : umount: umount passed
Apr 20 17:26:58.898585 ignition[1216]: INFO : Ignition finished successfully
Apr 20 17:26:58.789212 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 20 17:26:58.794617 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 20 17:26:58.798263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 20 17:26:58.813896 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 20 17:26:58.814247 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 20 17:26:59.033467 systemd[1]: Stopped target network.target - Network.
Apr 20 17:26:59.050506 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 20 17:26:59.053963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 20 17:26:59.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.069103 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 20 17:26:59.069319 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 20 17:26:59.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.102957 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 20 17:26:59.104430 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 20 17:26:59.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.145464 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 20 17:26:59.145883 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 20 17:26:59.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.200968 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 20 17:26:59.210219 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 20 17:26:59.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.227630 systemd[1]: initrd-setup-root.service: Consumed 11.199s CPU time.
Apr 20 17:26:59.248477 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 20 17:26:59.249243 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 20 17:26:59.307586 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 20 17:26:59.319142 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 20 17:26:59.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.436037 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 20 17:26:59.436336 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 20 17:26:59.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.473000 audit: BPF prog-id=5 op=UNLOAD
Apr 20 17:26:59.486000 audit: BPF prog-id=8 op=UNLOAD
Apr 20 17:26:59.490576 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 20 17:26:59.505404 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 20 17:26:59.511896 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 17:26:59.536513 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 20 17:26:59.560326 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 20 17:26:59.570384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 17:26:59.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.597548 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 20 17:26:59.602605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 20 17:26:59.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.622138 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 20 17:26:59.622442 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 20 17:26:59.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.692477 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 17:26:59.769840 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 20 17:26:59.774065 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 17:26:59.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.797845 systemd[1]: systemd-udevd.service: Consumed 10.122s CPU time.
Apr 20 17:26:59.818576 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 20 17:26:59.838286 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 20 17:26:59.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.895411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 20 17:26:59.929939 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 20 17:26:59.962651 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 20 17:26:59.966497 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 17:26:59.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.976991 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 20 17:26:59.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:26:59.977148 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 20 17:27:00.000154 systemd[1]: dracut-cmdline.service: Consumed 1.194s CPU time.
Apr 20 17:27:00.006414 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 20 17:27:00.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.008850 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 17:27:00.072376 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 20 17:27:00.086542 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 20 17:27:00.087088 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 17:27:00.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.185157 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 20 17:27:00.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.185297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 17:27:00.204577 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 20 17:27:00.204640 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 17:27:00.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.243466 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 20 17:27:00.244561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 17:27:00.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.279178 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 17:27:00.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.279954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 17:27:00.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:00.282211 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 20 17:27:00.282484 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 20 17:27:00.302135 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 20 17:27:00.355542 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 20 17:27:00.513704 systemd[1]: Switching root.
Apr 20 17:27:01.034639 systemd-journald[320]: Received SIGTERM from PID 1 (systemd).
Apr 20 17:27:01.076589 systemd-journald[320]: Journal stopped
Apr 20 17:27:25.988744 kernel: SELinux: policy capability network_peer_controls=1
Apr 20 17:27:25.989412 kernel: SELinux: policy capability open_perms=1
Apr 20 17:27:25.989481 kernel: SELinux: policy capability extended_socket_class=1
Apr 20 17:27:25.989493 kernel: SELinux: policy capability always_check_network=0
Apr 20 17:27:25.989502 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 20 17:27:25.989517 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 20 17:27:25.989525 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 20 17:27:25.989538 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 20 17:27:25.989547 kernel: SELinux: policy capability userspace_initial_context=0
Apr 20 17:27:25.989584 kernel: kauditd_printk_skb: 32 callbacks suppressed
Apr 20 17:27:25.989594 kernel: audit: type=1403 audit(1776706022.049:85): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 20 17:27:25.989608 systemd[1]: Successfully loaded SELinux policy in 614.017ms.
Apr 20 17:27:25.989621 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 292.615ms.
Apr 20 17:27:25.989631 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 20 17:27:25.989642 systemd[1]: Detected virtualization kvm.
Apr 20 17:27:25.989651 systemd[1]: Detected architecture x86-64.
Apr 20 17:27:25.989659 systemd[1]: Detected first boot.
Apr 20 17:27:25.989669 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 17:27:25.989678 kernel: audit: type=1334 audit(1776706024.103:86): prog-id=9 op=LOAD
Apr 20 17:27:25.989687 kernel: audit: type=1334 audit(1776706024.107:87): prog-id=9 op=UNLOAD
Apr 20 17:27:25.989695 zram_generator::config[1263]: No configuration found.
Apr 20 17:27:25.989751 kernel: Guest personality initialized and is inactive
Apr 20 17:27:25.989760 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 20 17:27:25.989767 kernel: Initialized host personality
Apr 20 17:27:25.989777 kernel: NET: Registered PF_VSOCK protocol family
Apr 20 17:27:25.989850 systemd-ssh-generator[1259]: Failed to query local AF_VSOCK CID: Cannot assign requested address
Apr 20 17:27:25.989863 (sd-exec-[1244]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1.
Apr 20 17:27:25.989875 systemd[1]: Applying preset policy.
Apr 20 17:27:25.989887 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'.
Apr 20 17:27:25.989897 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'.
Apr 20 17:27:25.989906 systemd[1]: Populated /etc with preset unit settings.
Apr 20 17:27:25.989914 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 20 17:27:25.989922 kernel: audit: type=1334 audit(1776706042.716:88): prog-id=10 op=LOAD
Apr 20 17:27:25.989941 kernel: audit: type=1334 audit(1776706042.716:89): prog-id=2 op=UNLOAD
Apr 20 17:27:25.989952 kernel: audit: type=1334 audit(1776706042.718:90): prog-id=11 op=LOAD
Apr 20 17:27:25.989960 kernel: audit: type=1334 audit(1776706042.719:91): prog-id=12 op=LOAD
Apr 20 17:27:25.989968 kernel: audit: type=1334 audit(1776706042.719:92): prog-id=3 op=UNLOAD
Apr 20 17:27:25.989975 kernel: audit: type=1334 audit(1776706042.719:93): prog-id=4 op=UNLOAD
Apr 20 17:27:25.989983 kernel: audit: type=1334 audit(1776706042.722:94): prog-id=13 op=LOAD
Apr 20 17:27:25.989991 kernel: audit: type=1334 audit(1776706042.724:95): prog-id=10 op=UNLOAD
Apr 20 17:27:25.990000 kernel: audit: type=1334 audit(1776706042.725:96): prog-id=14 op=LOAD
Apr 20 17:27:25.990009 kernel: audit: type=1334 audit(1776706042.725:97): prog-id=15 op=LOAD
Apr 20 17:27:25.990017 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 20 17:27:25.990026 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 20 17:27:25.990045 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 20 17:27:25.990054 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 20 17:27:25.990064 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 20 17:27:25.990073 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 20 17:27:25.990082 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 20 17:27:25.990092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 20 17:27:25.990103 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 20 17:27:25.990111 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 20 17:27:25.990120 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 20 17:27:25.990130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 17:27:25.990139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 17:27:25.990151 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 20 17:27:25.990159 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 20 17:27:25.990168 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 20 17:27:25.990177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 17:27:25.990186 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 20 17:27:25.990196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 17:27:25.990216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 17:27:25.990224 systemd[1]: Reached target imports.target - Image Downloads.
Apr 20 17:27:25.990232 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 20 17:27:25.990240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 20 17:27:25.990248 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 20 17:27:25.990257 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 20 17:27:25.990267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 17:27:25.990276 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 17:27:25.990285 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes.
Apr 20 17:27:25.990297 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 20 17:27:25.990305 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 17:27:25.990314 systemd[1]: Reached target swap.target - Swaps.
Apr 20 17:27:25.990322 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 20 17:27:25.990330 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 20 17:27:25.990341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 20 17:27:25.990350 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 20 17:27:25.990358 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management.
Apr 20 17:27:25.990370 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 17:27:25.990384 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 20 17:27:25.990397 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket.
Apr 20 17:27:25.990409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 17:27:25.990423 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 20 17:27:25.990435 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 20 17:27:25.990448 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket.
Apr 20 17:27:25.990462 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket.
Apr 20 17:27:25.990475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 17:27:25.990488 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket.
Apr 20 17:27:25.990503 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 20 17:27:25.990516 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 20 17:27:25.990529 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 20 17:27:25.990543 systemd[1]: Mounting media.mount - External Media Directory...
Apr 20 17:27:25.990557 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 17:27:25.990570 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 20 17:27:25.990583 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 20 17:27:25.990599 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing.
Apr 20 17:27:25.990612 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 20 17:27:25.990626 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 20 17:27:25.990640 systemd[1]: Reached target machines.target - Virtual Machines and Containers.
Apr 20 17:27:25.990655 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 20 17:27:25.990670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 20 17:27:25.990685 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 17:27:25.990697 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 20 17:27:25.990712 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod).
Apr 20 17:27:25.990725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 20 17:27:25.990740 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore).
Apr 20 17:27:25.990754 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 20 17:27:25.990767 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop).
Apr 20 17:27:25.990780 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 20 17:27:25.991353 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 20 17:27:25.991500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 20 17:27:25.991542 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 20 17:27:25.991559 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 20 17:27:25.991570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 20 17:27:25.991579 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 17:27:25.991588 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 17:27:25.991598 kernel: fuse: init (API version 7.41)
Apr 20 17:27:25.991608 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 17:27:25.991617 kernel: ACPI: bus type drm_connector registered
Apr 20 17:27:25.991629 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 20 17:27:25.991642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 20 17:27:25.991650 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 20 17:27:25.991660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 17:27:25.991669 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 17:27:25.991678 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 20 17:27:25.991686 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 20 17:27:25.991782 systemd-journald[1327]: Collecting audit messages is enabled.
Apr 20 17:27:25.991927 systemd[1]: Mounted media.mount - External Media Directory.
Apr 20 17:27:25.991941 systemd-journald[1327]: Journal started
Apr 20 17:27:25.991973 systemd-journald[1327]: Runtime Journal (/run/log/journal/7e9674e34d464ebf866d1273883c3b53) is 5.9M, max 47.8M, 41.8M free.
Apr 20 17:27:24.473000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 20 17:27:25.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:25.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:25.717000 audit: BPF prog-id=18 op=UNLOAD
Apr 20 17:27:25.717000 audit: BPF prog-id=17 op=UNLOAD
Apr 20 17:27:25.720000 audit: BPF prog-id=19 op=LOAD
Apr 20 17:27:25.722000 audit: BPF prog-id=20 op=LOAD
Apr 20 17:27:25.723000 audit: BPF prog-id=21 op=LOAD
Apr 20 17:27:25.986000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 20 17:27:25.986000 audit[1327]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffde8306c70 a2=4000 a3=0 items=0 ppid=1 pid=1327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 17:27:25.986000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 20 17:27:22.672971 systemd[1]: Queued start job for default target multi-user.target.
Apr 20 17:27:22.736224 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 20 17:27:22.740116 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 20 17:27:22.743268 systemd[1]: systemd-journald.service: Consumed 2.579s CPU time.
Apr 20 17:27:26.014663 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 17:27:26.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.087382 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 20 17:27:26.091021 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 20 17:27:26.100032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 20 17:27:26.108349 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 20 17:27:26.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.118163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 17:27:26.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.151703 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 20 17:27:26.154292 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 20 17:27:26.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.168635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 20 17:27:26.171893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 20 17:27:26.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.183216 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 20 17:27:26.189027 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 20 17:27:26.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.229695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 17:27:26.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.247497 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 17:27:26.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.309089 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 20 17:27:26.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.320248 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 20 17:27:26.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.449979 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 17:27:26.460698 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 20 17:27:26.515100 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 20 17:27:26.524170 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 20 17:27:26.532229 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 20 17:27:26.589212 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 17:27:26.598073 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 20 17:27:26.611110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 20 17:27:26.643721 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/...
Apr 20 17:27:26.662352 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 20 17:27:26.679083 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 20 17:27:26.684082 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 20 17:27:26.688653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 20 17:27:26.712067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 20 17:27:26.729526 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 20 17:27:26.745048 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials...
Apr 20 17:27:26.753462 systemd-journald[1327]: Time spent on flushing to /var/log/journal/7e9674e34d464ebf866d1273883c3b53 is 212.960ms for 1294 entries.
Apr 20 17:27:26.753462 systemd-journald[1327]: System Journal (/var/log/journal/7e9674e34d464ebf866d1273883c3b53) is 8M, max 163.5M, 155.5M free.
Apr 20 17:27:26.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.763786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 17:27:27.027849 systemd-journald[1327]: Received client request to flush runtime journal.
Apr 20 17:27:26.779309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 20 17:27:27.028053 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 17:27:26.784731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 20 17:27:27.036330 kernel: loop4: p1 p2 p3
Apr 20 17:27:26.810319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 20 17:27:27.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:26.882743 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 20 17:27:26.911697 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 20 17:27:26.970647 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials.
Apr 20 17:27:27.027575 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 20 17:27:27.149660 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 20 17:27:27.155297 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Apr 20 17:27:27.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:27.155311 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Apr 20 17:27:27.157524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 20 17:27:27.170344 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:27.170782 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 17:27:27.176947 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 17:27:27.175139 systemd-confext[1381]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 17:27:27.177430 kernel: device-mapper: ioctl: error adding target to table
Apr 20 17:27:27.186682 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 20 17:27:27.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:27.199634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 17:27:27.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:27.301389 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:27.370295 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 20 17:27:27.844937 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 20 17:27:27.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:27.856533 kernel: kauditd_printk_skb: 43 callbacks suppressed
Apr 20 17:27:27.856653 kernel: audit: type=1130 audit(1776706047.853:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:27.863000 audit: BPF prog-id=22 op=LOAD
Apr 20 17:27:27.865000 audit: BPF prog-id=23 op=LOAD
Apr 20 17:27:27.867714 kernel: audit: type=1334 audit(1776706047.863:140): prog-id=22 op=LOAD
Apr 20 17:27:27.865000 audit: BPF prog-id=24 op=LOAD
Apr 20 17:27:27.870488 kernel: audit: type=1334 audit(1776706047.865:141): prog-id=23 op=LOAD
Apr 20 17:27:27.870556 kernel: audit: type=1334 audit(1776706047.865:142): prog-id=24 op=LOAD
Apr 20 17:27:27.875466 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Apr 20 17:27:27.883000 audit: BPF prog-id=25 op=LOAD
Apr 20 17:27:27.888422 kernel: audit: type=1334 audit(1776706047.883:143): prog-id=25 op=LOAD
Apr 20 17:27:27.890090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 20 17:27:27.908000 audit: BPF prog-id=26 op=LOAD
Apr 20 17:27:27.912602 kernel: audit: type=1334 audit(1776706047.908:144): prog-id=26 op=LOAD
Apr 20 17:27:27.917157 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 20 17:27:27.974289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 20 17:27:27.986530 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun...
Apr 20 17:27:27.992000 audit: BPF prog-id=27 op=LOAD
Apr 20 17:27:27.992000 audit: BPF prog-id=28 op=LOAD
Apr 20 17:27:27.998599 kernel: audit: type=1334 audit(1776706047.992:145): prog-id=27 op=LOAD
Apr 20 17:27:27.998621 kernel: audit: type=1334 audit(1776706047.992:146): prog-id=28 op=LOAD
Apr 20 17:27:27.992000 audit: BPF prog-id=29 op=LOAD
Apr 20 17:27:27.999580 kernel: audit: type=1334 audit(1776706047.992:147): prog-id=29 op=LOAD
Apr 20 17:27:28.004546 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 20 17:27:28.031208 kernel: tun: Universal TUN/TAP device driver, 1.6
Apr 20 17:27:28.033077 systemd[1]: modprobe@tun.service: Deactivated successfully.
Apr 20 17:27:28.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.057000 audit: BPF prog-id=30 op=LOAD
Apr 20 17:27:28.058000 audit: BPF prog-id=31 op=LOAD
Apr 20 17:27:28.058000 audit: BPF prog-id=32 op=LOAD
Apr 20 17:27:28.066435 kernel: audit: type=1130 audit(1776706048.046:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.033770 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun.
Apr 20 17:27:28.069461 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Apr 20 17:27:28.076573 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Apr 20 17:27:28.076586 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Apr 20 17:27:28.107610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 17:27:28.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.283473 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 20 17:27:28.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.337620 systemd-nsresourced[1414]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Apr 20 17:27:28.350682 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Apr 20 17:27:28.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.622958 systemd-oomd[1406]: No swap; memory pressure usage will be degraded
Apr 20 17:27:28.698168 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Apr 20 17:27:28.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.712769 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 20 17:27:28.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.722186 systemd[1]: Reached target time-set.target - System Time Set.
Apr 20 17:27:28.772476 systemd-resolved[1407]: Positive Trust Anchors:
Apr 20 17:27:28.772536 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 20 17:27:28.772540 systemd-resolved[1407]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 20 17:27:28.772571 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 20 17:27:28.808466 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Apr 20 17:27:28.824422 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 20 17:27:28.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:28.829447 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 20 17:27:36.413675 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 20 17:27:36.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:36.422432 kernel: kauditd_printk_skb: 10 callbacks suppressed
Apr 20 17:27:36.424570 kernel: audit: type=1130 audit(1776706056.419:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:36.451000 audit: BPF prog-id=7 op=UNLOAD
Apr 20 17:27:36.451000 audit: BPF prog-id=6 op=UNLOAD
Apr 20 17:27:36.456883 kernel: audit: type=1334 audit(1776706056.451:160): prog-id=7 op=UNLOAD
Apr 20 17:27:36.454000 audit: BPF prog-id=33 op=LOAD
Apr 20 17:27:36.456944 kernel: audit: type=1334 audit(1776706056.451:161): prog-id=6 op=UNLOAD
Apr 20 17:27:36.456954 kernel: audit: type=1334 audit(1776706056.454:162): prog-id=33 op=LOAD
Apr 20 17:27:36.454000 audit: BPF prog-id=34 op=LOAD
Apr 20 17:27:36.459547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 17:27:36.463763 kernel: audit: type=1334 audit(1776706056.454:163): prog-id=34 op=LOAD
Apr 20 17:27:37.118427 systemd-udevd[1436]: Using default interface naming scheme 'v258'.
Apr 20 17:27:39.549225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 17:27:39.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:39.611597 kernel: audit: type=1130 audit(1776706059.601:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:39.613000 audit: BPF prog-id=35 op=LOAD
Apr 20 17:27:39.615834 kernel: audit: type=1334 audit(1776706059.613:165): prog-id=35 op=LOAD
Apr 20 17:27:39.616219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 20 17:27:39.968198 systemd-networkd[1438]: lo: Link UP
Apr 20 17:27:39.969038 systemd-networkd[1438]: lo: Gained carrier
Apr 20 17:27:39.972663 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 20 17:27:39.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:39.983663 systemd[1]: Reached target network.target - Network.
Apr 20 17:27:39.992345 kernel: audit: type=1130 audit(1776706059.977:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:40.008739 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 20 17:27:40.025763 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 20 17:27:40.158270 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 20 17:27:40.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:40.172362 kernel: audit: type=1130 audit(1776706060.165:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:40.209016 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 20 17:27:40.420781 systemd-networkd[1438]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 17:27:40.420912 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 20 17:27:40.426317 systemd-networkd[1438]: eth0: Link UP
Apr 20 17:27:40.429954 systemd-networkd[1438]: eth0: Gained carrier
Apr 20 17:27:40.430013 systemd-networkd[1438]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 17:27:40.455378 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 20 17:27:40.464630 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Apr 20 17:27:41.890623 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 20 17:27:41.892116 systemd-timesyncd[1408]: Initial clock synchronization to Mon 2026-04-20 17:27:41.884982 UTC.
Apr 20 17:27:41.896076 systemd-resolved[1407]: Clock change detected. Flushing caches.
Apr 20 17:27:41.986912 kernel: mousedev: PS/2 mouse device common for all mice
Apr 20 17:27:42.048096 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 20 17:27:42.068984 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 20 17:27:42.186064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 20 17:27:42.188290 kernel: ACPI: button: Power Button [PWRF]
Apr 20 17:27:42.267244 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 20 17:27:42.276322 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 20 17:27:42.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:42.324790 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 20 17:27:42.326246 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 20 17:27:42.330362 kernel: audit: type=1130 audit(1776706062.285:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.012996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 17:27:43.141818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 17:27:43.147278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 17:27:43.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.165238 kernel: audit: type=1130 audit(1776706063.154:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.178082 kernel: audit: type=1131 audit(1776706063.154:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.182945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 17:27:43.358202 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 20 17:27:43.527472 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 17:27:43.539116 kernel: loop4: p1 p2 p3
Apr 20 17:27:43.592082 systemd-networkd[1438]: eth0: Gained IPv6LL
Apr 20 17:27:43.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.613318 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 20 17:27:43.645232 kernel: audit: type=1130 audit(1776706063.614:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.625662 systemd[1]: Reached target network-online.target - Network is Online.
Apr 20 17:27:43.706510 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:43.712510 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 17:27:43.712505 (sd-merge)[1502]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 17:27:43.713103 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 17:27:43.713124 kernel: device-mapper: ioctl: error adding target to table
Apr 20 17:27:43.732136 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:43.837047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 17:27:43.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:43.859948 kernel: audit: type=1130 audit(1776706063.850:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:44.131974 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 20 17:27:44.155092 (sd-merge)[1502]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 17:27:44.175020 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 17:27:44.183244 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/.
Apr 20 17:27:44.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:44.203051 kernel: audit: type=1130 audit(1776706064.190:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:44.237669 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 20 17:27:44.377029 kernel: loop4: detected capacity change from 0 to 217752
Apr 20 17:27:44.530209 kernel: loop4: detected capacity change from 0 to 178200
Apr 20 17:27:44.537773 kernel: loop4: p1 p2 p3
Apr 20 17:27:44.595475 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:44.595579 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 17:27:44.595600 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 17:27:44.602299 kernel: device-mapper: ioctl: error adding target to table
Apr 20 17:27:44.605331 systemd-sysext[1514]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 17:27:44.622166 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:44.691301 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 17:27:44.907896 kernel: loop4: detected capacity change from 0 to 378016
Apr 20 17:27:44.926172 kernel: loop4: p1 p2 p3
Apr 20 17:27:45.102170 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:45.103234 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 17:27:45.109880 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 17:27:45.107995 systemd-sysext[1514]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 17:27:45.110042 kernel: device-mapper: ioctl: error adding target to table
Apr 20 17:27:45.127045 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:45.639394 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 17:27:45.890308 kernel: loop4: detected capacity change from 0 to 217752
Apr 20 17:27:45.964242 kernel: loop5: detected capacity change from 0 to 178200
Apr 20 17:27:45.968675 kernel: loop5: p1 p2 p3
Apr 20 17:27:46.101001 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:46.108315 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 17:27:46.104996 (sd-merge)[1534]: device-mapper: reload ioctl on loop5p1-verity (253:4) failed: Invalid argument
Apr 20 17:27:46.112974 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 17:27:46.113052 kernel: device-mapper: ioctl: error adding target to table
Apr 20 17:27:46.116711 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:46.392238 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 17:27:46.434926 kernel: loop6: detected capacity change from 0 to 378016
Apr 20 17:27:46.449454 kernel: loop6: p1 p2 p3
Apr 20 17:27:46.614183 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:46.617936 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 17:27:46.617532 (sd-merge)[1534]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument
Apr 20 17:27:46.618027 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 17:27:46.618113 kernel: device-mapper: ioctl: error adding target to table
Apr 20 17:27:46.624246 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 17:27:47.023685 kernel: erofs: (device dm-5): mounted with root inode @ nid 39.
Apr 20 17:27:47.054121 (sd-merge)[1534]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 17:27:47.089159 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 20 17:27:47.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:47.118359 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 17:27:47.118561 kernel: audit: type=1130 audit(1776706067.096:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:47.118580 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 17:27:48.220693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 17:27:48.938734 systemd-tmpfiles[1551]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 17:27:48.986392 systemd-tmpfiles[1551]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 17:27:48.989249 systemd-tmpfiles[1551]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 17:27:48.998105 systemd-tmpfiles[1551]: ACLs are not supported, ignoring.
Apr 20 17:27:48.998223 systemd-tmpfiles[1551]: ACLs are not supported, ignoring.
Apr 20 17:27:49.091141 systemd-tmpfiles[1551]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 17:27:49.095382 systemd-tmpfiles[1551]: Skipping /boot
Apr 20 17:27:49.783198 systemd-tmpfiles[1551]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 17:27:49.785313 systemd-tmpfiles[1551]: Skipping /boot
Apr 20 17:27:50.165619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 17:27:50.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.188230 kernel: audit: type=1130 audit(1776706070.179:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.401818 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 20 17:27:50.437981 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 20 17:27:50.459589 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 20 17:27:50.478286 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 20 17:27:50.512393 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 20 17:27:50.650000 audit[1567]: AUDIT1127 pid=1567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.679150 kernel: audit: type=1127 audit(1776706070.650:176): pid=1567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.825937 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 20 17:27:50.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.859211 kernel: audit: type=1130 audit(1776706070.836:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.926688 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 20 17:27:50.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:50.989686 kernel: audit: type=1130 audit(1776706070.930:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:51.013465 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 20 17:27:51.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:51.029959 kernel: audit: type=1130 audit(1776706071.017:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 17:27:51.035557 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 20 17:27:51.037000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 17:27:51.037000 audit[1584]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd985b950 a2=420 a3=0 items=0 ppid=1557 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 17:27:51.037000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 17:27:51.070681 kernel: audit: type=1305 audit(1776706071.037:180): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 17:27:51.070856 augenrules[1584]: No rules
Apr 20 17:27:51.071166 kernel: audit: type=1300 audit(1776706071.037:180): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd985b950 a2=420 a3=0 items=0 ppid=1557 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 17:27:51.071239 kernel: audit: type=1327 audit(1776706071.037:180): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 17:27:51.074393 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 20 17:27:51.081798 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 20 17:27:52.923003 ldconfig[1559]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 20 17:27:52.945332 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 20 17:27:52.966961 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 20 17:27:53.181297 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 20 17:27:53.198254 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 17:27:53.215322 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 20 17:27:53.254211 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 20 17:27:53.258821 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 20 17:27:53.270921 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 20 17:27:53.288303 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 20 17:27:53.328105 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 20 17:27:53.412309 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 20 17:27:53.445319 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 20 17:27:53.454798 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 20 17:27:53.456509 systemd[1]: Reached target paths.target - Path Units.
Apr 20 17:27:53.465353 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 17:27:53.481318 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 20 17:27:53.499177 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 20 17:27:53.515078 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 20 17:27:53.563563 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 20 17:27:53.572042 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 20 17:27:53.609367 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket.
Apr 20 17:27:53.630678 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket.
Apr 20 17:27:53.695365 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 20 17:27:53.707652 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 17:27:53.712914 systemd[1]: Reached target basic.target - Basic System.
Apr 20 17:27:53.725303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 20 17:27:53.728300 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 20 17:27:53.744170 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 20 17:27:53.756903 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 20 17:27:53.772366 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 20 17:27:53.789588 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 20 17:27:53.803066 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 20 17:27:53.820223 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 20 17:27:53.823002 jq[1599]: false
Apr 20 17:27:53.825298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 20 17:27:53.833793 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 20 17:27:53.920517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 17:27:53.932268 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 20 17:27:53.936010 extend-filesystems[1600]: Found /dev/vda6
Apr 20 17:27:53.939457 extend-filesystems[1600]: Found /dev/vda9
Apr 20 17:27:53.949384 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing passwd entry cache
Apr 20 17:27:53.936743 oslogin_cache_refresh[1601]: Refreshing passwd entry cache
Apr 20 17:27:53.949958 extend-filesystems[1600]: Checking size of /dev/vda9
Apr 20 17:27:53.949067 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 20 17:27:53.961312 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting users, quitting
Apr 20 17:27:53.961312 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 17:27:53.961312 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing group entry cache
Apr 20 17:27:53.960293 oslogin_cache_refresh[1601]: Failure getting users, quitting
Apr 20 17:27:53.960319 oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 17:27:53.961229 oslogin_cache_refresh[1601]: Refreshing group entry cache
Apr 20 17:27:53.965661 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 20 17:27:53.968031 extend-filesystems[1600]: Resized partition /dev/vda9
Apr 20 17:27:53.981976 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting groups, quitting
Apr 20 17:27:53.981976 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 17:27:53.979137 oslogin_cache_refresh[1601]: Failure getting groups, quitting
Apr 20 17:27:53.979204 oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 17:27:53.985558 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025)
Apr 20 17:27:53.986070 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 20 17:27:54.003468 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 20 17:27:54.003552 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 20 17:27:54.035745 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 20 17:27:54.042216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 20 17:27:54.057857 systemd[1]: Starting update-engine.service - Update Engine...
Apr 20 17:27:54.070050 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 20 17:27:54.076353 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 20 17:27:54.098310 extend-filesystems[1616]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 20 17:27:54.098310 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 20 17:27:54.098310 extend-filesystems[1616]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 20 17:27:54.102902 extend-filesystems[1600]: Resized filesystem in /dev/vda9
Apr 20 17:27:54.102594 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 20 17:27:54.103625 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 20 17:27:54.103902 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 20 17:27:54.104150 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 20 17:27:54.107029 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 20 17:27:54.124082 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 20 17:27:54.132133 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 20 17:27:54.139756 systemd[1]: motdgen.service: Deactivated successfully.
Apr 20 17:27:54.140054 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 20 17:27:54.145220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 20 17:27:54.145666 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 20 17:27:54.151571 jq[1635]: true
Apr 20 17:27:54.171520 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 20 17:27:54.196616 update_engine[1630]: I20260420 17:27:54.196548 1630 main.cc:92] Flatcar Update Engine starting
Apr 20 17:27:54.211776 jq[1656]: true
Apr 20 17:27:54.215882 systemd-logind[1622]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 20 17:27:54.215904 systemd-logind[1622]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 20 17:27:54.218464 systemd-logind[1622]: New seat seat0.
Apr 20 17:27:54.219302 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 20 17:27:54.224228 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 20 17:27:54.224603 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 20 17:27:54.264455 tar[1651]: linux-amd64/LICENSE
Apr 20 17:27:54.277292 tar[1651]: linux-amd64/helm
Apr 20 17:27:54.329182 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 20 17:27:54.426739 sshd_keygen[1667]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 20 17:27:54.434672 dbus-daemon[1597]: [system] SELinux support is enabled
Apr 20 17:27:54.443992 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 20 17:27:54.447886 update_engine[1630]: I20260420 17:27:54.445308 1630 update_check_scheduler.cc:74] Next update check in 5m36s
Apr 20 17:27:54.455738 bash[1705]: Updated "/home/core/.ssh/authorized_keys"
Apr 20 17:27:54.497658 dbus-daemon[1597]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 20 17:27:54.502661 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 20 17:27:54.516986 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 20 17:27:54.532175 systemd[1]: Started update-engine.service - Update Engine.
Apr 20 17:27:54.541738 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 20 17:27:54.552993 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 20 17:27:54.556897 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 20 17:27:54.557077 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 20 17:27:54.562125 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 20 17:27:54.562294 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 20 17:27:54.604205 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 20 17:27:54.612039 systemd[1]: issuegen.service: Deactivated successfully.
Apr 20 17:27:54.614724 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 20 17:27:54.636023 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 20 17:27:54.695854 locksmithd[1718]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 20 17:27:54.701246 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 20 17:27:54.736039 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 20 17:27:54.815095 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 20 17:27:54.824484 systemd[1]: Reached target getty.target - Login Prompts.
Apr 20 17:27:54.826768 containerd[1658]: time="2026-04-20T17:27:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 20 17:27:54.828452 containerd[1658]: time="2026-04-20T17:27:54.828379687Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1
Apr 20 17:27:54.849232 containerd[1658]: time="2026-04-20T17:27:54.848898092Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="59.26µs"
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850296547Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850464377Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850478622Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850833194Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850856645Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850878569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850965871Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.850984450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.851230826Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.851249735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.851263397Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 17:27:54.851460 containerd[1658]: time="2026-04-20T17:27:54.851273033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 20 17:27:54.852242 containerd[1658]: time="2026-04-20T17:27:54.852218176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 20 17:27:54.852440 containerd[1658]: time="2026-04-20T17:27:54.852379897Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 20 17:27:54.854701 containerd[1658]: time="2026-04-20T17:27:54.854298884Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 17:27:54.858344 containerd[1658]: time="2026-04-20T17:27:54.857210438Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 17:27:54.858344 containerd[1658]: time="2026-04-20T17:27:54.857235101Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 20 17:27:54.859471 containerd[1658]: time="2026-04-20T17:27:54.859030346Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 20 17:27:54.864739 containerd[1658]: time="2026-04-20T17:27:54.864474320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 20 17:27:54.866239 containerd[1658]: time="2026-04-20T17:27:54.864886137Z" level=info msg="metadata content store policy set" policy=shared
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.872937372Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873012518Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873083119Z" level=info msg="built-in NRI default validator is disabled"
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873089214Z" level=info msg="runtime interface created"
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873093275Z" level=info msg="created NRI interface"
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873100428Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873279459Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873292095Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873301677Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 20 17:27:54.873286 containerd[1658]: time="2026-04-20T17:27:54.873309780Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1
Apr 20 17:27:54.874551 containerd[1658]: time="2026-04-20T17:27:54.873631726Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 20 17:27:54.874654 containerd[1658]: time="2026-04-20T17:27:54.874569239Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 20 17:27:54.874654 containerd[1658]: time="2026-04-20T17:27:54.874588419Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 20 17:27:54.874654 containerd[1658]: time="2026-04-20T17:27:54.874604575Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 20 17:27:54.874654 containerd[1658]: time="2026-04-20T17:27:54.874617903Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 20 17:27:54.874654 containerd[1658]: time="2026-04-20T17:27:54.874631641Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 20 17:27:54.874654 containerd[1658]: time="2026-04-20T17:27:54.874644743Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 20 17:27:54.874782 containerd[1658]: time="2026-04-20T17:27:54.874655980Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 20 17:27:54.874782 containerd[1658]: time="2026-04-20T17:27:54.874671831Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875720117Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875772216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875790207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875803164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875814949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875831462Z"
level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875845595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875862973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875880324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875892828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875907903Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.875918453Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.876742400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.876899812Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 20 17:27:54.876816 containerd[1658]: time="2026-04-20T17:27:54.876915097Z" level=info msg="Start snapshots syncer" Apr 20 17:27:54.878261 containerd[1658]: time="2026-04-20T17:27:54.877321988Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 20 17:27:54.878261 containerd[1658]: time="2026-04-20T17:27:54.877652774Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 20 17:27:54.878463 containerd[1658]: time="2026-04-20T17:27:54.877698666Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 20 17:27:54.878508 containerd[1658]: 
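The runc settings embedded in the CRI config dump above (`defaultRuntimeName: runc`, `runtimeType: io.containerd.runc.v2`, `SystemdCgroup: true`) correspond to a fragment of containerd's `config.toml`. A hedged sketch follows; the table names are the containerd 2.x conventions suggested by the `io.containerd.cri.v1.runtime` plugin id in this log, and should be verified against the running version:

```toml
# Illustrative /etc/containerd/config.toml fragment matching the dumped
# CRI runtime settings; exact table names differ between containerd 1.x
# ("io.containerd.grpc.v1.cri") and 2.x layouts.
[plugins.'io.containerd.cri.v1.runtime'.containerd]
  default_runtime_name = "runc"
  [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
      SystemdCgroup = true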
time="2026-04-20T17:27:54.878488525Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878697138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878715137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878723936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878732809Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878744313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878752341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878760924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878769377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878776641Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878827359Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 17:27:54.884014 containerd[1658]: 
time="2026-04-20T17:27:54.878840309Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878847312Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878853614Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 17:27:54.884014 containerd[1658]: time="2026-04-20T17:27:54.878859342Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 20 17:27:54.884861 containerd[1658]: time="2026-04-20T17:27:54.878866203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 20 17:27:54.884861 containerd[1658]: time="2026-04-20T17:27:54.878874185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 20 17:27:54.884861 containerd[1658]: time="2026-04-20T17:27:54.878882929Z" level=info msg="Connect containerd service" Apr 20 17:27:54.884861 containerd[1658]: time="2026-04-20T17:27:54.878899022Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 20 17:27:54.884861 containerd[1658]: time="2026-04-20T17:27:54.884497740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 17:27:55.008946 tar[1651]: linux-amd64/README.md Apr 20 17:27:55.126774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
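The `failed to load cni during init` error at the end of this block means `/etc/cni/net.d` contains no network configuration yet; the CRI plugin tolerates this at startup and picks up a config once a CNI add-on installs one. A minimal illustrative conflist is sketched below; the network name, bridge name, and subnet are placeholders, not values from this system:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into `/etc/cni/net.d/` as e.g. `10-example.conflist`, the cni network conf syncer that starts later in this log should detect it without a containerd restart.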
Apr 20 17:27:55.165982 systemd[1]: Started sshd@0-1-10.0.0.107:22-10.0.0.1:40224.service - OpenSSH per-connection server daemon (10.0.0.1:40224). Apr 20 17:27:55.204128 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 20 17:27:55.235041 containerd[1658]: time="2026-04-20T17:27:55.233695188Z" level=info msg="Start subscribing containerd event" Apr 20 17:27:55.235041 containerd[1658]: time="2026-04-20T17:27:55.233761148Z" level=info msg="Start recovering state" Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239293590Z" level=info msg="Start event monitor" Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239356638Z" level=info msg="Start cni network conf syncer for default" Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239368395Z" level=info msg="Start streaming server" Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239379951Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239389106Z" level=info msg="runtime interface starting up..." Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239395934Z" level=info msg="starting plugins..." Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.239447023Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.240002195Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.240042478Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 17:27:55.240495 containerd[1658]: time="2026-04-20T17:27:55.240097501Z" level=info msg="containerd successfully booted in 0.415385s" Apr 20 17:27:55.257094 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 20 17:27:55.497834 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 40224 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:27:55.499456 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:27:55.544739 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 20 17:27:55.554026 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 20 17:27:55.565278 systemd-logind[1622]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 20 17:27:55.635741 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 20 17:27:55.661077 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 20 17:27:55.738177 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:27:55.793824 systemd-logind[1622]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 20 17:27:55.923087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 17:27:55.932609 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 20 17:27:55.955643 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 17:27:56.596142 systemd[1758]: Queued start job for default target default.target. Apr 20 17:27:56.611481 systemd[1758]: Created slice app.slice - User Application Slice. Apr 20 17:27:56.611628 systemd[1758]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 17:27:56.611642 systemd[1758]: Reached target machines.target - Virtual Machines and Containers. Apr 20 17:27:56.612351 systemd[1758]: Reached target paths.target - Paths. Apr 20 17:27:56.612464 systemd[1758]: Reached target timers.target - Timers. 
Apr 20 17:27:56.618225 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 17:27:56.620522 systemd[1758]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 17:27:56.622587 systemd[1758]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 17:27:56.660922 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 17:27:56.661173 systemd[1758]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 20 17:27:56.661269 systemd[1758]: Reached target sockets.target - Sockets. Apr 20 17:27:56.661332 systemd[1758]: Reached target basic.target - Basic System. Apr 20 17:27:56.661355 systemd[1758]: Reached target default.target - Main User Target. Apr 20 17:27:56.661374 systemd[1758]: Startup finished in 850ms. Apr 20 17:27:56.663989 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 17:27:56.714855 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 17:27:56.729130 systemd[1]: Startup finished in 15.341s (kernel) + 2min 16.886s (initrd) + 53.830s (userspace) = 3min 26.058s. Apr 20 17:27:56.880928 systemd[1]: Started sshd@1-4097-10.0.0.107:22-10.0.0.1:35218.service - OpenSSH per-connection server daemon (10.0.0.1:35218). Apr 20 17:27:57.013303 kubelet[1770]: E0420 17:27:56.984153 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 17:27:57.028188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 17:27:57.028300 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 17:27:57.029139 systemd[1]: kubelet.service: Consumed 1.259s CPU time, 256.4M memory peak. 
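The kubelet failure above is the expected pre-bootstrap state: `/var/lib/kubelet/config.yaml` is normally written by `kubeadm init` or `kubeadm join`, so until one of those runs the kubelet exits with status 1 and systemd keeps rescheduling it. The failing precondition reduces to a plain file-existence check, sketched here against a throwaway directory rather than the real path so the example is side-effect free:

```shell
# Sketch of the failing precondition: kubelet aborts when its config
# file cannot be read. We probe a scratch directory instead of the
# real /var/lib/kubelet to keep the example side-effect free.
dir=$(mktemp -d)
cfg="$dir/config.yaml"
if [ -f "$cfg" ]; then
  echo "config present"
else
  echo "open $cfg: no such file or directory"
fi
rm -rf "$dir"
```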
Apr 20 17:27:57.161440 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 35218 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:27:57.165094 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:27:57.194033 systemd-logind[1622]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 17:27:57.212033 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 20 17:27:57.287234 sshd[1792]: Connection closed by 10.0.0.1 port 35218 Apr 20 17:27:57.288272 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Apr 20 17:27:57.339865 systemd[1]: sshd@1-4097-10.0.0.107:22-10.0.0.1:35218.service: Deactivated successfully. Apr 20 17:27:57.349180 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 17:27:57.353931 systemd-logind[1622]: Session 3 logged out. Waiting for processes to exit. Apr 20 17:27:57.429885 systemd[1]: Started sshd@2-4098-10.0.0.107:22-10.0.0.1:35234.service - OpenSSH per-connection server daemon (10.0.0.1:35234). Apr 20 17:27:57.433029 systemd-logind[1622]: Removed session 3. Apr 20 17:27:58.341392 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 35234 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:27:58.359997 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:27:58.854262 systemd-logind[1622]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 17:27:58.900164 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 20 17:27:59.217635 sshd[1803]: Connection closed by 10.0.0.1 port 35234 Apr 20 17:27:59.218371 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Apr 20 17:27:59.343319 systemd[1]: sshd@2-4098-10.0.0.107:22-10.0.0.1:35234.service: Deactivated successfully. Apr 20 17:27:59.360281 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 20 17:27:59.368736 systemd-logind[1622]: Session 4 logged out. Waiting for processes to exit. Apr 20 17:27:59.398774 systemd[1]: Started sshd@3-2-10.0.0.107:22-10.0.0.1:35250.service - OpenSSH per-connection server daemon (10.0.0.1:35250). Apr 20 17:27:59.400691 systemd-logind[1622]: Removed session 4. Apr 20 17:28:00.358754 sshd[1809]: Accepted publickey for core from 10.0.0.1 port 35250 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:28:00.366333 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:28:00.627153 systemd-logind[1622]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 17:28:00.691912 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 17:28:00.951063 sshd[1813]: Connection closed by 10.0.0.1 port 35250 Apr 20 17:28:00.951739 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Apr 20 17:28:01.004293 systemd[1]: sshd@3-2-10.0.0.107:22-10.0.0.1:35250.service: Deactivated successfully. Apr 20 17:28:01.097906 systemd[1]: session-5.scope: Deactivated successfully. Apr 20 17:28:01.143329 systemd-logind[1622]: Session 5 logged out. Waiting for processes to exit. Apr 20 17:28:01.203196 systemd[1]: Started sshd@4-8193-10.0.0.107:22-10.0.0.1:35258.service - OpenSSH per-connection server daemon (10.0.0.1:35258). Apr 20 17:28:01.208130 systemd-logind[1622]: Removed session 5. Apr 20 17:28:01.778541 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 35258 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:28:01.781060 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:28:01.906516 systemd-logind[1622]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 17:28:01.936168 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 20 17:28:02.058351 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 17:28:02.058918 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 17:28:04.037354 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 20 17:28:04.074175 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 17:28:06.423932 dockerd[1846]: time="2026-04-20T17:28:06.423134505Z" level=info msg="Starting up" Apr 20 17:28:06.438993 dockerd[1846]: time="2026-04-20T17:28:06.438918390Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 17:28:06.585233 dockerd[1846]: time="2026-04-20T17:28:06.581522176Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 17:28:06.812370 dockerd[1846]: time="2026-04-20T17:28:06.806675485Z" level=info msg="Loading containers: start." Apr 20 17:28:06.906174 kernel: Initializing XFRM netlink socket Apr 20 17:28:07.168000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 17:28:07.180569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 17:28:08.021085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
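The `Referenced but unset environment variable` notices for docker.service (and earlier for kubelet.service) are informational: the unit's command line references variables such as `DOCKER_OPTS` that no environment file defines, so they expand to empty strings. If values were actually needed, the usual mechanism is a systemd drop-in; the path and value below are illustrative, not taken from this host:

```ini
# /etc/systemd/system/docker.service.d/10-opts.conf  (illustrative)
[Service]
Environment="DOCKER_OPTS=--log-level=warn"
```

followed by `systemctl daemon-reload && systemctl restart docker` to apply it.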
Apr 20 17:28:08.117369 (kubelet)[1941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 17:28:08.244888 kubelet[1941]: E0420 17:28:08.244157 1941 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 17:28:08.262337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 17:28:08.262541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 17:28:08.263205 systemd[1]: kubelet.service: Consumed 675ms CPU time, 110.9M memory peak. Apr 20 17:28:09.374879 systemd-networkd[1438]: docker0: Link UP Apr 20 17:28:09.384664 dockerd[1846]: time="2026-04-20T17:28:09.384263454Z" level=info msg="Loading containers: done." 
Apr 20 17:28:09.530032 dockerd[1846]: time="2026-04-20T17:28:09.528028399Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 17:28:09.530032 dockerd[1846]: time="2026-04-20T17:28:09.528158503Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 17:28:09.551673 dockerd[1846]: time="2026-04-20T17:28:09.530344074Z" level=info msg="Initializing buildkit" Apr 20 17:28:09.565571 dockerd[1846]: time="2026-04-20T17:28:09.563534634Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 17:28:09.565571 dockerd[1846]: time="2026-04-20T17:28:09.563572972Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 17:28:09.575972 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck245667317-merged.mount: Deactivated successfully. Apr 20 17:28:09.881365 dockerd[1846]: time="2026-04-20T17:28:09.880215605Z" level=info msg="Completed buildkit initialization" Apr 20 17:28:09.908866 dockerd[1846]: time="2026-04-20T17:28:09.907565626Z" level=info msg="Daemon has completed initialization" Apr 20 17:28:09.908866 dockerd[1846]: time="2026-04-20T17:28:09.908098403Z" level=info msg="API listen on /run/docker.sock" Apr 20 17:28:09.912851 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 20 17:28:11.695108 containerd[1658]: time="2026-04-20T17:28:11.692721340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 20 17:28:13.913243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712657282.mount: Deactivated successfully. Apr 20 17:28:18.406237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
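The `Not using native diff for overlay2` warning fires when the kernel was built with `CONFIG_OVERLAY_FS_REDIRECT_DIR` enabled. Checking for that option amounts to scanning a kernel-config line; the sketch below parses a sample string, since on a live host you would read `/proc/config.gz` or `/boot/config-$(uname -r)` instead:

```shell
# Parse a kernel-config line for the overlayfs redirect_dir option
# docker warns about. The sample string stands in for a line read
# from /proc/config.gz or /boot/config-$(uname -r) on a real host.
sample='CONFIG_OVERLAY_FS_REDIRECT_DIR=y'
case "$sample" in
  *'OVERLAY_FS_REDIRECT_DIR=y'*) echo "redirect_dir enabled" ;;
  *)                             echo "redirect_dir disabled" ;;
esac
```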
Apr 20 17:28:18.423856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 17:28:19.091249 containerd[1658]: time="2026-04-20T17:28:19.088858496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:19.091249 containerd[1658]: time="2026-04-20T17:28:19.091447073Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27569248" Apr 20 17:28:19.112704 containerd[1658]: time="2026-04-20T17:28:19.094764871Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:19.112704 containerd[1658]: time="2026-04-20T17:28:19.111274606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:19.129957 containerd[1658]: time="2026-04-20T17:28:19.118961296Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 7.425614862s" Apr 20 17:28:19.129957 containerd[1658]: time="2026-04-20T17:28:19.119102815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 20 17:28:19.129957 containerd[1658]: time="2026-04-20T17:28:19.125308312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 20 17:28:19.652870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes 
Node Agent. Apr 20 17:28:19.880925 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 17:28:20.098161 kubelet[2148]: E0420 17:28:20.091487 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 17:28:20.111639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 17:28:20.121018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 17:28:20.135199 systemd[1]: kubelet.service: Consumed 985ms CPU time, 111M memory peak. Apr 20 17:28:23.403961 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2192496938 wd_nsec: 2192497011 Apr 20 17:28:27.158286 containerd[1658]: time="2026-04-20T17:28:27.157045724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:27.161057 containerd[1658]: time="2026-04-20T17:28:27.160207010Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21442870" Apr 20 17:28:27.163790 containerd[1658]: time="2026-04-20T17:28:27.163118448Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:27.178185 containerd[1658]: time="2026-04-20T17:28:27.176297153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:27.181842 
containerd[1658]: time="2026-04-20T17:28:27.180956075Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 8.055611733s" Apr 20 17:28:27.181842 containerd[1658]: time="2026-04-20T17:28:27.181050637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 20 17:28:27.181842 containerd[1658]: time="2026-04-20T17:28:27.181801726Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 20 17:28:30.188012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 20 17:28:30.206679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 17:28:32.127176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 17:28:32.636547 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 17:28:32.749883 containerd[1658]: time="2026-04-20T17:28:32.740087280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:32.749883 containerd[1658]: time="2026-04-20T17:28:32.745906091Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=1, bytes read=13348864" Apr 20 17:28:32.749883 containerd[1658]: time="2026-04-20T17:28:32.750475150Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:32.787046 containerd[1658]: time="2026-04-20T17:28:32.758344510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:28:32.787046 containerd[1658]: time="2026-04-20T17:28:32.766804126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 5.584970458s" Apr 20 17:28:32.787046 containerd[1658]: time="2026-04-20T17:28:32.767476919Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 20 17:28:32.787046 containerd[1658]: time="2026-04-20T17:28:32.773129265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 20 17:28:33.286354 
kubelet[2174]: E0420 17:28:33.285196 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 17:28:33.329680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 17:28:33.330389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 17:28:33.413383 systemd[1]: kubelet.service: Consumed 2.264s CPU time, 111.1M memory peak. Apr 20 17:28:39.705059 update_engine[1630]: I20260420 17:28:39.702793 1630 update_attempter.cc:509] Updating boot flags... Apr 20 17:28:43.598037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859025270.mount: Deactivated successfully. Apr 20 17:28:43.661869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 17:28:43.947530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 17:28:45.958832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 17:28:46.142188 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 17:28:46.359897 kubelet[2219]: E0420 17:28:46.357359 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 17:28:46.386858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 17:28:46.390903 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 17:28:46.420983 systemd[1]: kubelet.service: Consumed 1.409s CPU time, 109.8M memory peak.
Apr 20 17:28:46.894590 containerd[1658]: time="2026-04-20T17:28:46.891481122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:28:46.950656 containerd[1658]: time="2026-04-20T17:28:46.897933695Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=14179737"
Apr 20 17:28:46.950656 containerd[1658]: time="2026-04-20T17:28:46.910826360Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:28:46.950656 containerd[1658]: time="2026-04-20T17:28:46.949694423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:28:47.028906 containerd[1658]: time="2026-04-20T17:28:46.955050488Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 14.180833876s"
Apr 20 17:28:47.028906 containerd[1658]: time="2026-04-20T17:28:46.956011361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 20 17:28:47.028906 containerd[1658]: time="2026-04-20T17:28:46.960789370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 20 17:28:51.570397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062009472.mount: Deactivated successfully.
Apr 20 17:28:56.431716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 20 17:28:56.566243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 17:29:00.255327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 17:29:00.678375 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 17:29:01.116296 kubelet[2281]: E0420 17:29:01.110025 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 17:29:01.128627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 17:29:01.128846 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 17:29:01.153050 systemd[1]: kubelet.service: Consumed 2.616s CPU time, 111M memory peak.
Apr 20 17:29:06.128515 containerd[1658]: time="2026-04-20T17:29:06.126119585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:29:06.154295 containerd[1658]: time="2026-04-20T17:29:06.132479628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23545226"
Apr 20 17:29:06.154295 containerd[1658]: time="2026-04-20T17:29:06.141926335Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:29:06.179215 containerd[1658]: time="2026-04-20T17:29:06.161058275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:29:06.179215 containerd[1658]: time="2026-04-20T17:29:06.179348582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 19.218329937s"
Apr 20 17:29:06.179215 containerd[1658]: time="2026-04-20T17:29:06.180228247Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 20 17:29:06.220850 containerd[1658]: time="2026-04-20T17:29:06.183223586Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 20 17:29:08.987142 containerd[1658]: time="2026-04-20T17:29:08.985767017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 20 17:29:09.024238 containerd[1658]: time="2026-04-20T17:29:09.002530611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=2406"
Apr 20 17:29:09.035395 containerd[1658]: time="2026-04-20T17:29:09.035013095Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 20 17:29:09.078631 containerd[1658]: time="2026-04-20T17:29:09.077624733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 20 17:29:09.088144 containerd[1658]: time="2026-04-20T17:29:09.085047390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.901610915s"
Apr 20 17:29:09.088144 containerd[1658]: time="2026-04-20T17:29:09.087726864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 20 17:29:09.081107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529213477.mount: Deactivated successfully.
Apr 20 17:29:09.116168 containerd[1658]: time="2026-04-20T17:29:09.091019548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 20 17:29:11.200250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 20 17:29:11.921744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 17:29:13.491651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907591759.mount: Deactivated successfully.
Apr 20 17:29:14.756237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 17:29:14.917218 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 17:29:15.544870 kubelet[2325]: E0420 17:29:15.538207 2325 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 17:29:15.608247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 17:29:15.613693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 17:29:15.658050 systemd[1]: kubelet.service: Consumed 2.223s CPU time, 112.6M memory peak.
Apr 20 17:29:22.748137 containerd[1658]: time="2026-04-20T17:29:22.745666403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:29:22.748137 containerd[1658]: time="2026-04-20T17:29:22.747861851Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23634840"
Apr 20 17:29:22.785770 containerd[1658]: time="2026-04-20T17:29:22.749992317Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:29:22.785770 containerd[1658]: time="2026-04-20T17:29:22.758306734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 17:29:22.785770 containerd[1658]: time="2026-04-20T17:29:22.776044702Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 13.684790788s"
Apr 20 17:29:22.785770 containerd[1658]: time="2026-04-20T17:29:22.776894898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 20 17:29:25.669249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 20 17:29:25.813935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 17:29:27.226320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 17:29:27.305213 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 17:29:27.512515 kubelet[2415]: E0420 17:29:27.501130 2415 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 17:29:27.513833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 17:29:27.513937 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 17:29:27.521026 systemd[1]: kubelet.service: Consumed 1.084s CPU time, 110.8M memory peak.
Apr 20 17:29:28.796902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 17:29:28.797670 systemd[1]: kubelet.service: Consumed 1.084s CPU time, 110.8M memory peak.
Apr 20 17:29:28.901713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 17:29:29.236080 systemd[1]: Reload requested from client PID 2431 ('systemctl') (unit session-6.scope)...
Apr 20 17:29:29.275022 systemd[1]: Reloading...
Apr 20 17:29:30.728241 systemd-ssh-generator[2481]: Failed to query local AF_VSOCK CID: Cannot assign requested address
Apr 20 17:29:30.731379 zram_generator::config[2488]: No configuration found.
Apr 20 17:29:30.732735 (sd-exec-strv)[2462]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1.
Apr 20 17:29:40.656497 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 20 17:29:44.348481 systemd[1]: Reloading finished in 15053 ms.
Apr 20 17:29:44.838543 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 20 17:29:44.847891 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 20 17:29:44.868198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 17:29:44.870697 systemd[1]: kubelet.service: Consumed 584ms CPU time, 98.6M memory peak.
Apr 20 17:29:44.960858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 17:29:46.331293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 17:29:46.384750 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 20 17:29:46.746985 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 20 17:29:46.968990 kubelet[2532]: I0420 17:29:46.963627 2532 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 20 17:29:46.968990 kubelet[2532]: I0420 17:29:46.968156 2532 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 20 17:29:46.968990 kubelet[2532]: I0420 17:29:46.968254 2532 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 20 17:29:46.968990 kubelet[2532]: I0420 17:29:46.968261 2532 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 20 17:29:46.982842 kubelet[2532]: I0420 17:29:46.981040 2532 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 20 17:29:47.099369 kubelet[2532]: I0420 17:29:47.089363 2532 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 20 17:29:47.099369 kubelet[2532]: E0420 17:29:47.089825 2532 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 17:29:47.184872 kubelet[2532]: I0420 17:29:47.181792 2532 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 20 17:29:47.304314 kubelet[2532]: I0420 17:29:47.299963 2532 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 20 17:29:47.313376 kubelet[2532]: I0420 17:29:47.312253 2532 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 20 17:29:47.313376 kubelet[2532]: I0420 17:29:47.312676 2532 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 20 17:29:47.313376 kubelet[2532]: I0420 17:29:47.312953 2532 topology_manager.go:143] "Creating topology manager with none policy"
Apr 20 17:29:47.313376 kubelet[2532]: I0420 17:29:47.312964 2532 container_manager_linux.go:308] "Creating device plugin manager"
Apr 20 17:29:47.356972 kubelet[2532]: I0420 17:29:47.313983 2532 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 20 17:29:47.356972 kubelet[2532]: I0420 17:29:47.333683 2532 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 20 17:29:47.356972 kubelet[2532]: I0420 17:29:47.334097 2532 kubelet.go:482] "Attempting to sync node with API server"
Apr 20 17:29:47.356972 kubelet[2532]: I0420 17:29:47.334937 2532 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 20 17:29:47.356972 kubelet[2532]: I0420 17:29:47.335255 2532 kubelet.go:394] "Adding apiserver pod source"
Apr 20 17:29:47.356972 kubelet[2532]: I0420 17:29:47.335318 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 20 17:29:47.921051 kubelet[2532]: I0420 17:29:47.919255 2532 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1"
Apr 20 17:29:47.943512 kubelet[2532]: I0420 17:29:47.943355 2532 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 20 17:29:47.943512 kubelet[2532]: I0420 17:29:47.943532 2532 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 20 17:29:47.951097 kubelet[2532]: W0420 17:29:47.944030 2532 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 20 17:29:48.094307 kubelet[2532]: I0420 17:29:48.090774 2532 server.go:1257] "Started kubelet"
Apr 20 17:29:48.094307 kubelet[2532]: I0420 17:29:48.091028 2532 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 20 17:29:48.104008 kubelet[2532]: I0420 17:29:48.103361 2532 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 20 17:29:48.104008 kubelet[2532]: I0420 17:29:48.103572 2532 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 20 17:29:48.107731 kubelet[2532]: I0420 17:29:48.107655 2532 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 20 17:29:48.108137 kubelet[2532]: I0420 17:29:48.108031 2532 server.go:317] "Adding debug handlers to kubelet server"
Apr 20 17:29:48.108226 kubelet[2532]: I0420 17:29:48.108212 2532 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 20 17:29:48.115063 kubelet[2532]: I0420 17:29:48.114477 2532 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 20 17:29:48.115063 kubelet[2532]: I0420 17:29:48.115263 2532 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 20 17:29:48.124999 kubelet[2532]: E0420 17:29:48.115690 2532 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 17:29:48.125520 kubelet[2532]: E0420 17:29:48.125318 2532 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms"
Apr 20 17:29:48.128060 kubelet[2532]: I0420 17:29:48.127380 2532 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 20 17:29:48.128060 kubelet[2532]: I0420 17:29:48.127668 2532 reconciler.go:29] "Reconciler: start to sync state"
Apr 20 17:29:48.135456 kubelet[2532]: E0420 17:29:48.134760 2532 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 20 17:29:48.135456 kubelet[2532]: I0420 17:29:48.135173 2532 factory.go:223] Registration of the systemd container factory successfully
Apr 20 17:29:48.135456 kubelet[2532]: I0420 17:29:48.135272 2532 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 20 17:29:48.175585 kubelet[2532]: E0420 17:29:48.132831 2532 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a820dd5adb4917 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 17:29:48.081842455 +0000 UTC m=+1.674168927,LastTimestamp:2026-04-20 17:29:48.081842455 +0000 UTC m=+1.674168927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 17:29:48.175585 kubelet[2532]: I0420 17:29:48.143623 2532 factory.go:223] Registration of the containerd container factory successfully
Apr 20 17:29:48.216794 kubelet[2532]: E0420 17:29:48.216061 2532 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 17:29:48.293475 kubelet[2532]: I0420 17:29:48.292892 2532 cpu_manager.go:225] "Starting" policy="none"
Apr 20 17:29:48.293475 kubelet[2532]: I0420 17:29:48.292936 2532 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 20 17:29:48.293475 kubelet[2532]: I0420 17:29:48.292998 2532 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 20 17:29:48.308831 kubelet[2532]: I0420 17:29:48.304663 2532 policy_none.go:50] "Start"
Apr 20 17:29:48.308831 kubelet[2532]: I0420 17:29:48.304873 2532 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 20 17:29:48.308831 kubelet[2532]: I0420 17:29:48.304895 2532 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 20 17:29:48.311679 kubelet[2532]: I0420 17:29:48.311649 2532 policy_none.go:44] "Start"
Apr 20 17:29:48.316968 kubelet[2532]: E0420 17:29:48.316942 2532 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 17:29:48.329677 kubelet[2532]: E0420 17:29:48.328798 2532 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms"
Apr 20 17:29:48.348528 kubelet[2532]: I0420 17:29:48.348369 2532 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 20 17:29:48.350802 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 20 17:29:48.390553 kubelet[2532]: I0420 17:29:48.389957 2532 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 20 17:29:48.390553 kubelet[2532]: I0420 17:29:48.390048 2532 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 20 17:29:48.390553 kubelet[2532]: I0420 17:29:48.390144 2532 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 20 17:29:48.390553 kubelet[2532]: E0420 17:29:48.390207 2532 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 17:29:48.418372 kubelet[2532]: E0420 17:29:48.417889 2532 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 17:29:48.436751 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 20 17:29:48.470011 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 20 17:29:48.495338 kubelet[2532]: E0420 17:29:48.491963 2532 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 17:29:48.504779 kubelet[2532]: E0420 17:29:48.503641 2532 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 20 17:29:48.504779 kubelet[2532]: I0420 17:29:48.504198 2532 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 20 17:29:48.504779 kubelet[2532]: I0420 17:29:48.504216 2532 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 20 17:29:48.504779 kubelet[2532]: I0420 17:29:48.505034 2532 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 20 17:29:48.523180 kubelet[2532]: E0420 17:29:48.517572 2532 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 20 17:29:48.523180 kubelet[2532]: E0420 17:29:48.518953 2532 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 17:29:48.627992 kubelet[2532]: I0420 17:29:48.622314 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 17:29:48.643806 kubelet[2532]: E0420 17:29:48.636835 2532 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost"
Apr 20 17:29:48.737967 kubelet[2532]: E0420 17:29:48.731789 2532 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms"
Apr 20 17:29:48.737967 kubelet[2532]: I0420 17:29:48.733832 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1e996f53cc1b3aaf0fbbbafa68de43e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1e996f53cc1b3aaf0fbbbafa68de43e\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 17:29:48.737967 kubelet[2532]: I0420 17:29:48.733927 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1e996f53cc1b3aaf0fbbbafa68de43e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1e996f53cc1b3aaf0fbbbafa68de43e\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 17:29:48.737967 kubelet[2532]: I0420 17:29:48.735009 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1e996f53cc1b3aaf0fbbbafa68de43e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1e996f53cc1b3aaf0fbbbafa68de43e\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 17:29:48.849328 kubelet[2532]: I0420 17:29:48.848985 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 20 17:29:48.858849 kubelet[2532]: E0420 17:29:48.849835 2532 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost"
Apr 20 17:29:48.939676 kubelet[2532]: I0420 17:29:48.939299 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 17:29:48.939676 kubelet[2532]: I0420 17:29:48.939351 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 17:29:48.939676 kubelet[2532]: I0420 17:29:48.939582 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 17:29:48.939676 kubelet[2532]: I0420 17:29:48.939654 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 17:29:48.939676 kubelet[2532]: I0420 17:29:48.939712 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 17:29:48.950288 kubelet[2532]: I0420 17:29:48.939789 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 20 17:29:48.960239 systemd[1]: Created slice kubepods-burstable-pode1e996f53cc1b3aaf0fbbbafa68de43e.slice - libcontainer container kubepods-burstable-pode1e996f53cc1b3aaf0fbbbafa68de43e.slice.
Apr 20 17:29:48.992500 kubelet[2532]: E0420 17:29:48.988041 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 17:29:48.999488 kubelet[2532]: E0420 17:29:48.999191 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:29:49.004286 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 20 17:29:49.017948 containerd[1658]: time="2026-04-20T17:29:49.016722161Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"e1e996f53cc1b3aaf0fbbbafa68de43e\" namespace:\"kube-system\""
Apr 20 17:29:49.071547 kubelet[2532]: E0420 17:29:49.069804 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 17:29:49.085795 kubelet[2532]: E0420 17:29:49.085271 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:29:49.101034 containerd[1658]: time="2026-04-20T17:29:49.099611284Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"14bc29ec35edba17af38052ec24275f2\" namespace:\"kube-system\""
Apr 20 17:29:49.104596 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 20 17:29:49.118584 kubelet[2532]: E0420 17:29:49.117632 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:49.131330 kubelet[2532]: E0420 17:29:49.126023 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:49.132198 containerd[1658]: time="2026-04-20T17:29:49.128969477Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f7c88b30fc803a3ec6b6c138191bdaca\" namespace:\"kube-system\"" Apr 20 17:29:49.245305 kubelet[2532]: E0420 17:29:49.243085 2532 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 17:29:49.333107 kubelet[2532]: I0420 17:29:49.332378 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 17:29:49.350282 kubelet[2532]: E0420 17:29:49.335699 2532 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Apr 20 17:29:49.563914 containerd[1658]: time="2026-04-20T17:29:49.539573205Z" level=info msg="connecting to shim 3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" namespace=k8s.io protocol=ttrpc version=3 Apr 20 17:29:49.564969 kubelet[2532]: E0420 17:29:49.562376 2532 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s" Apr 20 17:29:49.583783 containerd[1658]: time="2026-04-20T17:29:49.582843010Z" level=info msg="connecting to shim 9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765" address="unix:///run/containerd/s/5fefef3395097c564f2b3c2ee56bbf3d896c51718d1875278e73097e16ddfbb2" namespace=k8s.io protocol=ttrpc version=3 Apr 20 17:29:49.652653 containerd[1658]: time="2026-04-20T17:29:49.643998175Z" level=info msg="connecting to shim 1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" namespace=k8s.io protocol=ttrpc version=3 Apr 20 17:29:50.307202 kubelet[2532]: I0420 17:29:50.298520 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 17:29:50.307202 kubelet[2532]: E0420 17:29:50.305875 2532 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Apr 20 17:29:50.436172 systemd[1]: Started cri-containerd-1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2.scope - libcontainer container 1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2. Apr 20 17:29:50.485670 systemd[1]: Started cri-containerd-9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765.scope - libcontainer container 9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765. Apr 20 17:29:50.664853 systemd[1]: Started cri-containerd-3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11.scope - libcontainer container 3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11. 
Apr 20 17:29:50.733654 kubelet[2532]: E0420 17:29:50.729082 2532 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a820dd5adb4917 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 17:29:48.081842455 +0000 UTC m=+1.674168927,LastTimestamp:2026-04-20 17:29:48.081842455 +0000 UTC m=+1.674168927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 17:29:51.198950 kubelet[2532]: E0420 17:29:51.193527 2532 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="3.2s" Apr 20 17:29:51.413651 containerd[1658]: time="2026-04-20T17:29:51.412931336Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"f7c88b30fc803a3ec6b6c138191bdaca\" namespace:\"kube-system\" returns sandbox id \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\"" Apr 20 17:29:51.418360 kubelet[2532]: E0420 17:29:51.416931 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:51.433229 containerd[1658]: time="2026-04-20T17:29:51.432394973Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"14bc29ec35edba17af38052ec24275f2\" namespace:\"kube-system\" returns sandbox id 
\"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\"" Apr 20 17:29:51.438100 containerd[1658]: time="2026-04-20T17:29:51.437360408Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\"" Apr 20 17:29:51.438174 kubelet[2532]: E0420 17:29:51.435332 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:51.458597 containerd[1658]: time="2026-04-20T17:29:51.451786967Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\"" Apr 20 17:29:51.458597 containerd[1658]: time="2026-04-20T17:29:51.452771999Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"e1e996f53cc1b3aaf0fbbbafa68de43e\" namespace:\"kube-system\" returns sandbox id \"9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765\"" Apr 20 17:29:51.471812 kubelet[2532]: E0420 17:29:51.458064 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:51.481292 containerd[1658]: time="2026-04-20T17:29:51.480636152Z" level=info msg="CreateContainer within sandbox \"9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765\" for container name:\"kube-apiserver\"" Apr 20 17:29:51.517753 containerd[1658]: time="2026-04-20T17:29:51.515188543Z" level=info msg="Container 068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:29:51.520627 containerd[1658]: time="2026-04-20T17:29:51.520212251Z" level=info msg="Container aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34: CDI devices from CRI Config.CDIDevices: []" Apr 20 
17:29:51.574650 containerd[1658]: time="2026-04-20T17:29:51.573749938Z" level=info msg="Container d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:29:51.603646 containerd[1658]: time="2026-04-20T17:29:51.603080824Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" returns container id \"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\"" Apr 20 17:29:51.604383 containerd[1658]: time="2026-04-20T17:29:51.604312451Z" level=info msg="StartContainer for \"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\"" Apr 20 17:29:51.629838 containerd[1658]: time="2026-04-20T17:29:51.624844904Z" level=info msg="connecting to shim aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3 Apr 20 17:29:51.802495 containerd[1658]: time="2026-04-20T17:29:51.732212312Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" returns container id \"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\"" Apr 20 17:29:51.818797 containerd[1658]: time="2026-04-20T17:29:51.815823374Z" level=info msg="CreateContainer within sandbox \"9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765\" for name:\"kube-apiserver\" returns container id \"d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d\"" Apr 20 17:29:51.856979 containerd[1658]: time="2026-04-20T17:29:51.856565924Z" level=info msg="StartContainer for \"d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d\"" Apr 20 17:29:51.859521 containerd[1658]: time="2026-04-20T17:29:51.857502996Z" level=info msg="StartContainer for 
\"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\"" Apr 20 17:29:51.861784 containerd[1658]: time="2026-04-20T17:29:51.861702458Z" level=info msg="connecting to shim 068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3 Apr 20 17:29:51.866998 containerd[1658]: time="2026-04-20T17:29:51.866296025Z" level=info msg="connecting to shim d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d" address="unix:///run/containerd/s/5fefef3395097c564f2b3c2ee56bbf3d896c51718d1875278e73097e16ddfbb2" protocol=ttrpc version=3 Apr 20 17:29:51.920131 kubelet[2532]: I0420 17:29:51.918311 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 17:29:51.932461 kubelet[2532]: E0420 17:29:51.920626 2532 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Apr 20 17:29:52.117745 systemd[1]: Started cri-containerd-068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220.scope - libcontainer container 068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220. Apr 20 17:29:52.167832 systemd[1]: Started cri-containerd-aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34.scope - libcontainer container aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34. Apr 20 17:29:52.218221 systemd[1]: Started cri-containerd-d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d.scope - libcontainer container d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d. 
Apr 20 17:29:52.558034 containerd[1658]: time="2026-04-20T17:29:52.556960572Z" level=info msg="StartContainer for \"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\" returns successfully" Apr 20 17:29:52.680996 containerd[1658]: time="2026-04-20T17:29:52.672104103Z" level=info msg="StartContainer for \"d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d\" returns successfully" Apr 20 17:29:52.705390 containerd[1658]: time="2026-04-20T17:29:52.704046958Z" level=info msg="StartContainer for \"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\" returns successfully" Apr 20 17:29:53.910465 kubelet[2532]: E0420 17:29:53.900960 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:53.910465 kubelet[2532]: E0420 17:29:53.901649 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:53.910465 kubelet[2532]: E0420 17:29:53.905200 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:53.910465 kubelet[2532]: E0420 17:29:53.905630 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:53.910465 kubelet[2532]: E0420 17:29:53.906586 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:53.910465 kubelet[2532]: E0420 17:29:53.906820 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:54.840939 
kubelet[2532]: E0420 17:29:54.838843 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:54.840939 kubelet[2532]: E0420 17:29:54.839106 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:54.840939 kubelet[2532]: E0420 17:29:54.839760 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:54.840939 kubelet[2532]: E0420 17:29:54.840356 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:54.858035 kubelet[2532]: E0420 17:29:54.841491 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:54.858035 kubelet[2532]: E0420 17:29:54.841594 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:55.142686 kubelet[2532]: I0420 17:29:55.139489 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 17:29:55.862914 kubelet[2532]: E0420 17:29:55.857200 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:55.862914 kubelet[2532]: E0420 17:29:55.857362 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:55.862914 kubelet[2532]: E0420 17:29:55.861972 2532 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:55.862914 kubelet[2532]: E0420 17:29:55.862486 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:57.829539 kubelet[2532]: E0420 17:29:57.822548 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:57.829539 kubelet[2532]: E0420 17:29:57.822853 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:29:58.520645 kubelet[2532]: E0420 17:29:58.519917 2532 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 17:29:59.820004 kubelet[2532]: E0420 17:29:59.816864 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:29:59.820004 kubelet[2532]: E0420 17:29:59.817136 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:00.865285 kubelet[2532]: E0420 17:30:00.862696 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:30:00.865285 kubelet[2532]: E0420 17:30:00.871199 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:03.314283 kubelet[2532]: E0420 
17:30:03.300250 2532 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 17:30:04.429213 kubelet[2532]: E0420 17:30:04.422071 2532 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Apr 20 17:30:05.155112 kubelet[2532]: E0420 17:30:05.148275 2532 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 17:30:07.802054 kubelet[2532]: E0420 17:30:07.800731 2532 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a820dd5adb4917 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 17:29:48.081842455 +0000 UTC m=+1.674168927,LastTimestamp:2026-04-20 17:29:48.081842455 +0000 UTC m=+1.674168927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 17:30:08.019385 kubelet[2532]: E0420 17:30:08.015697 2532 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a820dd5e015802 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 17:29:48.13466829 +0000 UTC m=+1.726994771,LastTimestamp:2026-04-20 17:29:48.13466829 +0000 UTC m=+1.726994771,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 17:30:08.041204 kubelet[2532]: E0420 17:30:08.040069 2532 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 20 17:30:08.391632 kubelet[2532]: I0420 17:30:08.389174 2532 apiserver.go:52] "Watching apiserver" Apr 20 17:30:08.584619 kubelet[2532]: E0420 17:30:08.577733 2532 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 17:30:09.248802 kubelet[2532]: I0420 17:30:09.233856 2532 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 17:30:09.622473 kubelet[2532]: E0420 17:30:09.576904 2532 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 20 17:30:09.923710 kubelet[2532]: E0420 17:30:09.919577 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:30:09.923710 kubelet[2532]: E0420 17:30:09.919996 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:10.411496 kubelet[2532]: E0420 17:30:10.392050 2532 csi_plugin.go:399] Failed to 
initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 20 17:30:10.947851 kubelet[2532]: E0420 17:30:10.946368 2532 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 20 17:30:10.955093 kubelet[2532]: E0420 17:30:10.955006 2532 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 17:30:10.966721 kubelet[2532]: E0420 17:30:10.964513 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:11.600297 kubelet[2532]: I0420 17:30:11.590315 2532 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 17:30:11.688941 kubelet[2532]: I0420 17:30:11.684947 2532 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 17:30:11.688941 kubelet[2532]: E0420 17:30:11.685053 2532 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 20 17:30:11.737003 kubelet[2532]: I0420 17:30:11.735709 2532 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 17:30:12.070190 kubelet[2532]: E0420 17:30:12.065183 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:12.106589 kubelet[2532]: I0420 17:30:12.075976 2532 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 17:30:12.183130 kubelet[2532]: I0420 17:30:12.181127 2532 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Apr 20 17:30:12.193711 kubelet[2532]: E0420 17:30:12.186261 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:12.298093 kubelet[2532]: E0420 17:30:12.293372 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:18.753596 kubelet[2532]: I0420 17:30:18.741014 2532 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.740926398 podStartE2EDuration="7.740926398s" podCreationTimestamp="2026-04-20 17:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 17:30:18.540675225 +0000 UTC m=+32.133001706" watchObservedRunningTime="2026-04-20 17:30:18.740926398 +0000 UTC m=+32.333252881" Apr 20 17:30:18.753596 kubelet[2532]: I0420 17:30:18.749835 2532 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.749818703 podStartE2EDuration="6.749818703s" podCreationTimestamp="2026-04-20 17:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 17:30:18.737808227 +0000 UTC m=+32.330134709" watchObservedRunningTime="2026-04-20 17:30:18.749818703 +0000 UTC m=+32.342145187" Apr 20 17:30:18.828042 kubelet[2532]: I0420 17:30:18.824996 2532 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.824012 podStartE2EDuration="6.824012s" podCreationTimestamp="2026-04-20 17:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 17:30:18.820012835 +0000 UTC m=+32.412339345" watchObservedRunningTime="2026-04-20 17:30:18.824012 +0000 UTC m=+32.416338480" Apr 20 17:30:28.374170 kubelet[2532]: E0420 17:30:28.369054 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:35.372845 systemd[1]: Reload requested from client PID 2822 ('systemctl') (unit session-6.scope)... Apr 20 17:30:35.375938 systemd[1]: Reloading... Apr 20 17:30:36.720921 zram_generator::config[2876]: No configuration found. Apr 20 17:30:36.734220 systemd-ssh-generator[2871]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 17:30:37.299354 (sd-exec-)[2853]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 17:30:37.989556 kubelet[2532]: E0420 17:30:37.954043 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:38.285018 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 17:30:38.882942 kubelet[2532]: E0420 17:30:38.882557 2532 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:39.099530 systemd[1]: Reloading finished in 3720 ms. Apr 20 17:30:39.299746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 17:30:39.345317 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 17:30:39.349978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 17:30:39.350137 systemd[1]: kubelet.service: Consumed 9.141s CPU time, 134M memory peak. 
Apr 20 17:30:39.380209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 17:30:40.614584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 17:30:40.757559 (kubelet)[2921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 17:30:41.313041 kubelet[2921]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 17:30:41.400061 kubelet[2921]: I0420 17:30:41.392613 2921 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 20 17:30:41.400061 kubelet[2921]: I0420 17:30:41.393002 2921 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 17:30:41.400061 kubelet[2921]: I0420 17:30:41.393020 2921 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 17:30:41.400061 kubelet[2921]: I0420 17:30:41.393027 2921 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 17:30:41.400061 kubelet[2921]: I0420 17:30:41.393356 2921 server.go:951] "Client rotation is on, will bootstrap in background" Apr 20 17:30:41.435278 kubelet[2921]: I0420 17:30:41.408688 2921 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 17:30:41.435278 kubelet[2921]: I0420 17:30:41.432764 2921 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 17:30:41.494183 kubelet[2921]: I0420 17:30:41.493225 2921 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 17:30:41.636097 kubelet[2921]: I0420 17:30:41.604721 2921 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 20 17:30:41.636097 kubelet[2921]: I0420 17:30:41.604987 2921 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 17:30:41.636097 kubelet[2921]: I0420 17:30:41.605016 2921 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 17:30:41.636097 kubelet[2921]: I0420 17:30:41.605195 2921 topology_manager.go:143] "Creating topology manager with none policy" Apr 20 17:30:41.711167 
kubelet[2921]: I0420 17:30:41.605273 2921 container_manager_linux.go:308] "Creating device plugin manager" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.605298 2921 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.606173 2921 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.610004 2921 kubelet.go:482] "Attempting to sync node with API server" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.610301 2921 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.610601 2921 kubelet.go:394] "Adding apiserver pod source" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.616325 2921 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.681249 2921 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.706315 2921 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 17:30:41.711167 kubelet[2921]: I0420 17:30:41.706364 2921 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 17:30:41.803217 kubelet[2921]: I0420 17:30:41.802099 2921 server.go:1257] "Started kubelet" Apr 20 17:30:41.805139 kubelet[2921]: I0420 17:30:41.802239 2921 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 17:30:41.805139 kubelet[2921]: I0420 17:30:41.802735 2921 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 17:30:41.805139 kubelet[2921]: I0420 17:30:41.803896 2921 
server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 17:30:41.805139 kubelet[2921]: I0420 17:30:41.804289 2921 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 17:30:41.810594 kubelet[2921]: I0420 17:30:41.810346 2921 server.go:317] "Adding debug handlers to kubelet server" Apr 20 17:30:41.840149 kubelet[2921]: I0420 17:30:41.835673 2921 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 20 17:30:41.876524 kubelet[2921]: I0420 17:30:41.873294 2921 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 20 17:30:41.876524 kubelet[2921]: I0420 17:30:41.873906 2921 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 17:30:41.876524 kubelet[2921]: I0420 17:30:41.874537 2921 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 17:30:41.876524 kubelet[2921]: I0420 17:30:41.874928 2921 reconciler.go:29] "Reconciler: start to sync state" Apr 20 17:30:41.923547 kubelet[2921]: I0420 17:30:41.899762 2921 factory.go:223] Registration of the systemd container factory successfully Apr 20 17:30:41.923547 kubelet[2921]: I0420 17:30:41.899956 2921 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 17:30:41.996177 kubelet[2921]: I0420 17:30:41.995334 2921 factory.go:223] Registration of the containerd container factory successfully Apr 20 17:30:41.996177 kubelet[2921]: E0420 17:30:42.003310 2921 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 17:30:42.313973 kubelet[2921]: I0420 17:30:42.312264 2921 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 20 17:30:42.435821 kubelet[2921]: I0420 17:30:42.434902 2921 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 17:30:42.442256 kubelet[2921]: I0420 17:30:42.437643 2921 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 20 17:30:42.442256 kubelet[2921]: I0420 17:30:42.437754 2921 kubelet.go:2501] "Starting kubelet main sync loop" Apr 20 17:30:42.442256 kubelet[2921]: E0420 17:30:42.437825 2921 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.518748 2921 cpu_manager.go:225] "Starting" policy="none" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.518808 2921 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.518835 2921 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.518986 2921 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.518997 2921 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.519016 2921 policy_none.go:50] "Start" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.519024 2921 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.519033 2921 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.519148 2921 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 20 17:30:42.519385 kubelet[2921]: I0420 17:30:42.519157 2921 
policy_none.go:44] "Start" Apr 20 17:30:42.557300 kubelet[2921]: E0420 17:30:42.552237 2921 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 17:30:42.616179 kubelet[2921]: E0420 17:30:42.606909 2921 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 17:30:42.616179 kubelet[2921]: I0420 17:30:42.607529 2921 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 20 17:30:42.616179 kubelet[2921]: I0420 17:30:42.607545 2921 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 17:30:42.616179 kubelet[2921]: I0420 17:30:42.612232 2921 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 20 17:30:42.635818 kubelet[2921]: I0420 17:30:42.626901 2921 apiserver.go:52] "Watching apiserver" Apr 20 17:30:42.645030 kubelet[2921]: E0420 17:30:42.644961 2921 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 20 17:30:42.779066 kubelet[2921]: I0420 17:30:42.777193 2921 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 17:30:42.836935 kubelet[2921]: I0420 17:30:42.836193 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 17:30:42.836935 kubelet[2921]: I0420 17:30:42.836284 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 17:30:42.836935 kubelet[2921]: I0420 17:30:42.836308 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 20 17:30:42.836935 kubelet[2921]: I0420 17:30:42.836326 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1e996f53cc1b3aaf0fbbbafa68de43e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1e996f53cc1b3aaf0fbbbafa68de43e\") " pod="kube-system/kube-apiserver-localhost" Apr 20 17:30:42.836935 kubelet[2921]: I0420 17:30:42.836351 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e1e996f53cc1b3aaf0fbbbafa68de43e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1e996f53cc1b3aaf0fbbbafa68de43e\") " pod="kube-system/kube-apiserver-localhost" Apr 20 17:30:42.841328 kubelet[2921]: I0420 17:30:42.836372 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 17:30:42.841328 kubelet[2921]: I0420 17:30:42.836391 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 17:30:42.841328 kubelet[2921]: I0420 17:30:42.839918 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 17:30:42.841328 kubelet[2921]: I0420 17:30:42.840227 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1e996f53cc1b3aaf0fbbbafa68de43e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1e996f53cc1b3aaf0fbbbafa68de43e\") " pod="kube-system/kube-apiserver-localhost" Apr 20 17:30:42.907553 kubelet[2921]: I0420 17:30:42.900499 2921 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 20 17:30:43.100703 
kubelet[2921]: E0420 17:30:43.091638 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:43.100703 kubelet[2921]: E0420 17:30:43.091553 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:43.100703 kubelet[2921]: E0420 17:30:43.092828 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:43.160299 kubelet[2921]: I0420 17:30:43.153811 2921 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 20 17:30:43.160299 kubelet[2921]: I0420 17:30:43.154262 2921 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 20 17:30:43.647980 kubelet[2921]: E0420 17:30:43.641798 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:43.718768 kubelet[2921]: E0420 17:30:43.683979 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:50.837169 sudo[1825]: pam_unix(sudo:session): session closed for user root Apr 20 17:30:50.852211 sshd[1824]: Connection closed by 10.0.0.1 port 35258 Apr 20 17:30:50.865748 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Apr 20 17:30:50.896074 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:35258.service: Deactivated successfully. Apr 20 17:30:51.079956 systemd[1]: session-6.scope: Deactivated successfully. Apr 20 17:30:51.084930 systemd[1]: session-6.scope: Consumed 9.435s CPU time, 216.6M memory peak. 
Apr 20 17:30:51.187172 systemd-logind[1622]: Session 6 logged out. Waiting for processes to exit. Apr 20 17:30:51.244980 systemd-logind[1622]: Removed session 6. Apr 20 17:30:51.797216 kubelet[2921]: I0420 17:30:51.786036 2921 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 17:30:51.841014 containerd[1658]: time="2026-04-20T17:30:51.817284071Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 20 17:30:51.867574 kubelet[2921]: I0420 17:30:51.842849 2921 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 17:30:54.633870 kubelet[2921]: I0420 17:30:54.632208 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6m7m\" (UniqueName: \"kubernetes.io/projected/9995cec1-f3b4-4fa6-86af-eafec965a91a-kube-api-access-q6m7m\") pod \"kube-flannel-ds-cx8np\" (UID: \"9995cec1-f3b4-4fa6-86af-eafec965a91a\") " pod="kube-flannel/kube-flannel-ds-cx8np" Apr 20 17:30:54.642141 kubelet[2921]: I0420 17:30:54.638352 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86-kube-proxy\") pod \"kube-proxy-cthnf\" (UID: \"42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86\") " pod="kube-system/kube-proxy-cthnf" Apr 20 17:30:54.642141 kubelet[2921]: I0420 17:30:54.640257 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86-lib-modules\") pod \"kube-proxy-cthnf\" (UID: \"42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86\") " pod="kube-system/kube-proxy-cthnf" Apr 20 17:30:54.642141 kubelet[2921]: I0420 17:30:54.640324 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" 
(UniqueName: \"kubernetes.io/host-path/9995cec1-f3b4-4fa6-86af-eafec965a91a-cni\") pod \"kube-flannel-ds-cx8np\" (UID: \"9995cec1-f3b4-4fa6-86af-eafec965a91a\") " pod="kube-flannel/kube-flannel-ds-cx8np" Apr 20 17:30:54.642141 kubelet[2921]: I0420 17:30:54.640350 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6p6s\" (UniqueName: \"kubernetes.io/projected/42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86-kube-api-access-b6p6s\") pod \"kube-proxy-cthnf\" (UID: \"42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86\") " pod="kube-system/kube-proxy-cthnf" Apr 20 17:30:54.642141 kubelet[2921]: I0420 17:30:54.640374 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9995cec1-f3b4-4fa6-86af-eafec965a91a-xtables-lock\") pod \"kube-flannel-ds-cx8np\" (UID: \"9995cec1-f3b4-4fa6-86af-eafec965a91a\") " pod="kube-flannel/kube-flannel-ds-cx8np" Apr 20 17:30:54.642325 kubelet[2921]: I0420 17:30:54.640390 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9995cec1-f3b4-4fa6-86af-eafec965a91a-run\") pod \"kube-flannel-ds-cx8np\" (UID: \"9995cec1-f3b4-4fa6-86af-eafec965a91a\") " pod="kube-flannel/kube-flannel-ds-cx8np" Apr 20 17:30:54.642325 kubelet[2921]: I0420 17:30:54.640876 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9995cec1-f3b4-4fa6-86af-eafec965a91a-cni-plugin\") pod \"kube-flannel-ds-cx8np\" (UID: \"9995cec1-f3b4-4fa6-86af-eafec965a91a\") " pod="kube-flannel/kube-flannel-ds-cx8np" Apr 20 17:30:54.642325 kubelet[2921]: I0420 17:30:54.641008 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: 
\"kubernetes.io/configmap/9995cec1-f3b4-4fa6-86af-eafec965a91a-flannel-cfg\") pod \"kube-flannel-ds-cx8np\" (UID: \"9995cec1-f3b4-4fa6-86af-eafec965a91a\") " pod="kube-flannel/kube-flannel-ds-cx8np" Apr 20 17:30:54.643893 kubelet[2921]: I0420 17:30:54.643550 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86-xtables-lock\") pod \"kube-proxy-cthnf\" (UID: \"42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86\") " pod="kube-system/kube-proxy-cthnf" Apr 20 17:30:54.645247 systemd[1]: Created slice kubepods-burstable-pod9995cec1_f3b4_4fa6_86af_eafec965a91a.slice - libcontainer container kubepods-burstable-pod9995cec1_f3b4_4fa6_86af_eafec965a91a.slice. Apr 20 17:30:54.777886 systemd[1]: Created slice kubepods-besteffort-pod42e860ed_cb5a_4d7b_a7e2_6e3d3ea65f86.slice - libcontainer container kubepods-besteffort-pod42e860ed_cb5a_4d7b_a7e2_6e3d3ea65f86.slice. Apr 20 17:30:55.363317 kubelet[2921]: E0420 17:30:55.362940 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:55.399222 containerd[1658]: time="2026-04-20T17:30:55.397181967Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-cx8np\" uid:\"9995cec1-f3b4-4fa6-86af-eafec965a91a\" namespace:\"kube-flannel\"" Apr 20 17:30:55.894597 kubelet[2921]: E0420 17:30:55.878250 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:30:55.974698 containerd[1658]: time="2026-04-20T17:30:55.971881656Z" level=info msg="RunPodSandbox for name:\"kube-proxy-cthnf\" uid:\"42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86\" namespace:\"kube-system\"" Apr 20 17:30:56.115095 containerd[1658]: time="2026-04-20T17:30:56.112196690Z" level=info msg="connecting 
to shim 5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad" address="unix:///run/containerd/s/67b498acd3a3cfe94f849830c421047b8a308bc2387d86fa34b494c1cbd5ecf2" namespace=k8s.io protocol=ttrpc version=3 Apr 20 17:30:57.332901 containerd[1658]: time="2026-04-20T17:30:57.331030566Z" level=info msg="connecting to shim 23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53" address="unix:///run/containerd/s/0288304e3f8900cc031f45c6966b7c5b06e8af01ac555869ac7ac1d2581fd6e3" namespace=k8s.io protocol=ttrpc version=3 Apr 20 17:30:57.672014 systemd[1]: Started cri-containerd-5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad.scope - libcontainer container 5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad. Apr 20 17:31:00.209195 containerd[1658]: time="2026-04-20T17:31:00.004840892Z" level=error msg="get state for 5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad" error="context deadline exceeded" Apr 20 17:31:00.209195 containerd[1658]: time="2026-04-20T17:31:00.038168681Z" level=warning msg="unknown status" status=0 Apr 20 17:31:01.019022 systemd[1]: Started cri-containerd-23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53.scope - libcontainer container 23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53. 
Apr 20 17:31:01.320832 containerd[1658]: time="2026-04-20T17:31:01.308031723Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 17:31:01.443527 containerd[1658]: time="2026-04-20T17:31:01.440925751Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-cx8np\" uid:\"9995cec1-f3b4-4fa6-86af-eafec965a91a\" namespace:\"kube-flannel\" returns sandbox id \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\"" Apr 20 17:31:01.512312 kubelet[2921]: E0420 17:31:01.511837 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:31:01.659234 containerd[1658]: time="2026-04-20T17:31:01.658279561Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 20 17:31:02.245736 containerd[1658]: time="2026-04-20T17:31:02.242085615Z" level=info msg="RunPodSandbox for name:\"kube-proxy-cthnf\" uid:\"42e860ed-cb5a-4d7b-a7e2-6e3d3ea65f86\" namespace:\"kube-system\" returns sandbox id \"23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53\"" Apr 20 17:31:02.253141 kubelet[2921]: E0420 17:31:02.252875 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:31:02.352301 containerd[1658]: time="2026-04-20T17:31:02.351841245Z" level=info msg="CreateContainer within sandbox \"23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53\" for container name:\"kube-proxy\"" Apr 20 17:31:02.596168 containerd[1658]: time="2026-04-20T17:31:02.592808018Z" level=info msg="Container a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:31:02.597822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939648518.mount: Deactivated successfully. 
Apr 20 17:31:02.912614 containerd[1658]: time="2026-04-20T17:31:02.912486644Z" level=info msg="CreateContainer within sandbox \"23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53\" for name:\"kube-proxy\" returns container id \"a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19\"" Apr 20 17:31:02.934905 containerd[1658]: time="2026-04-20T17:31:02.927683051Z" level=info msg="StartContainer for \"a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19\"" Apr 20 17:31:02.996787 containerd[1658]: time="2026-04-20T17:31:02.984170130Z" level=info msg="connecting to shim a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19" address="unix:///run/containerd/s/0288304e3f8900cc031f45c6966b7c5b06e8af01ac555869ac7ac1d2581fd6e3" protocol=ttrpc version=3 Apr 20 17:31:03.635375 systemd[1]: Started cri-containerd-a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19.scope - libcontainer container a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19. 
Apr 20 17:31:04.512102 containerd[1658]: time="2026-04-20T17:31:04.511770435Z" level=info msg="StartContainer for \"a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19\" returns successfully" Apr 20 17:31:04.697828 kubelet[2921]: E0420 17:31:04.693755 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:31:05.276116 kubelet[2921]: I0420 17:31:05.273007 2921 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-cthnf" podStartSLOduration=12.272846444 podStartE2EDuration="12.272846444s" podCreationTimestamp="2026-04-20 17:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 17:31:05.238788879 +0000 UTC m=+24.432731830" watchObservedRunningTime="2026-04-20 17:31:05.272846444 +0000 UTC m=+24.466789370" Apr 20 17:31:05.719455 kubelet[2921]: E0420 17:31:05.718809 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:31:07.479737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008815504.mount: Deactivated successfully. 
Apr 20 17:31:08.496959 containerd[1658]: time="2026-04-20T17:31:08.495902608Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:31:08.510126 containerd[1658]: time="2026-04-20T17:31:08.507466054Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=1208499" Apr 20 17:31:08.571693 containerd[1658]: time="2026-04-20T17:31:08.568741802Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:31:08.992460 containerd[1658]: time="2026-04-20T17:31:08.986701723Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:31:08.992460 containerd[1658]: time="2026-04-20T17:31:08.989225020Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 7.330442476s" Apr 20 17:31:08.992460 containerd[1658]: time="2026-04-20T17:31:08.989322185Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 20 17:31:09.045747 containerd[1658]: time="2026-04-20T17:31:09.043908167Z" level=info msg="CreateContainer within sandbox \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\" for container name:\"install-cni-plugin\"" Apr 20 
17:31:09.139041 containerd[1658]: time="2026-04-20T17:31:09.131234357Z" level=info msg="Container 2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:31:09.196626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927501048.mount: Deactivated successfully. Apr 20 17:31:09.200068 containerd[1658]: time="2026-04-20T17:31:09.198942679Z" level=info msg="CreateContainer within sandbox \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\" for name:\"install-cni-plugin\" returns container id \"2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984\"" Apr 20 17:31:09.211991 containerd[1658]: time="2026-04-20T17:31:09.201065299Z" level=info msg="StartContainer for \"2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984\"" Apr 20 17:31:09.223191 containerd[1658]: time="2026-04-20T17:31:09.222316851Z" level=info msg="connecting to shim 2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984" address="unix:///run/containerd/s/67b498acd3a3cfe94f849830c421047b8a308bc2387d86fa34b494c1cbd5ecf2" protocol=ttrpc version=3 Apr 20 17:31:09.521170 systemd[1]: Started cri-containerd-2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984.scope - libcontainer container 2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984. 
Apr 20 17:31:10.133088 containerd[1658]: time="2026-04-20T17:31:10.126175025Z" level=info msg="StartContainer for \"2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984\" returns successfully" Apr 20 17:31:10.401305 containerd[1658]: time="2026-04-20T17:31:10.296843002Z" level=info msg="received container exit event container_id:\"2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984\" id:\"2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984\" pid:3215 exited_at:{seconds:1776706270 nanos:252936503}" Apr 20 17:31:10.252829 systemd[1]: cri-containerd-2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984.scope: Deactivated successfully. Apr 20 17:31:10.950330 kubelet[2921]: E0420 17:31:10.940013 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:31:11.391166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984-rootfs.mount: Deactivated successfully. 
Apr 20 17:31:12.234501 kubelet[2921]: E0420 17:31:12.233035 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:31:12.572504 containerd[1658]: time="2026-04-20T17:31:12.534108105Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 20 17:31:20.244063 kubelet[2921]: E0420 17:31:20.240709 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.764s" Apr 20 17:31:37.194826 kubelet[2921]: E0420 17:31:37.130134 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.628s" Apr 20 17:31:45.602360 kubelet[2921]: E0420 17:31:45.596362 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.134s" Apr 20 17:32:02.630370 kubelet[2921]: E0420 17:32:02.593924 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:07.808883 kubelet[2921]: E0420 17:32:07.803276 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:11.815145 kubelet[2921]: E0420 17:32:11.741218 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:18.220384 kubelet[2921]: E0420 17:32:18.217986 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.091s" Apr 20 17:32:24.343269 containerd[1658]: time="2026-04-20T17:32:24.342243894Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:32:24.404858 containerd[1658]: time="2026-04-20T17:32:24.361351480Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29344049" Apr 20 17:32:24.505146 containerd[1658]: time="2026-04-20T17:32:24.428998829Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:32:24.506132 kubelet[2921]: E0420 17:32:24.505857 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:24.763700 containerd[1658]: time="2026-04-20T17:32:24.760307277Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 17:32:25.155641 containerd[1658]: time="2026-04-20T17:32:25.139788602Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 1m12.597976402s" Apr 20 17:32:25.155641 containerd[1658]: time="2026-04-20T17:32:25.140331683Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 20 17:32:25.767785 containerd[1658]: time="2026-04-20T17:32:25.759047805Z" level=info msg="CreateContainer within sandbox \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\" for container name:\"install-cni\"" Apr 20 17:32:26.803353 containerd[1658]: 
time="2026-04-20T17:32:26.800269508Z" level=info msg="Container fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:32:26.968340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066107927.mount: Deactivated successfully. Apr 20 17:32:27.017684 containerd[1658]: time="2026-04-20T17:32:27.000908713Z" level=info msg="CreateContainer within sandbox \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\" for name:\"install-cni\" returns container id \"fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97\"" Apr 20 17:32:27.017684 containerd[1658]: time="2026-04-20T17:32:27.015784471Z" level=info msg="StartContainer for \"fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97\"" Apr 20 17:32:27.090019 containerd[1658]: time="2026-04-20T17:32:27.057027138Z" level=info msg="connecting to shim fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97" address="unix:///run/containerd/s/67b498acd3a3cfe94f849830c421047b8a308bc2387d86fa34b494c1cbd5ecf2" protocol=ttrpc version=3 Apr 20 17:32:27.192889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056639021.mount: Deactivated successfully. Apr 20 17:32:28.833295 systemd[1]: Started cri-containerd-fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97.scope - libcontainer container fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97. 
Apr 20 17:32:31.144888 containerd[1658]: time="2026-04-20T17:32:31.136112052Z" level=error msg="get state for fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97" error="context deadline exceeded" Apr 20 17:32:31.144888 containerd[1658]: time="2026-04-20T17:32:31.140082033Z" level=warning msg="unknown status" status=0 Apr 20 17:32:31.909285 containerd[1658]: time="2026-04-20T17:32:31.895987115Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 17:32:33.214777 systemd[1]: cri-containerd-fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97.scope: Deactivated successfully. Apr 20 17:32:33.417887 containerd[1658]: time="2026-04-20T17:32:33.412107632Z" level=info msg="received container exit event container_id:\"fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97\" id:\"fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97\" pid:3365 exited_at:{seconds:1776706353 nanos:219984095}" Apr 20 17:32:33.501133 containerd[1658]: time="2026-04-20T17:32:33.461135337Z" level=info msg="StartContainer for \"fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97\" returns successfully" Apr 20 17:32:33.644121 kubelet[2921]: I0420 17:32:33.617262 2921 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 20 17:32:34.520998 kubelet[2921]: E0420 17:32:34.520625 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:35.063708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97-rootfs.mount: Deactivated successfully. 
Apr 20 17:32:35.661881 kubelet[2921]: E0420 17:32:35.658546 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:35.838963 kubelet[2921]: I0420 17:32:35.835096 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4219345c-a413-4e69-a28c-2d2232522f0d-config-volume\") pod \"coredns-7d764666f9-xkbjw\" (UID: \"4219345c-a413-4e69-a28c-2d2232522f0d\") " pod="kube-system/coredns-7d764666f9-xkbjw" Apr 20 17:32:35.911297 kubelet[2921]: I0420 17:32:35.910194 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcc8\" (UniqueName: \"kubernetes.io/projected/4219345c-a413-4e69-a28c-2d2232522f0d-kube-api-access-dmcc8\") pod \"coredns-7d764666f9-xkbjw\" (UID: \"4219345c-a413-4e69-a28c-2d2232522f0d\") " pod="kube-system/coredns-7d764666f9-xkbjw" Apr 20 17:32:35.911297 kubelet[2921]: I0420 17:32:35.910341 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9615ef68-6f9f-458c-a098-bae2d529c7cc-config-volume\") pod \"coredns-7d764666f9-g548d\" (UID: \"9615ef68-6f9f-458c-a098-bae2d529c7cc\") " pod="kube-system/coredns-7d764666f9-g548d" Apr 20 17:32:35.911297 kubelet[2921]: I0420 17:32:35.910380 2921 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xnhk\" (UniqueName: \"kubernetes.io/projected/9615ef68-6f9f-458c-a098-bae2d529c7cc-kube-api-access-8xnhk\") pod \"coredns-7d764666f9-g548d\" (UID: \"9615ef68-6f9f-458c-a098-bae2d529c7cc\") " pod="kube-system/coredns-7d764666f9-g548d" Apr 20 17:32:35.967988 systemd[1]: Created slice kubepods-burstable-pod4219345c_a413_4e69_a28c_2d2232522f0d.slice - libcontainer container 
kubepods-burstable-pod4219345c_a413_4e69_a28c_2d2232522f0d.slice. Apr 20 17:32:36.048539 containerd[1658]: time="2026-04-20T17:32:35.987121707Z" level=info msg="CreateContainer within sandbox \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\" for container name:\"kube-flannel\"" Apr 20 17:32:36.586159 systemd[1]: Created slice kubepods-burstable-pod9615ef68_6f9f_458c_a098_bae2d529c7cc.slice - libcontainer container kubepods-burstable-pod9615ef68_6f9f_458c_a098_bae2d529c7cc.slice. Apr 20 17:32:36.733837 containerd[1658]: time="2026-04-20T17:32:36.710930244Z" level=info msg="Container effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:32:37.386150 containerd[1658]: time="2026-04-20T17:32:37.376347945Z" level=info msg="CreateContainer within sandbox \"5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad\" for name:\"kube-flannel\" returns container id \"effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8\"" Apr 20 17:32:37.580145 containerd[1658]: time="2026-04-20T17:32:37.548694820Z" level=info msg="StartContainer for \"effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8\"" Apr 20 17:32:37.655847 containerd[1658]: time="2026-04-20T17:32:37.633247339Z" level=info msg="connecting to shim effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8" address="unix:///run/containerd/s/67b498acd3a3cfe94f849830c421047b8a308bc2387d86fa34b494c1cbd5ecf2" protocol=ttrpc version=3 Apr 20 17:32:39.424118 systemd[1]: Started cri-containerd-effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8.scope - libcontainer container effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8. 
Apr 20 17:32:40.515016 kubelet[2921]: E0420 17:32:40.433526 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:40.515016 kubelet[2921]: E0420 17:32:40.482288 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.023s" Apr 20 17:32:40.748202 containerd[1658]: time="2026-04-20T17:32:40.498662887Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-g548d\" uid:\"9615ef68-6f9f-458c-a098-bae2d529c7cc\" namespace:\"kube-system\"" Apr 20 17:32:41.735939 kubelet[2921]: E0420 17:32:41.733017 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:41.887175 kubelet[2921]: E0420 17:32:41.878814 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.368s" Apr 20 17:32:42.052390 containerd[1658]: time="2026-04-20T17:32:42.048159434Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-xkbjw\" uid:\"4219345c-a413-4e69-a28c-2d2232522f0d\" namespace:\"kube-system\"" Apr 20 17:32:42.223359 containerd[1658]: time="2026-04-20T17:32:42.214142766Z" level=error msg="get state for effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8" error="context deadline exceeded" Apr 20 17:32:42.223359 containerd[1658]: time="2026-04-20T17:32:42.218505376Z" level=warning msg="unknown status" status=0 Apr 20 17:32:43.232848 containerd[1658]: time="2026-04-20T17:32:43.230805743Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 17:32:44.925383 systemd[1]: run-netns-cni\x2d49073279\x2d50dc\x2dd223\x2d409c\x2dc4813489e8f0.mount: Deactivated successfully. 
Apr 20 17:32:45.295749 containerd[1658]: time="2026-04-20T17:32:45.033221954Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-g548d\" uid:\"9615ef68-6f9f-458c-a098-bae2d529c7cc\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"297358c15eda35b41ac6d85492b5d4e7c5f80dfb8a0b60d34a677e38ae48d382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:32:46.270525 kubelet[2921]: E0420 17:32:46.236730 2921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"297358c15eda35b41ac6d85492b5d4e7c5f80dfb8a0b60d34a677e38ae48d382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:32:46.624024 kubelet[2921]: E0420 17:32:46.455365 2921 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"297358c15eda35b41ac6d85492b5d4e7c5f80dfb8a0b60d34a677e38ae48d382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-g548d" Apr 20 17:32:46.725744 kubelet[2921]: E0420 17:32:46.722037 2921 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"297358c15eda35b41ac6d85492b5d4e7c5f80dfb8a0b60d34a677e38ae48d382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-g548d" Apr 20 17:32:46.931113 kubelet[2921]: E0420 17:32:46.818320 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7d764666f9-g548d_kube-system(9615ef68-6f9f-458c-a098-bae2d529c7cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-g548d_kube-system(9615ef68-6f9f-458c-a098-bae2d529c7cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"297358c15eda35b41ac6d85492b5d4e7c5f80dfb8a0b60d34a677e38ae48d382\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-g548d" podUID="9615ef68-6f9f-458c-a098-bae2d529c7cc" Apr 20 17:32:47.457716 kubelet[2921]: E0420 17:32:47.454004 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.766s" Apr 20 17:32:49.141814 systemd[1]: run-netns-cni\x2db90a4589\x2d0553\x2df5ff\x2dbe71\x2ddecdab2b6345.mount: Deactivated successfully. Apr 20 17:32:49.287854 kubelet[2921]: E0420 17:32:49.159991 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.367s" Apr 20 17:32:49.287854 kubelet[2921]: E0420 17:32:49.240040 2921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335d3a94efa50689b47ef19e34e2ebf591e177fab0376b7dbfc703c524d0beb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:32:49.287854 kubelet[2921]: E0420 17:32:49.240270 2921 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335d3a94efa50689b47ef19e34e2ebf591e177fab0376b7dbfc703c524d0beb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-xkbjw" Apr 20 17:32:49.406711 containerd[1658]: time="2026-04-20T17:32:49.162323439Z" 
level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-xkbjw\" uid:\"4219345c-a413-4e69-a28c-2d2232522f0d\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335d3a94efa50689b47ef19e34e2ebf591e177fab0376b7dbfc703c524d0beb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:32:49.420506 kubelet[2921]: E0420 17:32:49.313545 2921 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335d3a94efa50689b47ef19e34e2ebf591e177fab0376b7dbfc703c524d0beb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-xkbjw" Apr 20 17:32:49.420506 kubelet[2921]: E0420 17:32:49.390489 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-xkbjw_kube-system(4219345c-a413-4e69-a28c-2d2232522f0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-xkbjw_kube-system(4219345c-a413-4e69-a28c-2d2232522f0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8335d3a94efa50689b47ef19e34e2ebf591e177fab0376b7dbfc703c524d0beb\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-xkbjw" podUID="4219345c-a413-4e69-a28c-2d2232522f0d" Apr 20 17:32:49.452052 containerd[1658]: time="2026-04-20T17:32:49.450691395Z" level=info msg="StartContainer for \"effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8\" returns successfully" Apr 20 17:32:49.893583 kubelet[2921]: E0420 17:32:49.892179 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:49.946281 systemd[1]: cri-containerd-068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220.scope: Deactivated successfully. Apr 20 17:32:49.985745 systemd[1]: cri-containerd-068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220.scope: Consumed 25.250s CPU time, 22.6M memory peak. Apr 20 17:32:50.016643 containerd[1658]: time="2026-04-20T17:32:50.009977141Z" level=info msg="received container exit event container_id:\"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\" id:\"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\" pid:2753 exit_status:1 exited_at:{seconds:1776706369 nanos:990239877}" Apr 20 17:32:50.502819 kubelet[2921]: I0420 17:32:50.502341 2921 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cx8np" podStartSLOduration=23.403525404 podStartE2EDuration="1m57.502322802s" podCreationTimestamp="2026-04-20 17:30:53 +0000 UTC" firstStartedPulling="2026-04-20 17:31:01.611870723 +0000 UTC m=+20.805813626" lastFinishedPulling="2026-04-20 17:32:35.710668131 +0000 UTC m=+114.904611024" observedRunningTime="2026-04-20 17:32:50.368165875 +0000 UTC m=+129.562108787" watchObservedRunningTime="2026-04-20 17:32:50.502322802 +0000 UTC m=+129.696265708" Apr 20 17:32:50.534965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220-rootfs.mount: Deactivated successfully. 
Apr 20 17:32:51.167748 kubelet[2921]: I0420 17:32:51.164840 2921 scope.go:122] "RemoveContainer" containerID="068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220" Apr 20 17:32:51.167748 kubelet[2921]: E0420 17:32:51.165545 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:51.323320 kubelet[2921]: E0420 17:32:51.187294 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:32:51.598829 containerd[1658]: time="2026-04-20T17:32:51.505174964Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\" attempt:1" Apr 20 17:32:52.031557 containerd[1658]: time="2026-04-20T17:32:52.030972667Z" level=info msg="Container 47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:32:52.094389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932274480.mount: Deactivated successfully. 
Apr 20 17:32:52.524892 containerd[1658]: time="2026-04-20T17:32:52.427821269Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" attempt:1 returns container id \"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\"" Apr 20 17:32:52.524892 containerd[1658]: time="2026-04-20T17:32:52.473650690Z" level=info msg="StartContainer for \"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\"" Apr 20 17:32:52.595851 containerd[1658]: time="2026-04-20T17:32:52.580725795Z" level=info msg="connecting to shim 47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3 Apr 20 17:32:54.048048 kubelet[2921]: E0420 17:32:54.042172 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.563s" Apr 20 17:32:54.980154 systemd[1]: Started cri-containerd-47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a.scope - libcontainer container 47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a. 
Apr 20 17:32:57.683070 containerd[1658]: time="2026-04-20T17:32:57.675510346Z" level=error msg="get state for 47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a" error="context deadline exceeded" Apr 20 17:32:57.683070 containerd[1658]: time="2026-04-20T17:32:57.675812701Z" level=warning msg="unknown status" status=0 Apr 20 17:32:58.818301 containerd[1658]: time="2026-04-20T17:32:58.794195468Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 17:33:00.119672 kubelet[2921]: E0420 17:33:00.115315 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.612s" Apr 20 17:33:00.210375 kubelet[2921]: E0420 17:33:00.205739 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:00.217966 containerd[1658]: time="2026-04-20T17:33:00.217326604Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-xkbjw\" uid:\"4219345c-a413-4e69-a28c-2d2232522f0d\" namespace:\"kube-system\"" Apr 20 17:33:00.265012 kubelet[2921]: E0420 17:33:00.264596 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:00.267700 containerd[1658]: time="2026-04-20T17:33:00.265853536Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-g548d\" uid:\"9615ef68-6f9f-458c-a098-bae2d529c7cc\" namespace:\"kube-system\"" Apr 20 17:33:00.289307 containerd[1658]: time="2026-04-20T17:33:00.287962534Z" level=info msg="StartContainer for \"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\" returns successfully" Apr 20 17:33:00.300500 systemd[1758]: Created slice background.slice - User Background Tasks Slice. 
Apr 20 17:33:00.320620 systemd[1758]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 20 17:33:00.520706 systemd[1758]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Apr 20 17:33:01.398105 kubelet[2921]: E0420 17:33:01.387054 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:01.444837 systemd[1]: run-netns-cni\x2d490b9aa9\x2d447e\x2d858f\x2d6f89\x2dc106c85b9791.mount: Deactivated successfully. Apr 20 17:33:01.453168 containerd[1658]: time="2026-04-20T17:33:01.452509299Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-xkbjw\" uid:\"4219345c-a413-4e69-a28c-2d2232522f0d\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a427d100001a65b1418e91c1eaa9b269a625436abf31cd042d1d38ca2d4931b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:33:01.473305 kubelet[2921]: E0420 17:33:01.471748 2921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a427d100001a65b1418e91c1eaa9b269a625436abf31cd042d1d38ca2d4931b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:33:01.473305 kubelet[2921]: E0420 17:33:01.472025 2921 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a427d100001a65b1418e91c1eaa9b269a625436abf31cd042d1d38ca2d4931b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-xkbjw" Apr 20 17:33:01.473305 
kubelet[2921]: E0420 17:33:01.472125 2921 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a427d100001a65b1418e91c1eaa9b269a625436abf31cd042d1d38ca2d4931b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-xkbjw" Apr 20 17:33:01.473305 kubelet[2921]: E0420 17:33:01.472271 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-xkbjw_kube-system(4219345c-a413-4e69-a28c-2d2232522f0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-xkbjw_kube-system(4219345c-a413-4e69-a28c-2d2232522f0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a427d100001a65b1418e91c1eaa9b269a625436abf31cd042d1d38ca2d4931b2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-xkbjw" podUID="4219345c-a413-4e69-a28c-2d2232522f0d" Apr 20 17:33:01.621993 containerd[1658]: time="2026-04-20T17:33:01.587913342Z" level=error msg="RunPodSandbox for name:\"coredns-7d764666f9-g548d\" uid:\"9615ef68-6f9f-458c-a098-bae2d529c7cc\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d991c4cbeba813414746d2d2f98bae12703c2edbb4c29694d039623c8c7e7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:33:01.596378 systemd[1]: run-netns-cni\x2daf813b44\x2dfd92\x2d414a\x2d5d6d\x2d5fdbc29a9b02.mount: Deactivated successfully. 
Apr 20 17:33:01.715749 kubelet[2921]: E0420 17:33:01.647651 2921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d991c4cbeba813414746d2d2f98bae12703c2edbb4c29694d039623c8c7e7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 17:33:01.715749 kubelet[2921]: E0420 17:33:01.648071 2921 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d991c4cbeba813414746d2d2f98bae12703c2edbb4c29694d039623c8c7e7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-g548d" Apr 20 17:33:01.715749 kubelet[2921]: E0420 17:33:01.648174 2921 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d991c4cbeba813414746d2d2f98bae12703c2edbb4c29694d039623c8c7e7b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-g548d" Apr 20 17:33:01.715749 kubelet[2921]: E0420 17:33:01.698077 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-g548d_kube-system(9615ef68-6f9f-458c-a098-bae2d529c7cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-g548d_kube-system(9615ef68-6f9f-458c-a098-bae2d529c7cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d991c4cbeba813414746d2d2f98bae12703c2edbb4c29694d039623c8c7e7b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-g548d" 
podUID="9615ef68-6f9f-458c-a098-bae2d529c7cc" Apr 20 17:33:02.300678 systemd-networkd[1438]: flannel.1: Link UP Apr 20 17:33:02.300689 systemd-networkd[1438]: flannel.1: Gained carrier Apr 20 17:33:02.686384 kubelet[2921]: E0420 17:33:02.686018 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:03.962996 kubelet[2921]: E0420 17:33:03.959739 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:04.015999 systemd-networkd[1438]: flannel.1: Gained IPv6LL Apr 20 17:33:10.938295 kubelet[2921]: E0420 17:33:10.935777 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:12.589674 kubelet[2921]: E0420 17:33:12.577075 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:13.003949 containerd[1658]: time="2026-04-20T17:33:12.794099505Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-xkbjw\" uid:\"4219345c-a413-4e69-a28c-2d2232522f0d\" namespace:\"kube-system\"" Apr 20 17:33:15.070808 systemd-networkd[1438]: cni0: Link UP Apr 20 17:33:15.101247 systemd-networkd[1438]: cni0: Gained carrier Apr 20 17:33:15.987316 systemd-networkd[1438]: vethb197c4c5: Link UP Apr 20 17:33:16.041030 kubelet[2921]: E0420 17:33:16.033225 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.577s" Apr 20 17:33:16.240155 kernel: cni0: port 1(vethb197c4c5) entered blocking state Apr 20 17:33:16.266575 kernel: cni0: port 1(vethb197c4c5) entered disabled state Apr 20 17:33:16.286054 
kernel: vethb197c4c5: entered allmulticast mode Apr 20 17:33:16.331827 kernel: vethb197c4c5: entered promiscuous mode Apr 20 17:33:16.502634 systemd-networkd[1438]: cni0: Lost carrier Apr 20 17:33:16.873915 systemd-networkd[1438]: cni0: Gained IPv6LL Apr 20 17:33:17.502312 kubelet[2921]: E0420 17:33:16.891103 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:17.694129 containerd[1658]: time="2026-04-20T17:33:17.295216640Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-g548d\" uid:\"9615ef68-6f9f-458c-a098-bae2d529c7cc\" namespace:\"kube-system\"" Apr 20 17:33:19.370015 kubelet[2921]: E0420 17:33:19.368235 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.906s" Apr 20 17:33:19.504816 kernel: cni0: port 1(vethb197c4c5) entered blocking state Apr 20 17:33:19.527116 kernel: cni0: port 1(vethb197c4c5) entered forwarding state Apr 20 17:33:19.926741 systemd-networkd[1438]: vethb197c4c5: Gained carrier Apr 20 17:33:20.232319 systemd-networkd[1438]: cni0: Gained carrier Apr 20 17:33:20.421312 systemd-networkd[1438]: veth0db79d8f: Link UP Apr 20 17:33:20.564294 kubelet[2921]: E0420 17:33:20.561315 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.182s" Apr 20 17:33:20.581763 kernel: cni0: port 2(veth0db79d8f) entered blocking state Apr 20 17:33:20.590552 kernel: cni0: port 2(veth0db79d8f) entered disabled state Apr 20 17:33:20.742524 kernel: veth0db79d8f: entered allmulticast mode Apr 20 17:33:20.777054 kernel: veth0db79d8f: entered promiscuous mode Apr 20 17:33:20.957101 containerd[1658]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface 
{}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000184950), "name":"cbr0", "type":"bridge"} Apr 20 17:33:20.957101 containerd[1658]: delegateAdd: netconf sent to delegate plugin: Apr 20 17:33:21.084805 containerd[1658]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-20T17:33:21.033834749Z" level=info msg="received container exit event container_id:\"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\" id:\"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\" pid:2759 exit_status:1 exited_at:{seconds:1776706401 nanos:13014919}" Apr 20 17:33:21.020215 systemd[1]: cri-containerd-aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34.scope: Deactivated successfully. Apr 20 17:33:21.193329 kubelet[2921]: E0420 17:33:21.098180 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 17:33:21.033581 systemd[1]: cri-containerd-aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34.scope: Consumed 57.449s CPU time, 56.4M memory peak. 
Apr 20 17:33:21.315066 systemd-networkd[1438]: vethb197c4c5: Gained IPv6LL Apr 20 17:33:21.792114 kubelet[2921]: E0420 17:33:21.719991 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:33:22.067248 kernel: cni0: port 2(veth0db79d8f) entered blocking state Apr 20 17:33:22.128259 kernel: cni0: port 2(veth0db79d8f) entered forwarding state Apr 20 17:33:22.205147 systemd-networkd[1438]: veth0db79d8f: Gained carrier Apr 20 17:33:22.621036 containerd[1658]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Apr 20 17:33:22.621036 containerd[1658]: delegateAdd: netconf sent to delegate plugin: Apr 20 17:33:23.904744 systemd-networkd[1438]: veth0db79d8f: Gained IPv6LL Apr 20 17:33:24.491565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34-rootfs.mount: Deactivated successfully. 
Apr 20 17:33:25.291990 kubelet[2921]: E0420 17:33:25.286112 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:27.232668 containerd[1658]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-20T17:33:27.222247480Z" level=info msg="connecting to shim 82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095" address="unix:///run/containerd/s/6e0a0f15bab7eb145f434d8c0a53dd08ce6cd9cc0abddb189252348086663230" namespace=k8s.io protocol=ttrpc version=3
Apr 20 17:33:27.399164 containerd[1658]: time="2026-04-20T17:33:27.231307071Z" level=info msg="connecting to shim 792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43" address="unix:///run/containerd/s/09d8976aeca5acca42159149037e60f2e6793992ebcd79813a348c010407786f" namespace=k8s.io protocol=ttrpc version=3
Apr 20 17:33:27.409606 kubelet[2921]: I0420 17:33:27.236346 2921 scope.go:122] "RemoveContainer" containerID="aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34"
Apr 20 17:33:27.409606 kubelet[2921]: E0420 17:33:27.371735 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:28.205249 containerd[1658]: time="2026-04-20T17:33:28.201110692Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\" attempt:1"
Apr 20 17:33:30.315395 containerd[1658]: time="2026-04-20T17:33:30.311839694Z" level=info msg="Container 0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:33:30.603198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793327888.mount: Deactivated successfully.
Apr 20 17:33:30.783268 update_engine[1630]: I20260420 17:33:30.691123 1630 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 20 17:33:30.783268 update_engine[1630]: I20260420 17:33:30.694350 1630 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 20 17:33:30.783268 update_engine[1630]: I20260420 17:33:30.730309 1630 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.817107 1630 omaha_request_params.cc:62] Current group set to alpha
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.837170 1630 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.863889 1630 update_attempter.cc:643] Scheduling an action processor start.
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.865059 1630 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.867990 1630 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.868538 1630 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.868551 1630 omaha_request_action.cc:272] Request:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]:
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.868559 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.931782 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 17:33:31.191151 update_engine[1630]: I20260420 17:33:30.989670 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 17:33:31.191151 update_engine[1630]: E20260420 17:33:31.130144 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 17:33:31.344211 update_engine[1630]: I20260420 17:33:31.203969 1630 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 20 17:33:31.434094 locksmithd[1718]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 20 17:33:31.629597 kubelet[2921]: E0420 17:33:31.308888 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 17:33:31.713219 containerd[1658]: time="2026-04-20T17:33:31.712637463Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" attempt:1 returns container id \"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\""
Apr 20 17:33:31.781200 kubelet[2921]: E0420 17:33:31.758301 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.249s"
Apr 20 17:33:31.833321 containerd[1658]: time="2026-04-20T17:33:31.756476272Z" level=info msg="StartContainer for \"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\""
Apr 20 17:33:31.975318 containerd[1658]: time="2026-04-20T17:33:31.959179975Z" level=info msg="connecting to shim 0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3
Apr 20 17:33:33.027124 systemd[1]: Started cri-containerd-82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095.scope - libcontainer container 82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095.
Apr 20 17:33:33.619810 systemd[1]: Started cri-containerd-792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43.scope - libcontainer container 792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43.
Apr 20 17:33:34.200656 systemd[1]: Started cri-containerd-0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e.scope - libcontainer container 0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e.
Apr 20 17:33:34.471177 kubelet[2921]: E0420 17:33:34.467683 2921 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 17:33:34.828552 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 20 17:33:35.075636 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 20 17:33:35.482274 containerd[1658]: time="2026-04-20T17:33:35.444322574Z" level=error msg="get state for 82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095" error="context deadline exceeded"
Apr 20 17:33:35.969260 containerd[1658]: time="2026-04-20T17:33:35.478521876Z" level=warning msg="unknown status" status=0
Apr 20 17:33:35.973262 kubelet[2921]: E0420 17:33:35.615717 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:37.359731 containerd[1658]: time="2026-04-20T17:33:37.354364212Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 17:33:38.542293 kubelet[2921]: E0420 17:33:38.529393 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.088s"
Apr 20 17:33:39.025466 containerd[1658]: time="2026-04-20T17:33:39.011936270Z" level=info msg="StartContainer for \"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\" returns successfully"
Apr 20 17:33:39.515199 containerd[1658]: time="2026-04-20T17:33:39.513897641Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-g548d\" uid:\"9615ef68-6f9f-458c-a098-bae2d529c7cc\" namespace:\"kube-system\" returns sandbox id \"82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095\""
Apr 20 17:33:39.609720 containerd[1658]: time="2026-04-20T17:33:39.605090814Z" level=info msg="RunPodSandbox for name:\"coredns-7d764666f9-xkbjw\" uid:\"4219345c-a413-4e69-a28c-2d2232522f0d\" namespace:\"kube-system\" returns sandbox id \"792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43\""
Apr 20 17:33:39.818553 kubelet[2921]: E0420 17:33:39.807809 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:40.079704 kubelet[2921]: E0420 17:33:40.070386 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:40.669645 containerd[1658]: time="2026-04-20T17:33:40.665833012Z" level=info msg="CreateContainer within sandbox \"792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43\" for container name:\"coredns\""
Apr 20 17:33:41.032309 containerd[1658]: time="2026-04-20T17:33:41.014363666Z" level=info msg="CreateContainer within sandbox \"82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095\" for container name:\"coredns\""
Apr 20 17:33:41.579881 kubelet[2921]: E0420 17:33:41.578266 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.132s"
Apr 20 17:33:41.682902 update_engine[1630]: I20260420 17:33:41.680613 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 17:33:41.773687 update_engine[1630]: I20260420 17:33:41.690253 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 17:33:41.773687 update_engine[1630]: I20260420 17:33:41.713690 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 17:33:41.773687 update_engine[1630]: E20260420 17:33:41.738938 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 17:33:41.776241 update_engine[1630]: I20260420 17:33:41.773963 1630 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 20 17:33:42.003394 containerd[1658]: time="2026-04-20T17:33:42.002873170Z" level=info msg="Container 5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:33:42.769922 kubelet[2921]: E0420 17:33:42.760170 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:42.938994 containerd[1658]: time="2026-04-20T17:33:42.931934669Z" level=info msg="CreateContainer within sandbox \"792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43\" for name:\"coredns\" returns container id \"5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040\""
Apr 20 17:33:43.263809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250538929.mount: Deactivated successfully.
Apr 20 17:33:43.630641 containerd[1658]: time="2026-04-20T17:33:43.611305564Z" level=info msg="StartContainer for \"5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040\""
Apr 20 17:33:43.914363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801716330.mount: Deactivated successfully.
Apr 20 17:33:43.998347 containerd[1658]: time="2026-04-20T17:33:43.995940147Z" level=info msg="Container c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:33:44.594845 kubelet[2921]: E0420 17:33:44.538271 2921 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Apr 20 17:33:44.987878 containerd[1658]: time="2026-04-20T17:33:44.620084451Z" level=info msg="connecting to shim 5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040" address="unix:///run/containerd/s/09d8976aeca5acca42159149037e60f2e6793992ebcd79813a348c010407786f" protocol=ttrpc version=3
Apr 20 17:33:45.439219 containerd[1658]: time="2026-04-20T17:33:45.403957111Z" level=info msg="CreateContainer within sandbox \"82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095\" for name:\"coredns\" returns container id \"c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67\""
Apr 20 17:33:45.456162 kubelet[2921]: E0420 17:33:45.450938 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.974s"
Apr 20 17:33:46.102096 containerd[1658]: time="2026-04-20T17:33:46.100393909Z" level=info msg="StartContainer for \"c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67\""
Apr 20 17:33:46.806804 containerd[1658]: time="2026-04-20T17:33:46.801980624Z" level=info msg="connecting to shim c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67" address="unix:///run/containerd/s/6e0a0f15bab7eb145f434d8c0a53dd08ce6cd9cc0abddb189252348086663230" protocol=ttrpc version=3
Apr 20 17:33:47.313013 kubelet[2921]: E0420 17:33:47.308027 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.857s"
Apr 20 17:33:48.180671 kubelet[2921]: E0420 17:33:48.178214 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:48.214994 systemd[1]: Started cri-containerd-5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040.scope - libcontainer container 5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040.
Apr 20 17:33:48.599134 kubelet[2921]: E0420 17:33:48.585047 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.247s"
Apr 20 17:33:48.838114 kubelet[2921]: E0420 17:33:48.836157 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:49.240385 systemd[1]: Started cri-containerd-c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67.scope - libcontainer container c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67.
Apr 20 17:33:51.007984 kubelet[2921]: E0420 17:33:51.004213 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:51.436122 containerd[1658]: time="2026-04-20T17:33:51.410375799Z" level=info msg="StartContainer for \"5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040\" returns successfully"
Apr 20 17:33:51.676678 update_engine[1630]: I20260420 17:33:51.675357 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 17:33:51.855333 update_engine[1630]: I20260420 17:33:51.677886 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 17:33:51.855333 update_engine[1630]: I20260420 17:33:51.684007 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 17:33:51.855333 update_engine[1630]: E20260420 17:33:51.697018 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 17:33:51.855333 update_engine[1630]: I20260420 17:33:51.697278 1630 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 20 17:33:52.936918 containerd[1658]: time="2026-04-20T17:33:52.933019284Z" level=error msg="get state for c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67" error="context deadline exceeded"
Apr 20 17:33:52.936918 containerd[1658]: time="2026-04-20T17:33:52.934359846Z" level=warning msg="unknown status" status=0
Apr 20 17:33:53.686766 containerd[1658]: time="2026-04-20T17:33:53.683591824Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 17:33:54.010128 kubelet[2921]: E0420 17:33:53.953897 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:54.911084 kubelet[2921]: E0420 17:33:54.847770 2921 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Apr 20 17:33:55.535050 kubelet[2921]: E0420 17:33:55.392178 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:33:55.986892 kubelet[2921]: E0420 17:33:55.558006 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.113s"
Apr 20 17:33:57.210025 containerd[1658]: time="2026-04-20T17:33:57.192352609Z" level=info msg="StartContainer for \"c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67\" returns successfully"
Apr 20 17:33:57.828203 kubelet[2921]: E0420 17:33:57.823125 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.311s"
Apr 20 17:33:59.511048 kubelet[2921]: E0420 17:33:59.500581 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s"
Apr 20 17:34:00.059719 kubelet[2921]: E0420 17:34:00.042278 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:01.108234 kubelet[2921]: E0420 17:34:01.106088 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:01.689177 update_engine[1630]: I20260420 17:34:01.681454 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.692035 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.740278 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 17:34:01.887005 update_engine[1630]: E20260420 17:34:01.747339 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.754343 1630 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.755280 1630 omaha_request_action.cc:617] Omaha request response:
Apr 20 17:34:01.887005 update_engine[1630]: E20260420 17:34:01.762969 1630 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.763325 1630 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.763340 1630 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.764227 1630 update_attempter.cc:306] Processing Done.
Apr 20 17:34:01.887005 update_engine[1630]: E20260420 17:34:01.767336 1630 update_attempter.cc:619] Update failed.
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.768213 1630 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.768225 1630 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.768234 1630 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.772089 1630 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.772966 1630 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 17:34:01.887005 update_engine[1630]: I20260420 17:34:01.775657 1630 omaha_request_action.cc:272] Request:
Apr 20 17:34:01.887005 update_engine[1630]:
Apr 20 17:34:01.887005 update_engine[1630]:
Apr 20 17:34:02.537070 locksmithd[1718]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 20 17:34:02.537070 locksmithd[1718]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 20 17:34:02.629580 kubelet[2921]: E0420 17:34:01.777295 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:02.711268 update_engine[1630]:
Apr 20 17:34:02.711268 update_engine[1630]:
Apr 20 17:34:02.711268 update_engine[1630]:
Apr 20 17:34:02.711268 update_engine[1630]:
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.775782 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.776081 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.828316 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 17:34:02.711268 update_engine[1630]: E20260420 17:34:01.876677 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.880316 1630 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.883213 1630 omaha_request_action.cc:617] Omaha request response:
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.883473 1630 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.883487 1630 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.883492 1630 update_attempter.cc:306] Processing Done.
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.883539 1630 update_attempter.cc:310] Error event sent.
Apr 20 17:34:02.711268 update_engine[1630]: I20260420 17:34:01.883555 1630 update_check_scheduler.cc:74] Next update check in 41m34s
Apr 20 17:34:05.603559 kubelet[2921]: E0420 17:34:05.598998 2921 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="800ms"
Apr 20 17:34:06.768350 kubelet[2921]: E0420 17:34:06.741464 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:13.065727 kubelet[2921]: E0420 17:34:13.060234 2921 status_manager.go:1068] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676bd3b7-5862-4415-b7de-d0b5fb43eeff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-20T17:33:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-20T17:33:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"200m\\\"},\\\"containerID\\\":\\\"containerd://aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\\\",\\\"image\\\":\\\"registry.k8s.io/kube-controller-manager:v1.35.4\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"200m\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-20T17:33:21Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-20T17:29:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}]}}\" for pod \"kube-system\"/\"kube-controller-manager-localhost\": Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-localhost"
Apr 20 17:34:14.704082 kubelet[2921]: E0420 17:34:14.643734 2921 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a820ea29ae5306 kube-system 653 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:f7c88b30fc803a3ec6b6c138191bdaca,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 17:30:43 +0000 UTC,LastTimestamp:2026-04-20 17:33:02.685911698 +0000 UTC m=+141.879854599,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 17:34:16.781702 kubelet[2921]: E0420 17:34:16.778060 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:21.245048 kubelet[2921]: E0420 17:34:21.242963 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:21.254263 kubelet[2921]: I0420 17:34:21.253763 2921 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-g548d" podStartSLOduration=207.253732555 podStartE2EDuration="3m27.253732555s" podCreationTimestamp="2026-04-20 17:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 17:34:16.505990978 +0000 UTC m=+215.699933889" watchObservedRunningTime="2026-04-20 17:34:21.253732555 +0000 UTC m=+220.447675480"
Apr 20 17:34:25.619821 kubelet[2921]: E0420 17:34:25.615872 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:26.680471 kubelet[2921]: I0420 17:34:26.677583 2921 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xkbjw" podStartSLOduration=212.676827183 podStartE2EDuration="3m32.676827183s" podCreationTimestamp="2026-04-20 17:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 17:34:26.668945333 +0000 UTC m=+225.862888246" watchObservedRunningTime="2026-04-20 17:34:26.676827183 +0000 UTC m=+225.870770090"
Apr 20 17:34:27.557134 kubelet[2921]: E0420 17:34:27.552731 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:34:29.531308 kubelet[2921]: E0420 17:34:29.528389 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.073s"
Apr 20 17:34:39.877244 kubelet[2921]: E0420 17:34:39.840183 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.393s"
Apr 20 17:34:51.416206 containerd[1658]: time="2026-04-20T17:34:51.414107761Z" level=info msg="container event discarded" container=3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11 type=CONTAINER_CREATED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.432015586Z" level=info msg="container event discarded" container=3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11 type=CONTAINER_STARTED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.460260195Z" level=info msg="container event discarded" container=1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2 type=CONTAINER_CREATED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.461244119Z" level=info msg="container event discarded" container=1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2 type=CONTAINER_STARTED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.461331942Z" level=info msg="container event discarded" container=9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765 type=CONTAINER_CREATED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.461343700Z" level=info msg="container event discarded" container=9cb2f85127226d57992f176b25c3136fe0199a0600cc5a482d61555e2c43a765 type=CONTAINER_STARTED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.634701973Z" level=info msg="container event discarded" container=aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34 type=CONTAINER_CREATED_EVENT
Apr 20 17:34:51.735755 containerd[1658]: time="2026-04-20T17:34:51.675065919Z" level=info msg="container event discarded" container=068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220 type=CONTAINER_CREATED_EVENT
Apr 20 17:34:52.012546 containerd[1658]: time="2026-04-20T17:34:51.834031983Z" level=info msg="container event discarded" container=d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d type=CONTAINER_CREATED_EVENT
Apr 20 17:34:52.579265 containerd[1658]: time="2026-04-20T17:34:52.574349063Z" level=info msg="container event discarded" container=068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220 type=CONTAINER_STARTED_EVENT
Apr 20 17:34:52.682339 containerd[1658]: time="2026-04-20T17:34:52.680247742Z" level=info msg="container event discarded" container=d8fa23c424f8ed1d4920c20cd463d23b098559bd4cda8fd72d3895b2af18c40d type=CONTAINER_STARTED_EVENT
Apr 20 17:34:52.717300 containerd[1658]: time="2026-04-20T17:34:52.711273927Z" level=info msg="container event discarded" container=aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34 type=CONTAINER_STARTED_EVENT
Apr 20 17:35:13.540341 kubelet[2921]: E0420 17:35:13.529783 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:35:29.446668 kubelet[2921]: E0420 17:35:29.441998 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:35:29.525197 kubelet[2921]: E0420 17:35:29.455846 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:35:30.455853 kubelet[2921]: E0420 17:35:30.455309 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:35:34.557050 kubelet[2921]: E0420 17:35:34.533862 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:35:34.620065 kubelet[2921]: E0420 17:35:34.596797 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:35:45.560334 kubelet[2921]: E0420 17:35:45.553366 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:36:01.512246 containerd[1658]: time="2026-04-20T17:36:01.470263919Z" level=info msg="container event discarded" container=5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad type=CONTAINER_CREATED_EVENT
Apr 20 17:36:01.512246 containerd[1658]: time="2026-04-20T17:36:01.476102146Z" level=info msg="container event discarded" container=5dd550754eb9adec788c8d705fa27e06d702e86d16e7715588540353d41aaaad type=CONTAINER_STARTED_EVENT
Apr 20 17:36:02.254516 containerd[1658]: time="2026-04-20T17:36:02.252685401Z" level=info msg="container event discarded" container=23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53 type=CONTAINER_CREATED_EVENT
Apr 20 17:36:02.254516 containerd[1658]: time="2026-04-20T17:36:02.253956362Z" level=info msg="container event discarded" container=23ec7bee55aeab16de7062676bad18dbaea2b695532e07c29e988434cb69da53 type=CONTAINER_STARTED_EVENT
Apr 20 17:36:02.727906 containerd[1658]: time="2026-04-20T17:36:02.721655072Z" level=info msg="container event discarded" container=a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19 type=CONTAINER_CREATED_EVENT
Apr 20 17:36:04.517196 containerd[1658]: time="2026-04-20T17:36:04.514140267Z" level=info msg="container event discarded" container=a608c319917f2a12a25512463046c1e178c66e4aa450a8c48c54d87a33e8fd19 type=CONTAINER_STARTED_EVENT
Apr 20 17:36:09.324516 containerd[1658]: time="2026-04-20T17:36:09.238368237Z" level=info msg="container event discarded" container=2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984 type=CONTAINER_CREATED_EVENT
Apr 20 17:36:10.157021 containerd[1658]: time="2026-04-20T17:36:10.132746329Z" level=info msg="container event discarded" container=2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984 type=CONTAINER_STARTED_EVENT
Apr 20 17:36:11.651315 containerd[1658]: time="2026-04-20T17:36:11.609196035Z" level=info msg="container event discarded" container=2a7dd379d8a2accbe3696c0af9834a10134385b092ccea241ecb1c21def19984 type=CONTAINER_STOPPED_EVENT
Apr 20 17:36:24.532890 kubelet[2921]: E0420 17:36:24.531799 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:36:40.542787 kubelet[2921]: E0420 17:36:40.445030 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:36:42.616648 kubelet[2921]: E0420 17:36:42.610839 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:36:48.773900 kubelet[2921]: E0420
17:36:48.773332 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:36:57.461379 kubelet[2921]: E0420 17:36:57.450205 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:36:58.914948 kubelet[2921]: E0420 17:36:58.914095 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:37:08.498853 kubelet[2921]: E0420 17:37:08.494867 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:37:24.012212 systemd[1]: Started sshd@5-8194-10.0.0.107:22-10.0.0.1:43360.service - OpenSSH per-connection server daemon (10.0.0.1:43360). Apr 20 17:37:27.010257 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 43360 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:37:27.144013 containerd[1658]: time="2026-04-20T17:37:27.002907895Z" level=info msg="container event discarded" container=fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97 type=CONTAINER_CREATED_EVENT Apr 20 17:37:27.126972 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:37:27.794117 systemd-logind[1622]: New session '7' of user 'core' with class 'user' and type 'tty'. Apr 20 17:37:27.881227 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 20 17:37:33.119950 containerd[1658]: time="2026-04-20T17:37:33.114315930Z" level=info msg="container event discarded" container=fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97 type=CONTAINER_STARTED_EVENT Apr 20 17:37:33.543295 sshd[4725]: Connection closed by 10.0.0.1 port 43360 Apr 20 17:37:33.630057 sshd-session[4705]: pam_unix(sshd:session): session closed for user core Apr 20 17:37:33.709078 systemd[1]: sshd@5-8194-10.0.0.107:22-10.0.0.1:43360.service: Deactivated successfully. Apr 20 17:37:33.935969 systemd[1]: session-7.scope: Deactivated successfully. Apr 20 17:37:33.989451 systemd[1]: session-7.scope: Consumed 1.212s CPU time, 15.6M memory peak. Apr 20 17:37:34.146488 systemd-logind[1622]: Session 7 logged out. Waiting for processes to exit. Apr 20 17:37:34.205952 systemd-logind[1622]: Removed session 7. Apr 20 17:37:35.207230 containerd[1658]: time="2026-04-20T17:37:35.205589706Z" level=info msg="container event discarded" container=fe0f81046cfe21692409b4616d9a0a93f30a830e44e5377ded86c42db9e7ca97 type=CONTAINER_STOPPED_EVENT Apr 20 17:37:37.243875 containerd[1658]: time="2026-04-20T17:37:37.228113890Z" level=info msg="container event discarded" container=effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8 type=CONTAINER_CREATED_EVENT Apr 20 17:37:37.469029 kubelet[2921]: E0420 17:37:37.467924 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:37:39.173299 systemd[1]: Started sshd@6-8195-10.0.0.107:22-10.0.0.1:44814.service - OpenSSH per-connection server daemon (10.0.0.1:44814). 
Apr 20 17:37:39.524808 kubelet[2921]: E0420 17:37:39.511951 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.074s" Apr 20 17:37:41.799764 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 44814 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:37:41.947878 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:37:42.496604 systemd-logind[1622]: New session '8' of user 'core' with class 'user' and type 'tty'. Apr 20 17:37:42.523658 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 20 17:37:43.620733 kubelet[2921]: E0420 17:37:43.618762 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:37:46.727183 kubelet[2921]: E0420 17:37:46.726038 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:37:48.187032 sshd[4794]: Connection closed by 10.0.0.1 port 44814 Apr 20 17:37:48.185141 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Apr 20 17:37:48.285482 systemd[1]: sshd@6-8195-10.0.0.107:22-10.0.0.1:44814.service: Deactivated successfully. Apr 20 17:37:48.375371 systemd[1]: session-8.scope: Deactivated successfully. Apr 20 17:37:48.388692 systemd[1]: session-8.scope: Consumed 1.390s CPU time, 15.3M memory peak. Apr 20 17:37:48.425566 systemd-logind[1622]: Session 8 logged out. Waiting for processes to exit. Apr 20 17:37:48.508135 systemd-logind[1622]: Removed session 8. 
Apr 20 17:37:49.185738 containerd[1658]: time="2026-04-20T17:37:49.180847375Z" level=info msg="container event discarded" container=effeb36ba411be61e87fd492ba5ec98b65c1a1b79aff6d83e045b097a9c722e8 type=CONTAINER_STARTED_EVENT Apr 20 17:37:50.706894 containerd[1658]: time="2026-04-20T17:37:50.694471134Z" level=info msg="container event discarded" container=068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220 type=CONTAINER_STOPPED_EVENT Apr 20 17:37:52.334307 containerd[1658]: time="2026-04-20T17:37:52.330713023Z" level=info msg="container event discarded" container=47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a type=CONTAINER_CREATED_EVENT Apr 20 17:37:54.352851 systemd[1]: Started sshd@7-4099-10.0.0.107:22-10.0.0.1:59764.service - OpenSSH per-connection server daemon (10.0.0.1:59764). Apr 20 17:37:55.988899 kubelet[2921]: E0420 17:37:55.948008 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.306s" Apr 20 17:37:58.527863 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 59764 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:37:58.654032 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:37:59.475004 systemd-logind[1622]: New session '9' of user 'core' with class 'user' and type 'tty'. Apr 20 17:37:59.683687 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 20 17:38:00.282903 containerd[1658]: time="2026-04-20T17:38:00.282494379Z" level=info msg="container event discarded" container=47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a type=CONTAINER_STARTED_EVENT Apr 20 17:38:02.586005 sshd[4865]: Connection closed by 10.0.0.1 port 59764 Apr 20 17:38:02.611620 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Apr 20 17:38:03.086948 systemd[1]: sshd@7-4099-10.0.0.107:22-10.0.0.1:59764.service: Deactivated successfully. 
Apr 20 17:38:03.102695 systemd[1]: session-9.scope: Deactivated successfully. Apr 20 17:38:03.243328 systemd-logind[1622]: Session 9 logged out. Waiting for processes to exit. Apr 20 17:38:04.156502 systemd-logind[1622]: Removed session 9. Apr 20 17:38:08.382613 systemd[1]: Started sshd@8-8196-10.0.0.107:22-10.0.0.1:51436.service - OpenSSH per-connection server daemon (10.0.0.1:51436). Apr 20 17:38:11.310569 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 51436 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:38:11.313004 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:38:11.620374 kubelet[2921]: E0420 17:38:11.617180 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:11.700840 systemd-logind[1622]: New session '10' of user 'core' with class 'user' and type 'tty'. Apr 20 17:38:11.774518 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 20 17:38:13.008267 kubelet[2921]: E0420 17:38:12.998549 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:13.076999 kubelet[2921]: E0420 17:38:13.076287 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:14.492196 systemd[1]: cri-containerd-0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e.scope: Deactivated successfully. Apr 20 17:38:14.492924 systemd[1]: cri-containerd-0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e.scope: Consumed 57.122s CPU time, 45.6M memory peak, 8K read from disk. 
Apr 20 17:38:14.761715 containerd[1658]: time="2026-04-20T17:38:14.709941817Z" level=info msg="received container exit event container_id:\"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\" id:\"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\" pid:3930 exit_status:1 exited_at:{seconds:1776706694 nanos:692889302}" Apr 20 17:38:16.143665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e-rootfs.mount: Deactivated successfully. Apr 20 17:38:16.392678 containerd[1658]: time="2026-04-20T17:38:16.392345043Z" level=info msg="received container exit event container_id:\"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\" id:\"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\" pid:3534 exit_status:1 exited_at:{seconds:1776706696 nanos:383369163}" Apr 20 17:38:16.400867 systemd[1]: cri-containerd-47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a.scope: Deactivated successfully. Apr 20 17:38:16.462532 systemd[1]: cri-containerd-47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a.scope: Consumed 40.018s CPU time, 21.5M memory peak, 96K read from disk. Apr 20 17:38:17.307468 sshd[4907]: Connection closed by 10.0.0.1 port 51436 Apr 20 17:38:17.324537 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Apr 20 17:38:17.534099 systemd[1]: sshd@8-8196-10.0.0.107:22-10.0.0.1:51436.service: Deactivated successfully. Apr 20 17:38:17.849780 systemd[1]: session-10.scope: Deactivated successfully. Apr 20 17:38:18.128972 systemd[1]: session-10.scope: Consumed 1.159s CPU time, 15.1M memory peak. Apr 20 17:38:18.342084 systemd-logind[1622]: Session 10 logged out. Waiting for processes to exit. Apr 20 17:38:18.403095 systemd-logind[1622]: Removed session 10. 
Apr 20 17:38:19.420233 kubelet[2921]: E0420 17:38:19.409669 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:19.486975 kubelet[2921]: I0420 17:38:19.474902 2921 scope.go:122] "RemoveContainer" containerID="aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34" Apr 20 17:38:19.532013 kubelet[2921]: I0420 17:38:19.526217 2921 scope.go:122] "RemoveContainer" containerID="0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e" Apr 20 17:38:19.667863 kubelet[2921]: E0420 17:38:19.655086 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:19.775752 kubelet[2921]: E0420 17:38:19.707802 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 17:38:20.194233 containerd[1658]: time="2026-04-20T17:38:20.116487546Z" level=info msg="RemoveContainer for \"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\"" Apr 20 17:38:20.443331 containerd[1658]: time="2026-04-20T17:38:20.442689168Z" level=info msg="RemoveContainer for \"aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34\" returns successfully" Apr 20 17:38:20.952655 kubelet[2921]: I0420 17:38:20.951228 2921 scope.go:122] "RemoveContainer" containerID="0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e" Apr 20 17:38:20.952655 kubelet[2921]: E0420 17:38:20.951310 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:20.952655 kubelet[2921]: E0420 17:38:20.951486 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 20 17:38:21.438011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a-rootfs.mount: Deactivated successfully. Apr 20 17:38:22.405141 systemd[1]: Started sshd@9-8197-10.0.0.107:22-10.0.0.1:54852.service - OpenSSH per-connection server daemon (10.0.0.1:54852). Apr 20 17:38:23.513520 kubelet[2921]: I0420 17:38:23.506220 2921 scope.go:122] "RemoveContainer" containerID="068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220" Apr 20 17:38:23.515964 kubelet[2921]: I0420 17:38:23.513918 2921 scope.go:122] "RemoveContainer" containerID="47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a" Apr 20 17:38:23.515964 kubelet[2921]: E0420 17:38:23.514127 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:23.515964 kubelet[2921]: E0420 17:38:23.514251 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 17:38:23.648675 containerd[1658]: time="2026-04-20T17:38:23.644676504Z" level=info msg="RemoveContainer 
for \"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\"" Apr 20 17:38:23.940620 containerd[1658]: time="2026-04-20T17:38:23.932231333Z" level=info msg="RemoveContainer for \"068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220\" returns successfully" Apr 20 17:38:24.980897 containerd[1658]: time="2026-04-20T17:38:24.928774225Z" level=info msg="container event discarded" container=aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34 type=CONTAINER_STOPPED_EVENT Apr 20 17:38:25.148152 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 54852 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:38:25.260664 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:38:25.595871 systemd-logind[1622]: New session '11' of user 'core' with class 'user' and type 'tty'. Apr 20 17:38:26.186683 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 20 17:38:26.540149 kubelet[2921]: I0420 17:38:26.459305 2921 scope.go:122] "RemoveContainer" containerID="0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e" Apr 20 17:38:26.740575 kubelet[2921]: E0420 17:38:26.485905 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:27.318058 containerd[1658]: time="2026-04-20T17:38:27.298647367Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\" attempt:2" Apr 20 17:38:28.334069 kubelet[2921]: I0420 17:38:28.328392 2921 scope.go:122] "RemoveContainer" containerID="47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a" Apr 20 17:38:28.386076 kubelet[2921]: E0420 17:38:28.341782 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:28.849549 containerd[1658]: time="2026-04-20T17:38:28.835175320Z" level=info msg="Container 82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:38:29.664292 kubelet[2921]: E0420 17:38:29.651711 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.17s" Apr 20 17:38:29.714591 containerd[1658]: time="2026-04-20T17:38:29.656942679Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\" attempt:2" Apr 20 17:38:29.714591 containerd[1658]: time="2026-04-20T17:38:29.658365699Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" attempt:2 returns container id \"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\"" Apr 20 17:38:30.054705 containerd[1658]: time="2026-04-20T17:38:30.028557688Z" level=info msg="StartContainer for \"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\"" Apr 20 17:38:30.256845 containerd[1658]: time="2026-04-20T17:38:30.231172536Z" level=info msg="Container 053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1: CDI devices from CRI Config.CDIDevices: []" Apr 20 17:38:30.268878 containerd[1658]: time="2026-04-20T17:38:30.268550685Z" level=info msg="connecting to shim 82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3 Apr 20 17:38:30.498048 containerd[1658]: time="2026-04-20T17:38:30.497263228Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" attempt:2 returns container id 
\"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\"" Apr 20 17:38:30.619313 containerd[1658]: time="2026-04-20T17:38:30.612705839Z" level=info msg="StartContainer for \"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\"" Apr 20 17:38:30.788693 containerd[1658]: time="2026-04-20T17:38:30.771907188Z" level=info msg="connecting to shim 053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3 Apr 20 17:38:31.659696 systemd[1]: Started cri-containerd-82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9.scope - libcontainer container 82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9. Apr 20 17:38:31.717512 containerd[1658]: time="2026-04-20T17:38:31.717108214Z" level=info msg="container event discarded" container=0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e type=CONTAINER_CREATED_EVENT Apr 20 17:38:32.328196 systemd[1]: Started cri-containerd-053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1.scope - libcontainer container 053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1. Apr 20 17:38:33.500312 sshd[4996]: Connection closed by 10.0.0.1 port 54852 Apr 20 17:38:33.500174 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Apr 20 17:38:33.598574 systemd[1]: sshd@9-8197-10.0.0.107:22-10.0.0.1:54852.service: Deactivated successfully. Apr 20 17:38:33.705956 systemd[1]: session-11.scope: Deactivated successfully. Apr 20 17:38:33.756671 systemd[1]: session-11.scope: Consumed 1.201s CPU time, 15M memory peak. 
Apr 20 17:38:34.132311 containerd[1658]: time="2026-04-20T17:38:34.114201657Z" level=error msg="get state for 82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9" error="context deadline exceeded" Apr 20 17:38:34.132311 containerd[1658]: time="2026-04-20T17:38:34.114334638Z" level=warning msg="unknown status" status=0 Apr 20 17:38:34.198256 systemd-logind[1622]: Session 11 logged out. Waiting for processes to exit. Apr 20 17:38:34.367256 systemd-logind[1622]: Removed session 11. Apr 20 17:38:35.888072 containerd[1658]: time="2026-04-20T17:38:35.867149665Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 17:38:36.930523 containerd[1658]: time="2026-04-20T17:38:36.926589312Z" level=info msg="StartContainer for \"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\" returns successfully" Apr 20 17:38:37.434012 kubelet[2921]: E0420 17:38:37.431344 2921 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice/cri-containerd-053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1.scope\": RecentStats: unable to find data in memory cache]" Apr 20 17:38:38.952539 containerd[1658]: time="2026-04-20T17:38:38.933903162Z" level=info msg="container event discarded" container=0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e type=CONTAINER_STARTED_EVENT Apr 20 17:38:39.037755 containerd[1658]: time="2026-04-20T17:38:39.031083588Z" level=info msg="StartContainer for \"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\" returns successfully" Apr 20 17:38:39.233192 systemd[1]: Started sshd@10-3-10.0.0.107:22-10.0.0.1:45162.service - OpenSSH per-connection server daemon (10.0.0.1:45162). 
Apr 20 17:38:39.627264 containerd[1658]: time="2026-04-20T17:38:39.529682046Z" level=info msg="container event discarded" container=82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095 type=CONTAINER_CREATED_EVENT Apr 20 17:38:39.627264 containerd[1658]: time="2026-04-20T17:38:39.530149875Z" level=info msg="container event discarded" container=82cee5f1dc408e5c95f48f603a99a897183f8067c51698e9491e17d9b8976095 type=CONTAINER_STARTED_EVENT Apr 20 17:38:39.627264 containerd[1658]: time="2026-04-20T17:38:39.618860150Z" level=info msg="container event discarded" container=792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43 type=CONTAINER_CREATED_EVENT Apr 20 17:38:39.662623 containerd[1658]: time="2026-04-20T17:38:39.635348348Z" level=info msg="container event discarded" container=792dfe7216e89c89a2413a97ea054eb9b9cd8b44103f42cf72139a0ce316ee43 type=CONTAINER_STARTED_EVENT Apr 20 17:38:40.160707 kubelet[2921]: E0420 17:38:40.160053 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:41.425351 kubelet[2921]: E0420 17:38:41.414750 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:41.582757 kubelet[2921]: E0420 17:38:41.566640 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:41.934856 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 45162 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:38:42.355858 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:38:42.443637 kubelet[2921]: E0420 17:38:42.442106 2921 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:42.447767 systemd-logind[1622]: New session '12' of user 'core' with class 'user' and type 'tty'. Apr 20 17:38:42.595175 containerd[1658]: time="2026-04-20T17:38:42.577171542Z" level=info msg="container event discarded" container=5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040 type=CONTAINER_CREATED_EVENT Apr 20 17:38:42.879519 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 20 17:38:44.860840 containerd[1658]: time="2026-04-20T17:38:44.840330117Z" level=info msg="container event discarded" container=c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67 type=CONTAINER_CREATED_EVENT Apr 20 17:38:48.758368 kubelet[2921]: E0420 17:38:48.755775 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:51.400992 containerd[1658]: time="2026-04-20T17:38:51.398607066Z" level=info msg="container event discarded" container=5a8984a1d480d190f4084520be9fd0b65c1d1e1f423e19548a30612eb958d040 type=CONTAINER_STARTED_EVENT Apr 20 17:38:51.512810 kubelet[2921]: E0420 17:38:51.506729 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.063s" Apr 20 17:38:51.796818 kubelet[2921]: E0420 17:38:51.788347 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:38:52.088049 sshd[5131]: Connection closed by 10.0.0.1 port 45162 Apr 20 17:38:52.143217 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Apr 20 17:38:52.308526 systemd[1]: sshd@10-3-10.0.0.107:22-10.0.0.1:45162.service: Deactivated successfully. 
Apr 20 17:38:52.449805 systemd[1]: session-12.scope: Deactivated successfully. Apr 20 17:38:52.450572 systemd[1]: session-12.scope: Consumed 1.141s CPU time, 15.5M memory peak. Apr 20 17:38:52.469836 systemd-logind[1622]: Session 12 logged out. Waiting for processes to exit. Apr 20 17:38:52.508998 systemd-logind[1622]: Removed session 12. Apr 20 17:38:56.893146 containerd[1658]: time="2026-04-20T17:38:56.887296206Z" level=info msg="container event discarded" container=c4453e2ff7e1b18ffc66389348f6efdf9bb123823e4971e953792520b0640d67 type=CONTAINER_STARTED_EVENT Apr 20 17:38:57.553561 systemd[1]: Started sshd@11-8198-10.0.0.107:22-10.0.0.1:44266.service - OpenSSH per-connection server daemon (10.0.0.1:44266). Apr 20 17:38:59.808809 kubelet[2921]: E0420 17:38:59.807056 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:39:01.339210 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 44266 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:39:01.388695 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:39:01.558723 kubelet[2921]: E0420 17:39:01.526184 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.009s" Apr 20 17:39:01.789125 systemd-logind[1622]: New session '13' of user 'core' with class 'user' and type 'tty'. Apr 20 17:39:01.980155 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 20 17:39:02.153628 kubelet[2921]: E0420 17:39:02.142031 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:39:08.624056 sshd[5195]: Connection closed by 10.0.0.1 port 44266 Apr 20 17:39:08.697087 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Apr 20 17:39:08.918732 systemd[1]: sshd@11-8198-10.0.0.107:22-10.0.0.1:44266.service: Deactivated successfully. Apr 20 17:39:09.053980 systemd[1]: session-13.scope: Deactivated successfully. Apr 20 17:39:09.065898 systemd[1]: session-13.scope: Consumed 1.629s CPU time, 15.1M memory peak. Apr 20 17:39:09.202156 systemd-logind[1622]: Session 13 logged out. Waiting for processes to exit. Apr 20 17:39:09.220972 systemd-logind[1622]: Removed session 13. Apr 20 17:39:09.459739 kubelet[2921]: E0420 17:39:09.457373 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:39:13.498018 kubelet[2921]: E0420 17:39:13.495349 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:39:14.172704 systemd[1]: Started sshd@12-8199-10.0.0.107:22-10.0.0.1:55664.service - OpenSSH per-connection server daemon (10.0.0.1:55664). 
Apr 20 17:39:15.495848 kubelet[2921]: E0420 17:39:15.494578 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.033s"
Apr 20 17:39:17.411199 kubelet[2921]: E0420 17:39:17.409803 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:17.718562 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 55664 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:39:17.999694 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:39:18.879838 systemd-logind[1622]: New session '14' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:39:19.032320 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 20 17:39:20.729287 kubelet[2921]: E0420 17:39:20.710216 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:29.653679 sshd[5265]: Connection closed by 10.0.0.1 port 55664
Apr 20 17:39:29.685382 sshd-session[5247]: pam_unix(sshd:session): session closed for user core
Apr 20 17:39:29.937083 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:55664.service: Deactivated successfully.
Apr 20 17:39:30.003895 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:55664.service: Consumed 1.313s CPU time, 4.3M memory peak.
Apr 20 17:39:30.198080 systemd[1]: session-14.scope: Deactivated successfully.
Apr 20 17:39:30.218269 systemd[1]: session-14.scope: Consumed 3.717s CPU time, 14M memory peak.
Apr 20 17:39:30.380992 systemd-logind[1622]: Session 14 logged out. Waiting for processes to exit.
Apr 20 17:39:30.638114 systemd-logind[1622]: Removed session 14.
Apr 20 17:39:30.794194 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 20 17:39:32.718783 systemd-tmpfiles[5301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 17:39:32.719161 systemd-tmpfiles[5301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 17:39:32.809185 systemd-tmpfiles[5301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 17:39:33.023843 systemd-tmpfiles[5301]: ACLs are not supported, ignoring.
Apr 20 17:39:33.026006 systemd-tmpfiles[5301]: ACLs are not supported, ignoring.
Apr 20 17:39:33.107703 systemd-tmpfiles[5301]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 17:39:33.107776 systemd-tmpfiles[5301]: Skipping /boot
Apr 20 17:39:33.416026 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 20 17:39:33.420283 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 20 17:39:35.442827 kubelet[2921]: E0420 17:39:35.442735 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:35.443695 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:40554.service - OpenSSH per-connection server daemon (10.0.0.1:40554).
Apr 20 17:39:39.584597 kubelet[2921]: E0420 17:39:39.583936 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s"
Apr 20 17:39:41.539151 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 40554 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:39:41.882520 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:39:42.789360 systemd-logind[1622]: New session '15' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:39:43.304210 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 20 17:39:44.851963 kubelet[2921]: E0420 17:39:44.834132 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.255s"
Apr 20 17:39:46.042316 systemd[1]: cri-containerd-82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9.scope: Deactivated successfully.
Apr 20 17:39:46.088392 systemd[1]: cri-containerd-82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9.scope: Consumed 14.578s CPU time, 16.9M memory peak.
Apr 20 17:39:46.300123 containerd[1658]: time="2026-04-20T17:39:46.299465847Z" level=info msg="received container exit event container_id:\"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\" id:\"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\" pid:5051 exit_status:1 exited_at:{seconds:1776706786 nanos:186372842}"
Apr 20 17:39:46.778099 kubelet[2921]: E0420 17:39:46.773945 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.914s"
Apr 20 17:39:47.682101 kubelet[2921]: E0420 17:39:47.681331 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 17:39:49.801190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9-rootfs.mount: Deactivated successfully.
Apr 20 17:39:50.576333 kubelet[2921]: E0420 17:39:50.439780 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.996s"
Apr 20 17:39:52.103895 kubelet[2921]: E0420 17:39:52.102159 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.519s"
Apr 20 17:39:52.225058 kubelet[2921]: I0420 17:39:52.170108 2921 scope.go:122] "RemoveContainer" containerID="0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e"
Apr 20 17:39:52.632153 kubelet[2921]: I0420 17:39:52.629725 2921 scope.go:122] "RemoveContainer" containerID="82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9"
Apr 20 17:39:52.638063 kubelet[2921]: E0420 17:39:52.637494 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:52.716290 kubelet[2921]: E0420 17:39:52.707187 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:39:53.099966 containerd[1658]: time="2026-04-20T17:39:53.096096127Z" level=info msg="RemoveContainer for \"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\""
Apr 20 17:39:53.557336 containerd[1658]: time="2026-04-20T17:39:53.555502450Z" level=info msg="RemoveContainer for \"0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e\" returns successfully"
Apr 20 17:39:54.184471 systemd[1]: cri-containerd-053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1.scope: Deactivated successfully.
Apr 20 17:39:54.211009 systemd[1]: cri-containerd-053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1.scope: Consumed 11.571s CPU time, 17.2M memory peak.
Apr 20 17:39:54.434466 containerd[1658]: time="2026-04-20T17:39:54.426196782Z" level=info msg="received container exit event container_id:\"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\" id:\"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\" pid:5058 exit_status:1 exited_at:{seconds:1776706794 nanos:411864338}"
Apr 20 17:39:54.979307 kubelet[2921]: E0420 17:39:54.966905 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:55.289380 kubelet[2921]: E0420 17:39:55.288550 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:55.601171 kubelet[2921]: E0420 17:39:55.588172 2921 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 17:39:55.957848 sshd[5342]: Connection closed by 10.0.0.1 port 40554
Apr 20 17:39:55.959187 sshd-session[5319]: pam_unix(sshd:session): session closed for user core
Apr 20 17:39:56.156732 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:40554.service: Deactivated successfully.
Apr 20 17:39:56.159825 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:40554.service: Consumed 1.933s CPU time, 4.1M memory peak.
Apr 20 17:39:56.363684 systemd[1]: session-15.scope: Deactivated successfully.
Apr 20 17:39:56.394110 systemd[1]: session-15.scope: Consumed 4.106s CPU time, 14.6M memory peak.
Apr 20 17:39:56.578061 systemd-logind[1622]: Session 15 logged out. Waiting for processes to exit.
Apr 20 17:39:56.942826 systemd-logind[1622]: Removed session 15.
Apr 20 17:39:58.701104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1-rootfs.mount: Deactivated successfully.
Apr 20 17:39:59.698260 kubelet[2921]: I0420 17:39:59.692544 2921 scope.go:122] "RemoveContainer" containerID="82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9"
Apr 20 17:39:59.740681 kubelet[2921]: E0420 17:39:59.714318 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:39:59.801260 kubelet[2921]: E0420 17:39:59.800595 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:40:00.158370 kubelet[2921]: I0420 17:40:00.156933 2921 scope.go:122] "RemoveContainer" containerID="47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a"
Apr 20 17:40:00.191355 kubelet[2921]: I0420 17:40:00.190827 2921 scope.go:122] "RemoveContainer" containerID="053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1"
Apr 20 17:40:00.231104 kubelet[2921]: E0420 17:40:00.230716 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:00.235365 kubelet[2921]: E0420 17:40:00.233886 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:40:00.524032 containerd[1658]: time="2026-04-20T17:40:00.523162672Z" level=info msg="RemoveContainer for \"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\""
Apr 20 17:40:00.616063 containerd[1658]: time="2026-04-20T17:40:00.615048740Z" level=info msg="RemoveContainer for \"47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a\" returns successfully"
Apr 20 17:40:01.565386 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:41790.service - OpenSSH per-connection server daemon (10.0.0.1:41790).
Apr 20 17:40:03.691300 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 41790 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:40:03.795209 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:40:04.369933 systemd-logind[1622]: New session '16' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:40:04.595038 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 20 17:40:07.587393 kubelet[2921]: I0420 17:40:07.586177 2921 scope.go:122] "RemoveContainer" containerID="053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1"
Apr 20 17:40:07.587393 kubelet[2921]: E0420 17:40:07.586367 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:07.587393 kubelet[2921]: E0420 17:40:07.588471 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:40:09.542963 kubelet[2921]: I0420 17:40:09.530297 2921 scope.go:122] "RemoveContainer" containerID="82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9"
Apr 20 17:40:09.617118 kubelet[2921]: E0420 17:40:09.543631 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:09.990213 containerd[1658]: time="2026-04-20T17:40:09.989062347Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\" attempt:3"
Apr 20 17:40:11.015260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964506347.mount: Deactivated successfully.
Apr 20 17:40:11.137069 containerd[1658]: time="2026-04-20T17:40:11.132653349Z" level=info msg="Container 401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:40:11.759141 containerd[1658]: time="2026-04-20T17:40:11.757354557Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" attempt:3 returns container id \"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\""
Apr 20 17:40:11.806506 containerd[1658]: time="2026-04-20T17:40:11.805832348Z" level=info msg="StartContainer for \"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\""
Apr 20 17:40:11.863795 containerd[1658]: time="2026-04-20T17:40:11.861889891Z" level=info msg="connecting to shim 401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3
Apr 20 17:40:11.881802 sshd[5445]: Connection closed by 10.0.0.1 port 41790
Apr 20 17:40:11.902022 sshd-session[5427]: pam_unix(sshd:session): session closed for user core
Apr 20 17:40:12.722388 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:41790.service: Deactivated successfully.
Apr 20 17:40:12.810106 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:41790.service: Consumed 1.022s CPU time, 4.4M memory peak.
Apr 20 17:40:12.898984 systemd[1]: session-16.scope: Deactivated successfully.
Apr 20 17:40:12.943792 systemd[1]: session-16.scope: Consumed 3.454s CPU time, 15.4M memory peak.
Apr 20 17:40:12.980570 systemd-logind[1622]: Session 16 logged out. Waiting for processes to exit.
Apr 20 17:40:13.146352 systemd-logind[1622]: Removed session 16.
Apr 20 17:40:14.730194 systemd[1]: Started cri-containerd-401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258.scope - libcontainer container 401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258.
Apr 20 17:40:16.509045 kubelet[2921]: I0420 17:40:16.500332 2921 scope.go:122] "RemoveContainer" containerID="053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1"
Apr 20 17:40:16.510163 kubelet[2921]: E0420 17:40:16.510149 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:16.700197 containerd[1658]: time="2026-04-20T17:40:16.696627205Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\" attempt:3"
Apr 20 17:40:16.891753 containerd[1658]: time="2026-04-20T17:40:16.891148497Z" level=info msg="Container 1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:40:17.214132 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:40096.service - OpenSSH per-connection server daemon (10.0.0.1:40096).
Apr 20 17:40:17.273675 containerd[1658]: time="2026-04-20T17:40:17.271328713Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" attempt:3 returns container id \"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\""
Apr 20 17:40:17.478748 containerd[1658]: time="2026-04-20T17:40:17.456350955Z" level=info msg="StartContainer for \"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\""
Apr 20 17:40:17.688100 containerd[1658]: time="2026-04-20T17:40:17.685992366Z" level=info msg="connecting to shim 1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3
Apr 20 17:40:17.884139 containerd[1658]: time="2026-04-20T17:40:17.874308418Z" level=info msg="StartContainer for \"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\" returns successfully"
Apr 20 17:40:18.842911 kubelet[2921]: E0420 17:40:18.837358 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:19.126921 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 40096 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:40:19.175655 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:40:19.191277 systemd[1]: Started cri-containerd-1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe.scope - libcontainer container 1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe.
Apr 20 17:40:19.317002 systemd-logind[1622]: New session '17' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:40:19.373082 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 20 17:40:19.569299 kubelet[2921]: E0420 17:40:19.568119 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:20.634810 kubelet[2921]: E0420 17:40:20.634032 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:20.939831 containerd[1658]: time="2026-04-20T17:40:20.939166154Z" level=info msg="StartContainer for \"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\" returns successfully"
Apr 20 17:40:21.289995 kubelet[2921]: E0420 17:40:21.244882 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:21.870861 sshd[5563]: Connection closed by 10.0.0.1 port 40096
Apr 20 17:40:21.876627 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Apr 20 17:40:21.916260 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:40096.service: Deactivated successfully.
Apr 20 17:40:21.944084 systemd[1]: session-17.scope: Deactivated successfully.
Apr 20 17:40:21.948104 systemd[1]: session-17.scope: Consumed 1.214s CPU time, 14.6M memory peak.
Apr 20 17:40:21.961386 systemd-logind[1622]: Session 17 logged out. Waiting for processes to exit.
Apr 20 17:40:21.966102 systemd-logind[1622]: Removed session 17.
Apr 20 17:40:22.303941 kubelet[2921]: E0420 17:40:22.303552 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:22.527830 kubelet[2921]: E0420 17:40:22.522966 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:24.535349 kubelet[2921]: E0420 17:40:24.533910 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:27.099343 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:59980.service - OpenSSH per-connection server daemon (10.0.0.1:59980).
Apr 20 17:40:28.093359 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 59980 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:40:28.113223 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:40:28.319042 systemd-logind[1622]: New session '18' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:40:28.363060 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 20 17:40:30.933768 kubelet[2921]: E0420 17:40:30.928730 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:31.422234 kubelet[2921]: E0420 17:40:31.415744 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:32.518085 kubelet[2921]: E0420 17:40:32.509360 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:33.177049 sshd[5641]: Connection closed by 10.0.0.1 port 59980
Apr 20 17:40:33.219955 sshd-session[5617]: pam_unix(sshd:session): session closed for user core
Apr 20 17:40:33.441802 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:59980.service: Deactivated successfully.
Apr 20 17:40:33.567338 systemd[1]: session-18.scope: Deactivated successfully.
Apr 20 17:40:33.569381 systemd[1]: session-18.scope: Consumed 3.424s CPU time, 14.1M memory peak.
Apr 20 17:40:33.619048 systemd-logind[1622]: Session 18 logged out. Waiting for processes to exit.
Apr 20 17:40:33.678036 systemd-logind[1622]: Removed session 18.
Apr 20 17:40:33.779612 kubelet[2921]: E0420 17:40:33.778613 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:39.020972 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:51826.service - OpenSSH per-connection server daemon (10.0.0.1:51826).
Apr 20 17:40:41.838179 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 51826 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:40:41.903293 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:40:42.458855 systemd-logind[1622]: New session '19' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:40:42.515333 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 20 17:40:42.704842 kubelet[2921]: E0420 17:40:42.704018 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:40:47.644975 sshd[5705]: Connection closed by 10.0.0.1 port 51826
Apr 20 17:40:47.671997 sshd-session[5680]: pam_unix(sshd:session): session closed for user core
Apr 20 17:40:47.870973 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:51826.service: Deactivated successfully.
Apr 20 17:40:47.907139 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:51826.service: Consumed 1.076s CPU time, 4.2M memory peak.
Apr 20 17:40:48.014152 systemd[1]: session-19.scope: Deactivated successfully.
Apr 20 17:40:48.020215 systemd[1]: session-19.scope: Consumed 2.864s CPU time, 14.1M memory peak.
Apr 20 17:40:48.095625 systemd-logind[1622]: Session 19 logged out. Waiting for processes to exit.
Apr 20 17:40:48.207989 systemd-logind[1622]: Removed session 19.
Apr 20 17:40:52.762213 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:59540.service - OpenSSH per-connection server daemon (10.0.0.1:59540).
Apr 20 17:40:54.013830 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 59540 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:40:54.097344 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:40:54.162726 systemd-logind[1622]: New session '20' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:40:54.184014 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 20 17:40:55.283564 sshd[5750]: Connection closed by 10.0.0.1 port 59540
Apr 20 17:40:55.291368 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
Apr 20 17:40:55.400322 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:59540.service: Deactivated successfully.
Apr 20 17:40:55.460936 systemd[1]: session-20.scope: Deactivated successfully.
Apr 20 17:40:55.467813 systemd-logind[1622]: Session 20 logged out. Waiting for processes to exit.
Apr 20 17:40:55.478383 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142).
Apr 20 17:40:55.480444 systemd-logind[1622]: Removed session 20.
Apr 20 17:40:56.042655 sshd[5769]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:40:56.116226 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:40:56.165037 systemd-logind[1622]: New session '21' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:40:56.174932 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 20 17:40:59.856365 sshd[5783]: Connection closed by 10.0.0.1 port 33142
Apr 20 17:40:59.868915 sshd-session[5769]: pam_unix(sshd:session): session closed for user core
Apr 20 17:41:00.137022 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:33142.service: Deactivated successfully.
Apr 20 17:41:00.241700 systemd[1]: session-21.scope: Deactivated successfully.
Apr 20 17:41:00.245417 systemd[1]: session-21.scope: Consumed 1.804s CPU time, 22.4M memory peak.
Apr 20 17:41:00.295799 systemd-logind[1622]: Session 21 logged out. Waiting for processes to exit.
Apr 20 17:41:00.440294 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:33158.service - OpenSSH per-connection server daemon (10.0.0.1:33158).
Apr 20 17:41:00.446181 systemd-logind[1622]: Removed session 21.
Apr 20 17:41:00.988100 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:41:01.143972 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:41:01.189174 systemd-logind[1622]: New session '22' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:41:01.283508 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 20 17:41:06.436005 kubelet[2921]: E0420 17:41:06.433666 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.922s"
Apr 20 17:41:07.282131 sshd[5818]: Connection closed by 10.0.0.1 port 33158
Apr 20 17:41:07.291767 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Apr 20 17:41:07.400448 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:33158.service: Deactivated successfully.
Apr 20 17:41:07.487520 systemd[1]: session-22.scope: Deactivated successfully.
Apr 20 17:41:07.488376 systemd[1]: session-22.scope: Consumed 1.940s CPU time, 15.8M memory peak.
Apr 20 17:41:07.515764 systemd-logind[1622]: Session 22 logged out. Waiting for processes to exit.
Apr 20 17:41:07.602806 systemd-logind[1622]: Removed session 22.
Apr 20 17:41:12.640039 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:48660.service - OpenSSH per-connection server daemon (10.0.0.1:48660).
Apr 20 17:41:13.383087 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 48660 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:41:13.399819 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:41:13.692761 systemd-logind[1622]: New session '23' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:41:13.771561 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 20 17:41:17.718721 sshd[5879]: Connection closed by 10.0.0.1 port 48660
Apr 20 17:41:17.754001 sshd-session[5864]: pam_unix(sshd:session): session closed for user core
Apr 20 17:41:17.816365 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:48660.service: Deactivated successfully.
Apr 20 17:41:17.954811 systemd[1]: session-23.scope: Deactivated successfully.
Apr 20 17:41:17.984318 systemd[1]: session-23.scope: Consumed 1.446s CPU time, 14.9M memory peak.
Apr 20 17:41:17.994292 systemd-logind[1622]: Session 23 logged out. Waiting for processes to exit.
Apr 20 17:41:18.003394 systemd-logind[1622]: Removed session 23.
Apr 20 17:41:19.658952 kubelet[2921]: E0420 17:41:19.618206 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:23.191084 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:33244.service - OpenSSH per-connection server daemon (10.0.0.1:33244).
Apr 20 17:41:25.477218 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 33244 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:41:25.514638 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:41:25.834890 systemd-logind[1622]: New session '24' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:41:26.014141 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 20 17:41:27.529312 kubelet[2921]: E0420 17:41:27.527283 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.086s"
Apr 20 17:41:27.840157 kubelet[2921]: E0420 17:41:27.816260 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:34.602016 kubelet[2921]: E0420 17:41:34.588782 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:34.829560 sshd[5937]: Connection closed by 10.0.0.1 port 33244
Apr 20 17:41:34.832848 sshd-session[5918]: pam_unix(sshd:session): session closed for user core
Apr 20 17:41:35.013215 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:33244.service: Deactivated successfully.
Apr 20 17:41:35.178954 systemd[1]: session-24.scope: Deactivated successfully.
Apr 20 17:41:35.179772 systemd[1]: session-24.scope: Consumed 2.568s CPU time, 16.4M memory peak.
Apr 20 17:41:35.192073 systemd-logind[1622]: Session 24 logged out. Waiting for processes to exit.
Apr 20 17:41:35.242271 systemd-logind[1622]: Removed session 24.
Apr 20 17:41:39.929902 systemd[1]: Started sshd@23-10.0.0.107:22-10.0.0.1:33340.service - OpenSSH per-connection server daemon (10.0.0.1:33340).
Apr 20 17:41:40.679105 kubelet[2921]: E0420 17:41:40.674224 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:42.959635 sshd[5989]: Accepted publickey for core from 10.0.0.1 port 33340 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:41:42.973551 sshd-session[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:41:43.135389 systemd-logind[1622]: New session '25' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:41:43.592923 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 20 17:41:44.786262 kubelet[2921]: E0420 17:41:44.781076 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:50.412214 sshd[6007]: Connection closed by 10.0.0.1 port 33340
Apr 20 17:41:50.417193 sshd-session[5989]: pam_unix(sshd:session): session closed for user core
Apr 20 17:41:50.589256 systemd[1]: sshd@23-7-10.0.0.107:22-10.0.0.1:33340.service: Deactivated successfully.
Apr 20 17:41:50.904347 systemd[1]: session-25.scope: Deactivated successfully.
Apr 20 17:41:50.910694 systemd[1]: session-25.scope: Consumed 2.907s CPU time, 14.2M memory peak.
Apr 20 17:41:51.084181 systemd-logind[1622]: Session 25 logged out. Waiting for processes to exit.
Apr 20 17:41:51.197784 systemd-logind[1622]: Removed session 25.
Apr 20 17:41:51.543346 kubelet[2921]: E0420 17:41:51.538827 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:52.457587 kubelet[2921]: E0420 17:41:52.456569 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:41:55.787089 systemd[1]: Started sshd@24-8207-10.0.0.107:22-10.0.0.1:35094.service - OpenSSH per-connection server daemon (10.0.0.1:35094).
Apr 20 17:41:58.549362 sshd[6046]: Accepted publickey for core from 10.0.0.1 port 35094 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:41:58.672948 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:41:58.733801 systemd-logind[1622]: New session '26' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:41:58.825252 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 20 17:42:07.439333 sshd[6065]: Connection closed by 10.0.0.1 port 35094
Apr 20 17:42:07.559163 sshd-session[6046]: pam_unix(sshd:session): session closed for user core
Apr 20 17:42:07.831256 systemd[1]: sshd@24-8207-10.0.0.107:22-10.0.0.1:35094.service: Deactivated successfully.
Apr 20 17:42:07.968609 systemd[1]: session-26.scope: Deactivated successfully.
Apr 20 17:42:07.989629 systemd[1]: session-26.scope: Consumed 2.390s CPU time, 14.8M memory peak.
Apr 20 17:42:08.001772 systemd-logind[1622]: Session 26 logged out. Waiting for processes to exit.
Apr 20 17:42:08.076275 systemd-logind[1622]: Removed session 26.
Apr 20 17:42:12.610201 systemd[1]: Started sshd@25-12289-10.0.0.107:22-10.0.0.1:52964.service - OpenSSH per-connection server daemon (10.0.0.1:52964).
Apr 20 17:42:14.860861 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 52964 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:42:14.917516 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:42:15.479025 systemd-logind[1622]: New session '27' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:42:15.630515 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 20 17:42:21.585037 kubelet[2921]: E0420 17:42:21.580124 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.131s"
Apr 20 17:42:24.377917 sshd[6130]: Connection closed by 10.0.0.1 port 52964
Apr 20 17:42:24.485980 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Apr 20 17:42:24.744316 systemd[1]: sshd@25-12289-10.0.0.107:22-10.0.0.1:52964.service: Deactivated successfully.
Apr 20 17:42:25.075537 systemd[1]: session-27.scope: Deactivated successfully.
Apr 20 17:42:25.138058 systemd[1]: session-27.scope: Consumed 2.162s CPU time, 15.1M memory peak.
Apr 20 17:42:25.220932 systemd-logind[1622]: Session 27 logged out. Waiting for processes to exit.
Apr 20 17:42:25.366235 systemd-logind[1622]: Removed session 27.
Apr 20 17:42:26.530366 kubelet[2921]: E0420 17:42:26.528173 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:29.895920 systemd[1]: Started sshd@26-4100-10.0.0.107:22-10.0.0.1:59682.service - OpenSSH per-connection server daemon (10.0.0.1:59682).
Apr 20 17:42:31.588937 kubelet[2921]: E0420 17:42:31.582888 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:35.671013 kubelet[2921]: E0420 17:42:35.627515 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:36.047867 sshd[6171]: Accepted publickey for core from 10.0.0.1 port 59682 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:42:35.993497 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:42:36.512592 systemd-logind[1622]: New session '28' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:42:36.747301 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 20 17:42:41.807815 systemd[1]: cri-containerd-401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258.scope: Deactivated successfully.
Apr 20 17:42:41.823179 systemd[1]: cri-containerd-401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258.scope: Consumed 34.710s CPU time, 44.3M memory peak, 4K read from disk.
Apr 20 17:42:41.867265 containerd[1658]: time="2026-04-20T17:42:41.854487513Z" level=info msg="received container exit event container_id:\"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\" id:\"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\" pid:5499 exit_status:1 exited_at:{seconds:1776706961 nanos:825345537}"
Apr 20 17:42:41.904217 kubelet[2921]: E0420 17:42:41.903773 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 17:42:44.590124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258-rootfs.mount: Deactivated successfully.
Apr 20 17:42:44.895501 kubelet[2921]: I0420 17:42:44.885891 2921 scope.go:122] "RemoveContainer" containerID="82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9"
Apr 20 17:42:44.895501 kubelet[2921]: I0420 17:42:44.894263 2921 scope.go:122] "RemoveContainer" containerID="401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258"
Apr 20 17:42:44.900702 kubelet[2921]: E0420 17:42:44.899852 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:44.900702 kubelet[2921]: E0420 17:42:44.900006 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:42:45.091834 kubelet[2921]: E0420 17:42:45.087999 2921 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 17:42:45.116176 containerd[1658]: time="2026-04-20T17:42:45.104110814Z" level=info msg="RemoveContainer for \"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\""
Apr 20 17:42:45.570281 containerd[1658]: time="2026-04-20T17:42:45.566462623Z" level=info msg="RemoveContainer for \"82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9\" returns successfully"
Apr 20 17:42:45.749901 sshd[6189]: Connection closed by 10.0.0.1 port 59682
Apr 20 17:42:45.751228 sshd-session[6171]: pam_unix(sshd:session): session closed for user core
Apr 20 17:42:45.929809 systemd[1]: sshd@26-4100-10.0.0.107:22-10.0.0.1:59682.service: Deactivated successfully.
Apr 20 17:42:45.930540 systemd[1]: sshd@26-4100-10.0.0.107:22-10.0.0.1:59682.service: Consumed 1.517s CPU time, 4.4M memory peak.
Apr 20 17:42:45.997110 systemd[1]: session-28.scope: Deactivated successfully.
Apr 20 17:42:45.997976 systemd[1]: session-28.scope: Consumed 3.743s CPU time, 15M memory peak.
Apr 20 17:42:46.008054 systemd-logind[1622]: Session 28 logged out. Waiting for processes to exit.
Apr 20 17:42:46.036599 systemd[1]: cri-containerd-1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe.scope: Deactivated successfully.
Apr 20 17:42:46.109116 containerd[1658]: time="2026-04-20T17:42:46.106095920Z" level=info msg="received container exit event container_id:\"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\" id:\"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\" pid:5557 exit_status:1 exited_at:{seconds:1776706966 nanos:79389115}"
Apr 20 17:42:46.107470 systemd[1]: cri-containerd-1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe.scope: Consumed 21.748s CPU time, 19.4M memory peak.
Apr 20 17:42:46.124991 systemd-logind[1622]: Removed session 28.
Apr 20 17:42:48.724546 kubelet[2921]: E0420 17:42:48.716711 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:48.741142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe-rootfs.mount: Deactivated successfully.
Apr 20 17:42:48.804790 kubelet[2921]: E0420 17:42:48.804460 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:49.723834 kubelet[2921]: I0420 17:42:49.712764 2921 scope.go:122] "RemoveContainer" containerID="401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258"
Apr 20 17:42:49.723834 kubelet[2921]: E0420 17:42:49.722785 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:49.723834 kubelet[2921]: E0420 17:42:49.723714 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:42:50.225932 kubelet[2921]: I0420 17:42:50.219204 2921 scope.go:122] "RemoveContainer" containerID="053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1"
Apr 20 17:42:50.232548 kubelet[2921]: I0420 17:42:50.232049 2921 scope.go:122] "RemoveContainer" containerID="1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe"
Apr 20 17:42:50.233973 kubelet[2921]: E0420 17:42:50.233917 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:50.234747 kubelet[2921]: E0420 17:42:50.234617 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:42:50.299152 containerd[1658]: time="2026-04-20T17:42:50.298547776Z" level=info msg="RemoveContainer for \"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\""
Apr 20 17:42:50.432104 containerd[1658]: time="2026-04-20T17:42:50.428799064Z" level=info msg="RemoveContainer for \"053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1\" returns successfully"
Apr 20 17:42:51.260331 systemd[1]: Started sshd@27-8208-10.0.0.107:22-10.0.0.1:56068.service - OpenSSH per-connection server daemon (10.0.0.1:56068).
Apr 20 17:42:53.527552 kubelet[2921]: E0420 17:42:53.487281 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.048s"
Apr 20 17:42:54.912686 sshd[6260]: Accepted publickey for core from 10.0.0.1 port 56068 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:42:55.085248 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:42:55.702068 systemd-logind[1622]: New session '29' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:42:56.016169 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 20 17:42:57.772761 kubelet[2921]: I0420 17:42:57.761272 2921 scope.go:122] "RemoveContainer" containerID="1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe"
Apr 20 17:42:57.942927 kubelet[2921]: E0420 17:42:57.918176 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:42:58.119572 kubelet[2921]: E0420 17:42:58.018045 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:43:01.195622 sshd[6288]: Connection closed by 10.0.0.1 port 56068
Apr 20 17:43:01.256030 sshd-session[6260]: pam_unix(sshd:session): session closed for user core
Apr 20 17:43:01.615921 systemd[1]: sshd@27-8208-10.0.0.107:22-10.0.0.1:56068.service: Deactivated successfully.
Apr 20 17:43:01.678813 systemd[1]: session-29.scope: Deactivated successfully.
Apr 20 17:43:01.679659 systemd[1]: session-29.scope: Consumed 1.757s CPU time, 16.9M memory peak.
Apr 20 17:43:01.712146 systemd-logind[1622]: Session 29 logged out. Waiting for processes to exit.
Apr 20 17:43:01.790899 systemd-logind[1622]: Removed session 29.
Apr 20 17:43:06.607830 systemd[1]: Started sshd@28-8-10.0.0.107:22-10.0.0.1:58348.service - OpenSSH per-connection server daemon (10.0.0.1:58348).
Apr 20 17:43:08.440624 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 58348 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:43:08.442119 sshd-session[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:43:09.110193 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 20 17:43:09.161190 systemd-logind[1622]: New session '30' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:43:09.606264 kubelet[2921]: E0420 17:43:09.605967 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.162s"
Apr 20 17:43:09.782754 kubelet[2921]: E0420 17:43:09.765576 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:15.599358 sshd[6351]: Connection closed by 10.0.0.1 port 58348
Apr 20 17:43:15.590887 sshd-session[6332]: pam_unix(sshd:session): session closed for user core
Apr 20 17:43:15.639284 systemd[1]: sshd@28-8-10.0.0.107:22-10.0.0.1:58348.service: Deactivated successfully.
Apr 20 17:43:15.664610 systemd[1]: session-30.scope: Deactivated successfully.
Apr 20 17:43:15.690027 systemd[1]: session-30.scope: Consumed 1.976s CPU time, 14.2M memory peak.
Apr 20 17:43:15.791106 systemd-logind[1622]: Session 30 logged out. Waiting for processes to exit.
Apr 20 17:43:15.824689 systemd-logind[1622]: Removed session 30.
Apr 20 17:43:16.663052 containerd[1658]: time="2026-04-20T17:43:16.640585009Z" level=info msg="container event discarded" container=0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e type=CONTAINER_STOPPED_EVENT
Apr 20 17:43:20.468940 containerd[1658]: time="2026-04-20T17:43:20.463917374Z" level=info msg="container event discarded" container=aa45c298c24d90714f2f5e25e656c8aefa636218c1939dbe6b54ed38fe067d34 type=CONTAINER_DELETED_EVENT
Apr 20 17:43:21.099535 systemd[1]: Started sshd@29-12290-10.0.0.107:22-10.0.0.1:56390.service - OpenSSH per-connection server daemon (10.0.0.1:56390).
Apr 20 17:43:21.465929 containerd[1658]: time="2026-04-20T17:43:21.464027781Z" level=info msg="container event discarded" container=47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a type=CONTAINER_STOPPED_EVENT
Apr 20 17:43:21.882633 kubelet[2921]: E0420 17:43:21.837264 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:23.962809 containerd[1658]: time="2026-04-20T17:43:23.954708410Z" level=info msg="container event discarded" container=068b64fc9e7bb17f4f31b6fa2b38667873a3e750fc3b308a0cff36dc9b277220 type=CONTAINER_DELETED_EVENT
Apr 20 17:43:24.164288 sshd[6391]: Accepted publickey for core from 10.0.0.1 port 56390 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:43:24.298952 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:43:24.735621 systemd-logind[1622]: New session '31' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:43:24.830072 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 20 17:43:29.716833 containerd[1658]: time="2026-04-20T17:43:29.695263520Z" level=info msg="container event discarded" container=82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9 type=CONTAINER_CREATED_EVENT
Apr 20 17:43:30.482952 containerd[1658]: time="2026-04-20T17:43:30.482144639Z" level=info msg="container event discarded" container=053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1 type=CONTAINER_CREATED_EVENT
Apr 20 17:43:31.703463 sshd[6415]: Connection closed by 10.0.0.1 port 56390
Apr 20 17:43:31.706784 sshd-session[6391]: pam_unix(sshd:session): session closed for user core
Apr 20 17:43:31.791520 systemd[1]: sshd@29-12290-10.0.0.107:22-10.0.0.1:56390.service: Deactivated successfully.
Apr 20 17:43:31.793291 systemd[1]: sshd@29-12290-10.0.0.107:22-10.0.0.1:56390.service: Consumed 1.094s CPU time, 4.1M memory peak.
Apr 20 17:43:31.949235 systemd[1]: session-31.scope: Deactivated successfully.
Apr 20 17:43:31.953005 systemd[1]: session-31.scope: Consumed 3.130s CPU time, 16.8M memory peak.
Apr 20 17:43:31.986848 systemd-logind[1622]: Session 31 logged out. Waiting for processes to exit.
Apr 20 17:43:32.041245 systemd-logind[1622]: Removed session 31.
Apr 20 17:43:35.027773 kubelet[2921]: I0420 17:43:35.023249 2921 scope.go:122] "RemoveContainer" containerID="1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe"
Apr 20 17:43:35.043905 kubelet[2921]: E0420 17:43:35.035621 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:35.128628 containerd[1658]: time="2026-04-20T17:43:35.126843435Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\" attempt:4"
Apr 20 17:43:35.827069 containerd[1658]: time="2026-04-20T17:43:35.805244366Z" level=info msg="Container b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:43:36.405902 containerd[1658]: time="2026-04-20T17:43:36.404773435Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" attempt:4 returns container id \"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\""
Apr 20 17:43:36.460165 containerd[1658]: time="2026-04-20T17:43:36.418793506Z" level=info msg="StartContainer for \"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\""
Apr 20 17:43:36.633203 containerd[1658]: time="2026-04-20T17:43:36.625941728Z" level=info msg="connecting to shim b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3
Apr 20 17:43:36.742776 containerd[1658]: time="2026-04-20T17:43:36.718203975Z" level=info msg="container event discarded" container=053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1 type=CONTAINER_STARTED_EVENT
Apr 20 17:43:36.877155 kubelet[2921]: I0420 17:43:36.871390 2921 scope.go:122] "RemoveContainer" containerID="401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258"
Apr 20 17:43:36.877155 kubelet[2921]: E0420 17:43:36.875108 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:36.907953 systemd[1]: Started sshd@30-8209-10.0.0.107:22-10.0.0.1:37388.service - OpenSSH per-connection server daemon (10.0.0.1:37388).
Apr 20 17:43:37.280370 containerd[1658]: time="2026-04-20T17:43:37.276865908Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\" attempt:4"
Apr 20 17:43:37.697615 containerd[1658]: time="2026-04-20T17:43:37.696910581Z" level=info msg="Container 1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:43:37.718902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615866593.mount: Deactivated successfully.
Apr 20 17:43:37.753538 systemd[1]: Started cri-containerd-b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe.scope - libcontainer container b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe.
Apr 20 17:43:37.867833 containerd[1658]: time="2026-04-20T17:43:37.854390733Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" attempt:4 returns container id \"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\""
Apr 20 17:43:37.918761 containerd[1658]: time="2026-04-20T17:43:37.904064267Z" level=info msg="StartContainer for \"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\""
Apr 20 17:43:38.172028 containerd[1658]: time="2026-04-20T17:43:38.162047486Z" level=info msg="connecting to shim 1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3
Apr 20 17:43:38.627054 sshd[6465]: Accepted publickey for core from 10.0.0.1 port 37388 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:43:38.682391 sshd-session[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:43:38.769095 containerd[1658]: time="2026-04-20T17:43:38.763136503Z" level=info msg="container event discarded" container=82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9 type=CONTAINER_STARTED_EVENT
Apr 20 17:43:39.065802 systemd-logind[1622]: New session '32' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:43:39.113050 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 20 17:43:39.775782 systemd[1]: Started cri-containerd-1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d.scope - libcontainer container 1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d.
Apr 20 17:43:40.466004 kubelet[2921]: E0420 17:43:40.457298 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:40.640865 containerd[1658]: time="2026-04-20T17:43:40.629193152Z" level=info msg="StartContainer for \"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\" returns successfully"
Apr 20 17:43:42.341070 kubelet[2921]: E0420 17:43:42.305557 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:43.177000 containerd[1658]: time="2026-04-20T17:43:43.171013712Z" level=info msg="StartContainer for \"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\" returns successfully"
Apr 20 17:43:44.139728 kubelet[2921]: E0420 17:43:44.120822 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:45.285286 kubelet[2921]: E0420 17:43:45.284162 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:45.641080 sshd[6508]: Connection closed by 10.0.0.1 port 37388
Apr 20 17:43:45.694163 sshd-session[6465]: pam_unix(sshd:session): session closed for user core
Apr 20 17:43:45.980799 systemd[1]: sshd@30-8209-10.0.0.107:22-10.0.0.1:37388.service: Deactivated successfully.
Apr 20 17:43:46.156146 systemd[1]: session-32.scope: Deactivated successfully.
Apr 20 17:43:46.165645 systemd[1]: session-32.scope: Consumed 1.691s CPU time, 17.7M memory peak.
Apr 20 17:43:46.179013 systemd-logind[1622]: Session 32 logged out. Waiting for processes to exit.
Apr 20 17:43:46.187852 systemd-logind[1622]: Removed session 32.
Apr 20 17:43:50.698164 kubelet[2921]: E0420 17:43:50.696379 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:50.841821 kubelet[2921]: E0420 17:43:50.840293 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:43:50.913252 systemd[1]: Started sshd@31-8210-10.0.0.107:22-10.0.0.1:40862.service - OpenSSH per-connection server daemon (10.0.0.1:40862).
Apr 20 17:43:53.512049 kubelet[2921]: E0420 17:43:53.508072 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.07s"
Apr 20 17:43:56.095152 sshd[6583]: Accepted publickey for core from 10.0.0.1 port 40862 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:43:56.237128 sshd-session[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:43:57.109867 systemd-logind[1622]: New session '33' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:43:57.234111 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 20 17:43:58.780922 kubelet[2921]: E0420 17:43:58.773350 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.239s"
Apr 20 17:44:00.588096 kubelet[2921]: E0420 17:44:00.586251 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:01.612671 kubelet[2921]: E0420 17:44:01.590894 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.146s"
Apr 20 17:44:05.143678 kubelet[2921]: E0420 17:44:05.112307 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 17:44:05.703343 kubelet[2921]: E0420 17:44:05.680292 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.237s"
Apr 20 17:44:06.107071 kubelet[2921]: E0420 17:44:06.094696 2921 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T17:43:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T17:43:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T17:43:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T17:43:55Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.107:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 17:44:06.756102 kubelet[2921]: E0420 17:44:06.755734 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:11.625092 kubelet[2921]: E0420 17:44:11.621938 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:17.038048 sshd[6600]: Connection closed by 10.0.0.1 port 40862
Apr 20 17:44:17.094115 sshd-session[6583]: pam_unix(sshd:session): session closed for user core
Apr 20 17:44:17.218226 systemd[1]: sshd@31-8210-10.0.0.107:22-10.0.0.1:40862.service: Deactivated successfully.
Apr 20 17:44:17.289647 systemd[1]: sshd@31-8210-10.0.0.107:22-10.0.0.1:40862.service: Consumed 1.658s CPU time, 4.3M memory peak.
Apr 20 17:44:17.334818 systemd[1]: session-33.scope: Deactivated successfully.
Apr 20 17:44:17.343282 systemd[1]: session-33.scope: Consumed 6.179s CPU time, 14.3M memory peak.
Apr 20 17:44:17.414251 systemd-logind[1622]: Session 33 logged out. Waiting for processes to exit.
Apr 20 17:44:17.437222 systemd-logind[1622]: Removed session 33.
Apr 20 17:44:17.491742 kubelet[2921]: E0420 17:44:17.490177 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:21.651206 kubelet[2921]: E0420 17:44:21.640665 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:22.524358 systemd[1]: Started sshd@32-8211-10.0.0.107:22-10.0.0.1:53034.service - OpenSSH per-connection server daemon (10.0.0.1:53034).
Apr 20 17:44:25.891672 sshd[6674]: Accepted publickey for core from 10.0.0.1 port 53034 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:44:26.313986 sshd-session[6674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:44:26.799552 systemd-logind[1622]: New session '34' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:44:26.943948 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 20 17:44:34.706222 sshd[6693]: Connection closed by 10.0.0.1 port 53034
Apr 20 17:44:34.800157 sshd-session[6674]: pam_unix(sshd:session): session closed for user core
Apr 20 17:44:35.070298 systemd[1]: sshd@32-8211-10.0.0.107:22-10.0.0.1:53034.service: Deactivated successfully.
Apr 20 17:44:35.113096 systemd[1]: sshd@32-8211-10.0.0.107:22-10.0.0.1:53034.service: Consumed 1.303s CPU time, 4.1M memory peak.
Apr 20 17:44:35.116132 kubelet[2921]: E0420 17:44:35.075290 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:35.299735 systemd[1]: session-34.scope: Deactivated successfully.
Apr 20 17:44:35.331156 systemd[1]: session-34.scope: Consumed 2.509s CPU time, 14.9M memory peak.
Apr 20 17:44:35.437948 systemd-logind[1622]: Session 34 logged out. Waiting for processes to exit.
Apr 20 17:44:35.520128 systemd-logind[1622]: Removed session 34.
Apr 20 17:44:38.172012 kubelet[2921]: E0420 17:44:38.168305 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:40.307267 systemd[1]: Started sshd@33-8212-10.0.0.107:22-10.0.0.1:53216.service - OpenSSH per-connection server daemon (10.0.0.1:53216).
Apr 20 17:44:41.976742 kubelet[2921]: E0420 17:44:41.976488 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.468s"
Apr 20 17:44:43.541893 kubelet[2921]: E0420 17:44:43.538794 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.068s"
Apr 20 17:44:44.622023 sshd[6739]: Accepted publickey for core from 10.0.0.1 port 53216 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:44:44.725274 sshd-session[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:44:45.410868 systemd-logind[1622]: New session '35' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:44:45.690198 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 20 17:44:50.931948 systemd[1]: cri-containerd-1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d.scope: Deactivated successfully.
Apr 20 17:44:50.982833 systemd[1]: cri-containerd-1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d.scope: Consumed 15.325s CPU time, 18M memory peak.
Apr 20 17:44:51.029211 containerd[1658]: time="2026-04-20T17:44:50.933810063Z" level=info msg="container event discarded" container=82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9 type=CONTAINER_STOPPED_EVENT
Apr 20 17:44:51.029211 containerd[1658]: time="2026-04-20T17:44:51.026039757Z" level=info msg="received container exit event container_id:\"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\" id:\"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\" pid:6517 exit_status:1 exited_at:{seconds:1776707091 nanos:19680020}"
Apr 20 17:44:53.053798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d-rootfs.mount: Deactivated successfully.
Apr 20 17:44:53.571030 containerd[1658]: time="2026-04-20T17:44:53.566158261Z" level=info msg="container event discarded" container=0076736cf1eedee29664628c2cdcb1cb6330a5f7b6aefc4ff7b6bb66058ef13e type=CONTAINER_DELETED_EVENT
Apr 20 17:44:53.656829 sshd[6761]: Connection closed by 10.0.0.1 port 53216
Apr 20 17:44:53.668814 sshd-session[6739]: pam_unix(sshd:session): session closed for user core
Apr 20 17:44:53.980899 systemd[1]: sshd@33-8212-10.0.0.107:22-10.0.0.1:53216.service: Deactivated successfully.
Apr 20 17:44:54.029982 systemd[1]: sshd@33-8212-10.0.0.107:22-10.0.0.1:53216.service: Consumed 1.288s CPU time, 4.1M memory peak.
Apr 20 17:44:54.260758 systemd[1]: session-35.scope: Deactivated successfully.
Apr 20 17:44:54.286170 systemd[1]: session-35.scope: Consumed 2.949s CPU time, 14.9M memory peak.
Apr 20 17:44:54.344235 systemd-logind[1622]: Session 35 logged out. Waiting for processes to exit.
Apr 20 17:44:54.408234 systemd-logind[1622]: Removed session 35.
Apr 20 17:44:54.777565 kubelet[2921]: I0420 17:44:54.776940 2921 scope.go:122] "RemoveContainer" containerID="401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258"
Apr 20 17:44:54.840927 kubelet[2921]: I0420 17:44:54.839152 2921 scope.go:122] "RemoveContainer" containerID="1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d"
Apr 20 17:44:54.907573 kubelet[2921]: E0420 17:44:54.907134 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:55.002164 kubelet[2921]: E0420 17:44:54.999036 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:44:55.078230 containerd[1658]: time="2026-04-20T17:44:55.061292163Z" level=info msg="RemoveContainer for \"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\""
Apr 20 17:44:55.332242 containerd[1658]: time="2026-04-20T17:44:55.329246556Z" level=info msg="RemoveContainer for \"401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258\" returns successfully"
Apr 20 17:44:59.160231 containerd[1658]: time="2026-04-20T17:44:59.155955589Z" level=info msg="container event discarded" container=053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1 type=CONTAINER_STOPPED_EVENT
Apr 20 17:44:59.170393 systemd[1]: Started sshd@34-8213-10.0.0.107:22-10.0.0.1:45796.service - OpenSSH per-connection server daemon (10.0.0.1:45796).
Apr 20 17:44:59.685762 kubelet[2921]: I0420 17:44:59.685547 2921 scope.go:122] "RemoveContainer" containerID="1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d"
Apr 20 17:44:59.695552 kubelet[2921]: E0420 17:44:59.692127 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:44:59.742104 kubelet[2921]: E0420 17:44:59.740643 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:45:00.344929 sshd[6818]: Accepted publickey for core from 10.0.0.1 port 45796 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:45:00.356487 sshd-session[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:45:00.613714 systemd-logind[1622]: New session '36' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:45:00.680915 containerd[1658]: time="2026-04-20T17:45:00.632629908Z" level=info msg="container event discarded" container=47b8d56c11d320d9712e2d23cce73f72260afb25e9ec58b43bf196c68994379a type=CONTAINER_DELETED_EVENT
Apr 20 17:45:00.724585 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 20 17:45:05.586135 kubelet[2921]: E0420 17:45:05.573323 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:45:06.040752 sshd[6822]: Connection closed by 10.0.0.1 port 45796
Apr 20 17:45:06.094005 sshd-session[6818]: pam_unix(sshd:session): session closed for user core
Apr 20 17:45:06.238182 systemd[1]: sshd@34-8213-10.0.0.107:22-10.0.0.1:45796.service: Deactivated successfully.
Apr 20 17:45:06.485044 systemd[1]: session-36.scope: Deactivated successfully.
Apr 20 17:45:06.527302 systemd[1]: session-36.scope: Consumed 2.519s CPU time, 16.2M memory peak.
Apr 20 17:45:06.623832 systemd-logind[1622]: Session 36 logged out. Waiting for processes to exit.
Apr 20 17:45:06.722753 systemd-logind[1622]: Removed session 36.
Apr 20 17:45:08.574637 kubelet[2921]: E0420 17:45:08.573034 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:45:11.306311 systemd[1]: Started sshd@35-8214-10.0.0.107:22-10.0.0.1:35406.service - OpenSSH per-connection server daemon (10.0.0.1:35406).
Apr 20 17:45:11.692318 containerd[1658]: time="2026-04-20T17:45:11.685471490Z" level=info msg="container event discarded" container=401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258 type=CONTAINER_CREATED_EVENT
Apr 20 17:45:13.670758 sshd[6869]: Accepted publickey for core from 10.0.0.1 port 35406 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:45:13.868580 sshd-session[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:45:14.325453 systemd-logind[1622]: New session '37' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:45:14.564136 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 20 17:45:17.226681 containerd[1658]: time="2026-04-20T17:45:17.225250783Z" level=info msg="container event discarded" container=1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe type=CONTAINER_CREATED_EVENT
Apr 20 17:45:17.882072 containerd[1658]: time="2026-04-20T17:45:17.881739193Z" level=info msg="container event discarded" container=401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258 type=CONTAINER_STARTED_EVENT
Apr 20 17:45:20.843897 containerd[1658]: time="2026-04-20T17:45:20.843509638Z" level=info msg="container event discarded" container=1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe type=CONTAINER_STARTED_EVENT
Apr 20 17:45:21.656769 sshd[6883]: Connection closed by 10.0.0.1 port 35406
Apr 20 17:45:21.695266 sshd-session[6869]: pam_unix(sshd:session): session closed for user core
Apr 20 17:45:22.450646 systemd[1]: sshd@35-8214-10.0.0.107:22-10.0.0.1:35406.service: Deactivated successfully.
Apr 20 17:45:22.719191 systemd[1]: session-37.scope: Deactivated successfully.
Apr 20 17:45:22.796974 systemd[1]: session-37.scope: Consumed 3.378s CPU time, 16.2M memory peak.
Apr 20 17:45:22.842458 systemd-logind[1622]: Session 37 logged out. Waiting for processes to exit.
Apr 20 17:45:22.864530 systemd-logind[1622]: Removed session 37.
Apr 20 17:45:27.386048 systemd[1]: Started sshd@36-4101-10.0.0.107:22-10.0.0.1:49414.service - OpenSSH per-connection server daemon (10.0.0.1:49414).
Apr 20 17:45:28.970242 sshd[6927]: Accepted publickey for core from 10.0.0.1 port 49414 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:45:29.139897 sshd-session[6927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:45:29.756208 systemd-logind[1622]: New session '38' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:45:29.942873 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 20 17:45:32.470991 kubelet[2921]: E0420 17:45:32.468473 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:45:33.650877 kubelet[2921]: E0420 17:45:33.642002 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:45:34.948067 sshd[6947]: Connection closed by 10.0.0.1 port 49414
Apr 20 17:45:34.976324 sshd-session[6927]: pam_unix(sshd:session): session closed for user core
Apr 20 17:45:35.426636 systemd[1]: sshd@36-4101-10.0.0.107:22-10.0.0.1:49414.service: Deactivated successfully.
Apr 20 17:45:35.503257 systemd[1]: session-38.scope: Deactivated successfully.
Apr 20 17:45:35.525950 systemd[1]: session-38.scope: Consumed 1.530s CPU time, 17.8M memory peak.
Apr 20 17:45:35.556640 systemd-logind[1622]: Session 38 logged out. Waiting for processes to exit.
Apr 20 17:45:35.584043 kubelet[2921]: E0420 17:45:35.566830 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:45:35.611140 systemd-logind[1622]: Removed session 38.
Apr 20 17:45:40.406834 systemd[1]: Started sshd@37-9-10.0.0.107:22-10.0.0.1:39532.service - OpenSSH per-connection server daemon (10.0.0.1:39532).
Apr 20 17:45:40.602290 kubelet[2921]: E0420 17:45:40.601220 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:45:41.968902 sshd[6983]: Accepted publickey for core from 10.0.0.1 port 39532 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:45:42.000117 sshd-session[6983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:45:42.535120 systemd-logind[1622]: New session '39' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:45:43.235173 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 20 17:45:43.656021 kubelet[2921]: E0420 17:45:43.650148 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.204s"
Apr 20 17:45:47.438231 sshd[7009]: Connection closed by 10.0.0.1 port 39532
Apr 20 17:45:47.469637 sshd-session[6983]: pam_unix(sshd:session): session closed for user core
Apr 20 17:45:47.614951 systemd[1]: sshd@37-9-10.0.0.107:22-10.0.0.1:39532.service: Deactivated successfully.
Apr 20 17:45:47.694266 systemd[1]: session-39.scope: Deactivated successfully.
Apr 20 17:45:47.698061 systemd[1]: session-39.scope: Consumed 1.941s CPU time, 15.8M memory peak.
Apr 20 17:45:47.709718 systemd-logind[1622]: Session 39 logged out. Waiting for processes to exit.
Apr 20 17:45:47.802245 systemd-logind[1622]: Removed session 39.
Apr 20 17:45:52.956263 systemd[1]: Started sshd@38-8215-10.0.0.107:22-10.0.0.1:41198.service - OpenSSH per-connection server daemon (10.0.0.1:41198).
Apr 20 17:45:54.424378 sshd[7049]: Accepted publickey for core from 10.0.0.1 port 41198 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:45:54.506085 sshd-session[7049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:45:55.038258 systemd-logind[1622]: New session '40' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:45:55.129481 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 20 17:45:59.060036 sshd[7064]: Connection closed by 10.0.0.1 port 41198
Apr 20 17:45:59.067322 sshd-session[7049]: pam_unix(sshd:session): session closed for user core
Apr 20 17:45:59.124011 systemd[1]: sshd@38-8215-10.0.0.107:22-10.0.0.1:41198.service: Deactivated successfully.
Apr 20 17:45:59.291153 systemd[1]: session-40.scope: Deactivated successfully.
Apr 20 17:45:59.334259 systemd[1]: session-40.scope: Consumed 1.774s CPU time, 16.2M memory peak.
Apr 20 17:45:59.413298 systemd-logind[1622]: Session 40 logged out. Waiting for processes to exit.
Apr 20 17:45:59.542179 systemd-logind[1622]: Removed session 40.
Apr 20 17:46:04.469001 systemd[1]: Started sshd@39-12291-10.0.0.107:22-10.0.0.1:34242.service - OpenSSH per-connection server daemon (10.0.0.1:34242).
Apr 20 17:46:06.893041 sshd[7104]: Accepted publickey for core from 10.0.0.1 port 34242 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:46:07.007388 sshd-session[7104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:46:07.489591 systemd-logind[1622]: New session '41' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:46:07.754038 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 20 17:46:15.606354 sshd[7111]: Connection closed by 10.0.0.1 port 34242
Apr 20 17:46:15.639305 sshd-session[7104]: pam_unix(sshd:session): session closed for user core
Apr 20 17:46:15.838212 systemd[1]: sshd@39-12291-10.0.0.107:22-10.0.0.1:34242.service: Deactivated successfully.
Apr 20 17:46:15.876634 systemd[1]: sshd@39-12291-10.0.0.107:22-10.0.0.1:34242.service: Consumed 1.015s CPU time, 4.1M memory peak.
Apr 20 17:46:16.001739 systemd[1]: session-41.scope: Deactivated successfully.
Apr 20 17:46:16.007257 systemd[1]: session-41.scope: Consumed 3.550s CPU time, 14.9M memory peak.
Apr 20 17:46:16.042308 systemd-logind[1622]: Session 41 logged out. Waiting for processes to exit.
Apr 20 17:46:16.139955 systemd-logind[1622]: Removed session 41.
Apr 20 17:46:21.100006 systemd[1]: Started sshd@40-8216-10.0.0.107:22-10.0.0.1:54552.service - OpenSSH per-connection server daemon (10.0.0.1:54552).
Apr 20 17:46:22.305472 sshd[7163]: Accepted publickey for core from 10.0.0.1 port 54552 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:46:22.342173 sshd-session[7163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:46:22.628721 systemd-logind[1622]: New session '42' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:46:22.708522 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 20 17:46:24.502744 kubelet[2921]: I0420 17:46:24.487112 2921 scope.go:122] "RemoveContainer" containerID="1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d"
Apr 20 17:46:24.502744 kubelet[2921]: E0420 17:46:24.493264 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:24.862624 containerd[1658]: time="2026-04-20T17:46:24.849160219Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\" attempt:5"
Apr 20 17:46:25.076059 containerd[1658]: time="2026-04-20T17:46:25.075689269Z" level=info msg="Container 7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:46:25.391685 containerd[1658]: time="2026-04-20T17:46:25.379753555Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" attempt:5 returns container id \"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\""
Apr 20 17:46:25.425858 containerd[1658]: time="2026-04-20T17:46:25.425292502Z" level=info msg="StartContainer for \"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\""
Apr 20 17:46:25.626940 containerd[1658]: time="2026-04-20T17:46:25.626804319Z" level=info msg="connecting to shim 7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3
Apr 20 17:46:28.187128 sshd[7173]: Connection closed by 10.0.0.1 port 54552
Apr 20 17:46:28.189342 sshd-session[7163]: pam_unix(sshd:session): session closed for user core
Apr 20 17:46:28.458061 systemd[1]: sshd@40-8216-10.0.0.107:22-10.0.0.1:54552.service: Deactivated successfully.
Apr 20 17:46:28.540628 systemd[1]: session-42.scope: Deactivated successfully.
Apr 20 17:46:28.639846 systemd[1]: session-42.scope: Consumed 1.169s CPU time, 16.5M memory peak.
Apr 20 17:46:28.741894 systemd-logind[1622]: Session 42 logged out. Waiting for processes to exit.
Apr 20 17:46:28.860230 systemd[1]: Started cri-containerd-7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae.scope - libcontainer container 7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae.
Apr 20 17:46:28.946491 systemd-logind[1622]: Removed session 42.
Apr 20 17:46:31.420832 containerd[1658]: time="2026-04-20T17:46:31.419849644Z" level=error msg="get state for 7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae" error="context deadline exceeded"
Apr 20 17:46:31.420832 containerd[1658]: time="2026-04-20T17:46:31.421748837Z" level=warning msg="unknown status" status=0
Apr 20 17:46:31.782564 kubelet[2921]: E0420 17:46:31.769317 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:32.459672 kubelet[2921]: E0420 17:46:32.458350 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:32.934284 containerd[1658]: time="2026-04-20T17:46:32.932682571Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 17:46:33.456292 systemd[1]: Started sshd@41-4102-10.0.0.107:22-10.0.0.1:35076.service - OpenSSH per-connection server daemon (10.0.0.1:35076).
Apr 20 17:46:34.269682 containerd[1658]: time="2026-04-20T17:46:34.269478603Z" level=info msg="StartContainer for \"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\" returns successfully"
Apr 20 17:46:34.736331 kubelet[2921]: E0420 17:46:34.729878 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:35.051916 sshd[7239]: Accepted publickey for core from 10.0.0.1 port 35076 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:46:35.148256 sshd-session[7239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:46:35.477542 kubelet[2921]: E0420 17:46:35.477347 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:35.499532 systemd-logind[1622]: New session '43' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:46:35.701127 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 20 17:46:37.660013 sshd[7248]: Connection closed by 10.0.0.1 port 35076
Apr 20 17:46:37.675239 sshd-session[7239]: pam_unix(sshd:session): session closed for user core
Apr 20 17:46:37.697487 systemd[1]: sshd@41-4102-10.0.0.107:22-10.0.0.1:35076.service: Deactivated successfully.
Apr 20 17:46:37.722190 systemd[1]: session-43.scope: Deactivated successfully.
Apr 20 17:46:37.737614 systemd-logind[1622]: Session 43 logged out. Waiting for processes to exit.
Apr 20 17:46:37.753742 systemd-logind[1622]: Removed session 43.
Apr 20 17:46:40.634037 kubelet[2921]: E0420 17:46:40.633129 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:42.731534 systemd[1]: Started sshd@42-12292-10.0.0.107:22-10.0.0.1:46306.service - OpenSSH per-connection server daemon (10.0.0.1:46306).
Apr 20 17:46:44.027901 sshd[7290]: Accepted publickey for core from 10.0.0.1 port 46306 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:46:44.036354 sshd-session[7290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:46:44.343681 systemd-logind[1622]: New session '44' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:46:44.487513 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 20 17:46:47.333912 sshd[7314]: Connection closed by 10.0.0.1 port 46306
Apr 20 17:46:47.345247 sshd-session[7290]: pam_unix(sshd:session): session closed for user core
Apr 20 17:46:47.497186 systemd[1]: sshd@42-12292-10.0.0.107:22-10.0.0.1:46306.service: Deactivated successfully.
Apr 20 17:46:47.639916 systemd[1]: session-44.scope: Deactivated successfully.
Apr 20 17:46:47.651663 systemd-logind[1622]: Session 44 logged out. Waiting for processes to exit.
Apr 20 17:46:47.668963 systemd-logind[1622]: Removed session 44.
Apr 20 17:46:50.493530 kubelet[2921]: E0420 17:46:50.493343 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:50.797977 kubelet[2921]: E0420 17:46:50.797204 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:52.705202 kubelet[2921]: E0420 17:46:52.699167 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:52.772356 systemd[1]: Started sshd@43-4103-10.0.0.107:22-10.0.0.1:55828.service - OpenSSH per-connection server daemon (10.0.0.1:55828).
Apr 20 17:46:53.852767 sshd[7348]: Accepted publickey for core from 10.0.0.1 port 55828 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:46:53.981813 sshd-session[7348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:46:54.248702 systemd-logind[1622]: New session '45' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:46:54.275271 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 20 17:46:58.772237 kubelet[2921]: E0420 17:46:58.770864 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:46:59.496889 sshd[7358]: Connection closed by 10.0.0.1 port 55828
Apr 20 17:46:59.477246 sshd-session[7348]: pam_unix(sshd:session): session closed for user core
Apr 20 17:46:59.559863 systemd[1]: sshd@43-4103-10.0.0.107:22-10.0.0.1:55828.service: Deactivated successfully.
Apr 20 17:46:59.614666 systemd[1]: session-45.scope: Deactivated successfully.
Apr 20 17:46:59.689302 systemd-logind[1622]: Session 45 logged out. Waiting for processes to exit.
Apr 20 17:46:59.722182 systemd-logind[1622]: Removed session 45.
Apr 20 17:47:01.704646 kubelet[2921]: E0420 17:47:01.684767 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.232s"
Apr 20 17:47:05.102058 systemd[1]: Started sshd@44-12293-10.0.0.107:22-10.0.0.1:55304.service - OpenSSH per-connection server daemon (10.0.0.1:55304).
Apr 20 17:47:06.549215 sshd[7408]: Accepted publickey for core from 10.0.0.1 port 55304 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:47:06.556025 sshd-session[7408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:47:06.717146 systemd-logind[1622]: New session '46' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:47:06.784296 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 20 17:47:08.353784 sshd[7418]: Connection closed by 10.0.0.1 port 55304
Apr 20 17:47:08.365093 sshd-session[7408]: pam_unix(sshd:session): session closed for user core
Apr 20 17:47:08.485651 systemd[1]: sshd@44-12293-10.0.0.107:22-10.0.0.1:55304.service: Deactivated successfully.
Apr 20 17:47:08.529224 systemd[1]: session-46.scope: Deactivated successfully.
Apr 20 17:47:08.574841 systemd-logind[1622]: Session 46 logged out. Waiting for processes to exit.
Apr 20 17:47:08.620370 systemd-logind[1622]: Removed session 46.
Apr 20 17:47:13.507191 systemd[1]: Started sshd@45-12294-10.0.0.107:22-10.0.0.1:37888.service - OpenSSH per-connection server daemon (10.0.0.1:37888).
Apr 20 17:47:15.058173 sshd[7454]: Accepted publickey for core from 10.0.0.1 port 37888 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:47:15.085997 sshd-session[7454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:47:15.155963 systemd-logind[1622]: New session '47' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:47:15.190379 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 20 17:47:18.239372 sshd[7458]: Connection closed by 10.0.0.1 port 37888
Apr 20 17:47:18.266370 sshd-session[7454]: pam_unix(sshd:session): session closed for user core
Apr 20 17:47:18.536262 systemd[1]: sshd@45-12294-10.0.0.107:22-10.0.0.1:37888.service: Deactivated successfully.
Apr 20 17:47:18.620361 systemd[1]: session-47.scope: Deactivated successfully.
Apr 20 17:47:18.624892 systemd-logind[1622]: Session 47 logged out. Waiting for processes to exit.
Apr 20 17:47:18.627050 systemd-logind[1622]: Removed session 47.
Apr 20 17:47:23.470311 systemd[1]: Started sshd@46-12295-10.0.0.107:22-10.0.0.1:37930.service - OpenSSH per-connection server daemon (10.0.0.1:37930).
Apr 20 17:47:24.195147 sshd[7501]: Accepted publickey for core from 10.0.0.1 port 37930 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:47:24.262660 sshd-session[7501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:47:24.349130 systemd-logind[1622]: New session '48' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:47:24.403358 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 20 17:47:27.610686 kubelet[2921]: E0420 17:47:27.604097 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.161s"
Apr 20 17:47:28.108796 sshd[7516]: Connection closed by 10.0.0.1 port 37930
Apr 20 17:47:28.116123 sshd-session[7501]: pam_unix(sshd:session): session closed for user core
Apr 20 17:47:28.355170 systemd[1]: sshd@46-12295-10.0.0.107:22-10.0.0.1:37930.service: Deactivated successfully.
Apr 20 17:47:28.804844 systemd[1]: session-48.scope: Deactivated successfully.
Apr 20 17:47:28.938731 systemd-logind[1622]: Session 48 logged out. Waiting for processes to exit.
Apr 20 17:47:29.038053 systemd-logind[1622]: Removed session 48.
Apr 20 17:47:33.386599 systemd[1]: Started sshd@47-8217-10.0.0.107:22-10.0.0.1:58252.service - OpenSSH per-connection server daemon (10.0.0.1:58252).
Apr 20 17:47:34.167828 sshd[7549]: Accepted publickey for core from 10.0.0.1 port 58252 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:47:34.207795 sshd-session[7549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:47:34.320268 systemd-logind[1622]: New session '49' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:47:34.349928 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 20 17:47:37.929621 sshd[7553]: Connection closed by 10.0.0.1 port 58252
Apr 20 17:47:37.945165 sshd-session[7549]: pam_unix(sshd:session): session closed for user core
Apr 20 17:47:38.046508 systemd[1]: sshd@47-8217-10.0.0.107:22-10.0.0.1:58252.service: Deactivated successfully.
Apr 20 17:47:38.067296 systemd[1]: session-49.scope: Deactivated successfully.
Apr 20 17:47:38.068068 systemd[1]: session-49.scope: Consumed 1.126s CPU time, 14.6M memory peak.
Apr 20 17:47:38.109706 systemd-logind[1622]: Session 49 logged out. Waiting for processes to exit.
Apr 20 17:47:38.124747 systemd[1]: Started sshd@48-12296-10.0.0.107:22-10.0.0.1:43002.service - OpenSSH per-connection server daemon (10.0.0.1:43002).
Apr 20 17:47:38.149541 systemd-logind[1622]: Removed session 49.
Apr 20 17:47:40.097066 sshd[7584]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:47:40.111853 sshd-session[7584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:47:40.194956 systemd-logind[1622]: New session '50' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:47:40.240217 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 20 17:47:43.808703 kubelet[2921]: E0420 17:47:43.796297 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:47:44.894840 containerd[1658]: time="2026-04-20T17:47:44.884844962Z" level=info msg="container event discarded" container=401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258 type=CONTAINER_STOPPED_EVENT
Apr 20 17:47:45.582600 containerd[1658]: time="2026-04-20T17:47:45.581962954Z" level=info msg="container event discarded" container=82e8679ddb69d2fa6ccb28dd249f9cc9ac6cb2f24f965358e606db9d9f5c04e9 type=CONTAINER_DELETED_EVENT
Apr 20 17:47:47.048501 sshd[7595]: Connection closed by 10.0.0.1 port 43002
Apr 20 17:47:47.039126 sshd-session[7584]: pam_unix(sshd:session): session closed for user core
Apr 20 17:47:47.603167 systemd[1]: Started sshd@49-4104-10.0.0.107:22-10.0.0.1:50498.service - OpenSSH per-connection server daemon (10.0.0.1:50498).
Apr 20 17:47:47.867819 systemd[1]: sshd@48-12296-10.0.0.107:22-10.0.0.1:43002.service: Deactivated successfully.
Apr 20 17:47:47.943375 systemd[1]: session-50.scope: Deactivated successfully.
Apr 20 17:47:48.002295 systemd[1]: session-50.scope: Consumed 1.654s CPU time, 27.7M memory peak.
Apr 20 17:47:48.216049 systemd-logind[1622]: Session 50 logged out. Waiting for processes to exit.
Apr 20 17:47:48.283205 systemd-logind[1622]: Removed session 50.
Apr 20 17:47:48.901744 containerd[1658]: time="2026-04-20T17:47:48.890810170Z" level=info msg="container event discarded" container=1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe type=CONTAINER_STOPPED_EVENT
Apr 20 17:47:50.448230 containerd[1658]: time="2026-04-20T17:47:50.447778466Z" level=info msg="container event discarded" container=053581eba8fcd29bef6872b77a855f19f570c706a57fca253deec938b7d095a1 type=CONTAINER_DELETED_EVENT
Apr 20 17:47:51.142006 sshd[7629]: Accepted publickey for core from 10.0.0.1 port 50498 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:47:51.243830 sshd-session[7629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:47:51.302365 systemd-logind[1622]: New session '51' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:47:51.365192 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 20 17:47:54.463693 kubelet[2921]: E0420 17:47:54.454247 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:47:56.474636 kubelet[2921]: E0420 17:47:56.463015 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:47:58.588196 kubelet[2921]: E0420 17:47:58.587694 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:01.658190 kubelet[2921]: E0420 17:48:01.657989 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:04.658599 kubelet[2921]: E0420 17:48:04.652922 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.108s"
Apr 20 17:48:09.480814 kubelet[2921]: E0420 17:48:09.477282 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:12.890476 systemd[1]: cri-containerd-7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae.scope: Deactivated successfully.
Apr 20 17:48:12.910841 systemd[1]: cri-containerd-7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae.scope: Consumed 16.036s CPU time, 42.5M memory peak, 4K read from disk.
Apr 20 17:48:12.996225 containerd[1658]: time="2026-04-20T17:48:12.969260816Z" level=info msg="received container exit event container_id:\"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\" id:\"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\" pid:7213 exit_status:1 exited_at:{seconds:1776707292 nanos:909787698}"
Apr 20 17:48:13.863202 systemd[1]: cri-containerd-b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe.scope: Deactivated successfully.
Apr 20 17:48:13.895372 systemd[1]: cri-containerd-b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe.scope: Consumed 56.057s CPU time, 20.2M memory peak.
Apr 20 17:48:13.982063 containerd[1658]: time="2026-04-20T17:48:13.981877278Z" level=info msg="received container exit event container_id:\"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\" id:\"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\" pid:6486 exit_status:1 exited_at:{seconds:1776707293 nanos:981097837}"
Apr 20 17:48:15.449207 kubelet[2921]: E0420 17:48:15.448915 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.002s"
Apr 20 17:48:16.399737 kubelet[2921]: E0420 17:48:16.396804 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:18.757375 kubelet[2921]: E0420 17:48:18.756907 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.251s"
Apr 20 17:48:19.743948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae-rootfs.mount: Deactivated successfully.
Apr 20 17:48:20.171230 kubelet[2921]: E0420 17:48:19.982369 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s"
Apr 20 17:48:20.681835 kubelet[2921]: E0420 17:48:20.679697 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:20.853118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe-rootfs.mount: Deactivated successfully.
Apr 20 17:48:21.353816 kubelet[2921]: I0420 17:48:21.343262 2921 scope.go:122] "RemoveContainer" containerID="1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d"
Apr 20 17:48:21.408918 kubelet[2921]: I0420 17:48:21.408066 2921 scope.go:122] "RemoveContainer" containerID="7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae"
Apr 20 17:48:21.515693 kubelet[2921]: E0420 17:48:21.513346 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:21.850746 kubelet[2921]: E0420 17:48:21.843869 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:48:21.983892 containerd[1658]: time="2026-04-20T17:48:21.971928347Z" level=info msg="RemoveContainer for \"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\""
Apr 20 17:48:22.218857 containerd[1658]: time="2026-04-20T17:48:22.218343108Z" level=info msg="RemoveContainer for \"1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d\" returns successfully"
Apr 20 17:48:23.157796 kubelet[2921]: I0420 17:48:23.154941 2921 scope.go:122] "RemoveContainer" containerID="1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe"
Apr 20 17:48:23.157796 kubelet[2921]: I0420 17:48:23.156136 2921 scope.go:122] "RemoveContainer" containerID="b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe"
Apr 20 17:48:23.157796 kubelet[2921]: E0420 17:48:23.156217 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:23.300371 kubelet[2921]: E0420 17:48:23.203931 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:48:23.529073 containerd[1658]: time="2026-04-20T17:48:23.501218318Z" level=info msg="RemoveContainer for \"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\""
Apr 20 17:48:23.813554 containerd[1658]: time="2026-04-20T17:48:23.810149596Z" level=info msg="RemoveContainer for \"1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe\" returns successfully"
Apr 20 17:48:27.588147 kubelet[2921]: I0420 17:48:27.587731 2921 scope.go:122] "RemoveContainer" containerID="b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe"
Apr 20 17:48:27.602892 kubelet[2921]: E0420 17:48:27.599949 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:27.606003 kubelet[2921]: E0420 17:48:27.605290 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:48:28.502884 sshd[7646]: Connection closed by 10.0.0.1 port 50498
Apr 20 17:48:28.533028 sshd-session[7629]: pam_unix(sshd:session): session closed for user core
Apr 20 17:48:29.047601 systemd[1]: sshd@49-4104-10.0.0.107:22-10.0.0.1:50498.service: Deactivated successfully.
Apr 20 17:48:29.710128 systemd[1]: session-51.scope: Deactivated successfully.
Apr 20 17:48:29.730311 systemd[1]: session-51.scope: Consumed 5.117s CPU time, 33M memory peak.
Apr 20 17:48:29.842631 systemd-logind[1622]: Session 51 logged out. Waiting for processes to exit.
Apr 20 17:48:29.898841 systemd-logind[1622]: Removed session 51.
Apr 20 17:48:29.899693 kubelet[2921]: I0420 17:48:29.872584 2921 scope.go:122] "RemoveContainer" containerID="7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae"
Apr 20 17:48:29.899693 kubelet[2921]: E0420 17:48:29.872804 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:29.899693 kubelet[2921]: E0420 17:48:29.872916 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:48:30.081130 systemd[1]: Started sshd@50-12297-10.0.0.107:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562).
Apr 20 17:48:31.949952 sshd[7799]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:48:31.984994 sshd-session[7799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:48:32.413845 systemd-logind[1622]: New session '52' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:48:32.557303 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 20 17:48:36.363383 containerd[1658]: time="2026-04-20T17:48:36.362993600Z" level=info msg="container event discarded" container=b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe type=CONTAINER_CREATED_EVENT
Apr 20 17:48:37.820733 containerd[1658]: time="2026-04-20T17:48:37.816903616Z" level=info msg="container event discarded" container=1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d type=CONTAINER_CREATED_EVENT
Apr 20 17:48:40.505569 containerd[1658]: time="2026-04-20T17:48:40.504969011Z" level=info msg="container event discarded" container=b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe type=CONTAINER_STARTED_EVENT
Apr 20 17:48:43.148874 containerd[1658]: time="2026-04-20T17:48:43.143779605Z" level=info msg="container event discarded" container=1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d type=CONTAINER_STARTED_EVENT
Apr 20 17:48:43.829318 sshd[7819]: Connection closed by 10.0.0.1 port 60562
Apr 20 17:48:43.838845 sshd-session[7799]: pam_unix(sshd:session): session closed for user core
Apr 20 17:48:44.084350 systemd[1]: Started sshd@51-8218-10.0.0.107:22-10.0.0.1:44416.service - OpenSSH per-connection server daemon (10.0.0.1:44416).
Apr 20 17:48:44.151570 systemd[1]: sshd@50-12297-10.0.0.107:22-10.0.0.1:60562.service: Deactivated successfully.
Apr 20 17:48:44.208173 systemd[1]: session-52.scope: Deactivated successfully.
Apr 20 17:48:44.212221 systemd[1]: session-52.scope: Consumed 1.892s CPU time, 27.8M memory peak.
Apr 20 17:48:44.295992 systemd-logind[1622]: Session 52 logged out. Waiting for processes to exit.
Apr 20 17:48:44.363031 systemd-logind[1622]: Removed session 52.
Apr 20 17:48:47.719948 sshd[7858]: Accepted publickey for core from 10.0.0.1 port 44416 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:48:47.689171 sshd-session[7858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:48:47.906726 systemd-logind[1622]: New session '53' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:48:48.024498 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 20 17:48:51.035827 sshd[7878]: Connection closed by 10.0.0.1 port 44416
Apr 20 17:48:51.165854 sshd-session[7858]: pam_unix(sshd:session): session closed for user core
Apr 20 17:48:51.369085 systemd[1]: sshd@51-8218-10.0.0.107:22-10.0.0.1:44416.service: Deactivated successfully.
Apr 20 17:48:51.641024 systemd[1]: session-53.scope: Deactivated successfully.
Apr 20 17:48:51.738345 systemd-logind[1622]: Session 53 logged out. Waiting for processes to exit.
Apr 20 17:48:51.960368 systemd-logind[1622]: Removed session 53.
Apr 20 17:48:56.400884 systemd[1]: Started sshd@52-12298-10.0.0.107:22-10.0.0.1:54172.service - OpenSSH per-connection server daemon (10.0.0.1:54172).
Apr 20 17:48:57.598110 sshd[7919]: Accepted publickey for core from 10.0.0.1 port 54172 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:48:57.611477 kubelet[2921]: E0420 17:48:57.605607 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:48:57.963277 sshd-session[7919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:48:58.708973 systemd-logind[1622]: New session '54' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:48:58.781137 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 20 17:49:00.753252 sshd[7929]: Connection closed by 10.0.0.1 port 54172
Apr 20 17:49:00.852647 sshd-session[7919]: pam_unix(sshd:session): session closed for user core
Apr 20 17:49:01.173477 systemd[1]: sshd@52-12298-10.0.0.107:22-10.0.0.1:54172.service: Deactivated successfully.
Apr 20 17:49:01.295495 systemd[1]: session-54.scope: Deactivated successfully.
Apr 20 17:49:01.588035 systemd-logind[1622]: Session 54 logged out. Waiting for processes to exit.
Apr 20 17:49:01.845366 systemd-logind[1622]: Removed session 54.
Apr 20 17:49:05.031292 kubelet[2921]: E0420 17:49:05.025245 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:06.585004 systemd[1]: Started sshd@53-4105-10.0.0.107:22-10.0.0.1:35458.service - OpenSSH per-connection server daemon (10.0.0.1:35458).
Apr 20 17:49:08.000594 kubelet[2921]: E0420 17:49:08.000503 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.543s"
Apr 20 17:49:08.675343 kubelet[2921]: E0420 17:49:08.667238 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:10.375725 sshd[7960]: Accepted publickey for core from 10.0.0.1 port 35458 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:49:10.391375 sshd-session[7960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:49:10.634961 systemd-logind[1622]: New session '55' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:49:10.733145 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 20 17:49:15.577615 sshd[7983]: Connection closed by 10.0.0.1 port 35458
Apr 20 17:49:15.612346 sshd-session[7960]: pam_unix(sshd:session): session closed for user core
Apr 20 17:49:16.031154 systemd[1]: sshd@53-4105-10.0.0.107:22-10.0.0.1:35458.service: Deactivated successfully.
Apr 20 17:49:16.254573 systemd[1]: session-55.scope: Deactivated successfully.
Apr 20 17:49:16.292883 systemd[1]: session-55.scope: Consumed 1.338s CPU time, 15.8M memory peak.
Apr 20 17:49:16.322815 systemd-logind[1622]: Session 55 logged out. Waiting for processes to exit.
Apr 20 17:49:16.346582 systemd-logind[1622]: Removed session 55.
Apr 20 17:49:18.515810 kubelet[2921]: E0420 17:49:18.515594 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:21.548911 kubelet[2921]: E0420 17:49:21.537147 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.006s"
Apr 20 17:49:21.941302 systemd[1]: Started sshd@54-12299-10.0.0.107:22-10.0.0.1:53032.service - OpenSSH per-connection server daemon (10.0.0.1:53032).
Apr 20 17:49:24.346986 kubelet[2921]: E0420 17:49:24.336222 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.661s"
Apr 20 17:49:25.838335 kubelet[2921]: E0420 17:49:25.830273 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.38s"
Apr 20 17:49:26.306890 sshd[8023]: Accepted publickey for core from 10.0.0.1 port 53032 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:49:26.478978 sshd-session[8023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:49:27.023076 systemd-logind[1622]: New session '56' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:49:27.268682 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 20 17:49:32.637300 kubelet[2921]: E0420 17:49:32.636346 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.178s"
Apr 20 17:49:39.292809 sshd[8044]: Connection closed by 10.0.0.1 port 53032
Apr 20 17:49:39.308800 sshd-session[8023]: pam_unix(sshd:session): session closed for user core
Apr 20 17:49:39.488667 systemd[1]: sshd@54-12299-10.0.0.107:22-10.0.0.1:53032.service: Deactivated successfully.
Apr 20 17:49:39.735122 systemd[1]: session-56.scope: Deactivated successfully.
Apr 20 17:49:39.737739 systemd[1]: session-56.scope: Consumed 1.675s CPU time, 18.1M memory peak.
Apr 20 17:49:40.054283 systemd-logind[1622]: Session 56 logged out. Waiting for processes to exit.
Apr 20 17:49:40.373730 systemd-logind[1622]: Removed session 56.
Apr 20 17:49:42.866937 kubelet[2921]: I0420 17:49:42.866851 2921 scope.go:122] "RemoveContainer" containerID="b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe"
Apr 20 17:49:42.884246 kubelet[2921]: E0420 17:49:42.884220 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:43.163638 containerd[1658]: time="2026-04-20T17:49:43.161813677Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\" attempt:5"
Apr 20 17:49:43.379554 containerd[1658]: time="2026-04-20T17:49:43.378150713Z" level=info msg="Container 7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:49:43.493685 containerd[1658]: time="2026-04-20T17:49:43.492745684Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" attempt:5 returns container id \"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\""
Apr 20 17:49:43.506267 containerd[1658]: time="2026-04-20T17:49:43.499035855Z" level=info msg="StartContainer for \"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\""
Apr 20 17:49:43.681343 containerd[1658]: time="2026-04-20T17:49:43.574174370Z" level=info msg="connecting to shim 7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3
Apr 20 17:49:44.694655 kubelet[2921]: I0420 17:49:44.685178 2921 scope.go:122] "RemoveContainer" containerID="7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae"
Apr 20 17:49:44.694655 kubelet[2921]: E0420 17:49:44.685272 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:44.742900 kubelet[2921]: E0420 17:49:44.685394 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:49:44.770235 kubelet[2921]: E0420 17:49:44.769820 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:45.565326 systemd[1]: Started sshd@55-10-10.0.0.107:22-10.0.0.1:42526.service - OpenSSH per-connection server daemon (10.0.0.1:42526).
Apr 20 17:49:46.859989 systemd[1]: Started cri-containerd-7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412.scope - libcontainer container 7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412.
Apr 20 17:49:49.091515 sshd[8110]: Accepted publickey for core from 10.0.0.1 port 42526 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:49:49.232297 sshd-session[8110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:49:49.394084 containerd[1658]: time="2026-04-20T17:49:49.390925384Z" level=error msg="get state for 7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412" error="context deadline exceeded"
Apr 20 17:49:49.394084 containerd[1658]: time="2026-04-20T17:49:49.391227424Z" level=warning msg="unknown status" status=0
Apr 20 17:49:49.616088 systemd-logind[1622]: New session '57' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:49:49.652693 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 20 17:49:49.736233 containerd[1658]: time="2026-04-20T17:49:49.720094224Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 17:49:52.266847 containerd[1658]: time="2026-04-20T17:49:52.262161029Z" level=info msg="StartContainer for \"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\" returns successfully"
Apr 20 17:49:52.674132 kubelet[2921]: E0420 17:49:52.639381 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.182s"
Apr 20 17:49:53.215766 containerd[1658]: time="2026-04-20T17:49:53.200631530Z" level=info msg="container event discarded" container=1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d type=CONTAINER_STOPPED_EVENT
Apr 20 17:49:53.568225 kubelet[2921]: E0420 17:49:53.556262 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:49:55.344619 containerd[1658]: time="2026-04-20T17:49:55.341116088Z" level=info msg="container event discarded" container=401fb44f258095ee15f094c2623d4954ec478f180486d84334e9c70d9c03e258 type=CONTAINER_DELETED_EVENT
Apr 20 17:49:55.447351 kubelet[2921]: E0420 17:49:55.440998 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:00.972031 sshd[8133]: Connection closed by 10.0.0.1 port 42526
Apr 20 17:50:01.020556 kubelet[2921]: E0420 17:50:00.979317 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:00.984323 sshd-session[8110]: pam_unix(sshd:session): session closed for user core
Apr 20 17:50:01.437746 systemd[1]: sshd@55-10-10.0.0.107:22-10.0.0.1:42526.service: Deactivated successfully.
Apr 20 17:50:01.872779 systemd[1]: session-57.scope: Deactivated successfully.
Apr 20 17:50:01.886783 systemd[1]: session-57.scope: Consumed 4.291s CPU time, 15.7M memory peak.
Apr 20 17:50:01.999085 systemd-logind[1622]: Session 57 logged out. Waiting for processes to exit.
Apr 20 17:50:02.083850 systemd-logind[1622]: Removed session 57.
Apr 20 17:50:06.364925 kubelet[2921]: E0420 17:50:06.364516 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.921s"
Apr 20 17:50:06.459342 systemd[1]: Started sshd@56-11-10.0.0.107:22-10.0.0.1:42502.service - OpenSSH per-connection server daemon (10.0.0.1:42502).
Apr 20 17:50:09.778397 sshd[8194]: Accepted publickey for core from 10.0.0.1 port 42502 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:50:09.892327 sshd-session[8194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:50:10.096331 systemd-logind[1622]: New session '58' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:50:10.229187 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 20 17:50:12.608201 kubelet[2921]: E0420 17:50:12.603360 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:17.253351 sshd[8215]: Connection closed by 10.0.0.1 port 42502
Apr 20 17:50:17.335894 sshd-session[8194]: pam_unix(sshd:session): session closed for user core
Apr 20 17:50:17.684052 systemd[1]: sshd@56-11-10.0.0.107:22-10.0.0.1:42502.service: Deactivated successfully.
Apr 20 17:50:18.045710 systemd[1]: session-58.scope: Deactivated successfully.
Apr 20 17:50:18.105793 systemd[1]: session-58.scope: Consumed 2.608s CPU time, 15.5M memory peak.
Apr 20 17:50:18.194272 systemd-logind[1622]: Session 58 logged out. Waiting for processes to exit.
Apr 20 17:50:18.243527 systemd-logind[1622]: Removed session 58.
Apr 20 17:50:19.765849 kubelet[2921]: E0420 17:50:19.711668 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.185s"
Apr 20 17:50:21.680053 kubelet[2921]: E0420 17:50:21.679799 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.193s"
Apr 20 17:50:23.089892 systemd[1]: Started sshd@57-12-10.0.0.107:22-10.0.0.1:54370.service - OpenSSH per-connection server daemon (10.0.0.1:54370).
Apr 20 17:50:23.414835 kubelet[2921]: E0420 17:50:23.414318 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:24.296921 kubelet[2921]: E0420 17:50:24.288754 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.799s"
Apr 20 17:50:26.267642 kubelet[2921]: E0420 17:50:26.260024 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.821s"
Apr 20 17:50:26.897667 kubelet[2921]: E0420 17:50:26.893072 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:26.940932 sshd[8255]: Accepted publickey for core from 10.0.0.1 port 54370 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:50:27.022051 sshd-session[8255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:50:27.603834 systemd-logind[1622]: New session '59' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:50:27.703629 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 20 17:50:30.515122 kubelet[2921]: E0420 17:50:30.445508 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.904s"
Apr 20 17:50:32.347147 kubelet[2921]: E0420 17:50:32.334522 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.71s"
Apr 20 17:50:35.606155 kubelet[2921]: E0420 17:50:35.548275 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.096s"
Apr 20 17:50:39.420731 kubelet[2921]: E0420 17:50:39.416347 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:43.114297 sshd[8275]: Connection closed by 10.0.0.1 port 54370
Apr 20 17:50:43.125063 sshd-session[8255]: pam_unix(sshd:session): session closed for user core
Apr 20 17:50:43.402028 systemd[1]: sshd@57-12-10.0.0.107:22-10.0.0.1:54370.service: Deactivated successfully.
Apr 20 17:50:43.612018 systemd[1]: session-59.scope: Deactivated successfully.
Apr 20 17:50:43.639154 systemd[1]: session-59.scope: Consumed 5.052s CPU time, 15.7M memory peak.
Apr 20 17:50:43.749965 systemd-logind[1622]: Session 59 logged out. Waiting for processes to exit.
Apr 20 17:50:43.810510 systemd-logind[1622]: Removed session 59.
Apr 20 17:50:44.466141 kubelet[2921]: E0420 17:50:44.465483 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:50:45.735160 kubelet[2921]: E0420 17:50:45.733391 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 17:50:48.879837 systemd[1]: Started sshd@58-8219-10.0.0.107:22-10.0.0.1:47588.service - OpenSSH per-connection server daemon (10.0.0.1:47588).
Apr 20 17:50:50.372743 sshd[8341]: Accepted publickey for core from 10.0.0.1 port 47588 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:50:50.581830 sshd-session[8341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:50:51.213073 systemd-logind[1622]: New session '60' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:50:51.303184 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 20 17:50:53.542320 kubelet[2921]: E0420 17:50:53.541981 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.088s"
Apr 20 17:50:55.733092 kubelet[2921]: E0420 17:50:55.732159 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.281s"
Apr 20 17:50:55.782449 kubelet[2921]: E0420 17:50:55.763537 2921 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 17:50:55.786143 kubelet[2921]: E0420 17:50:55.785859 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:01.525042 kubelet[2921]: E0420 17:51:01.521097 2921 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 17:51:01.543970 kubelet[2921]: E0420 17:51:01.543908 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:02.581081 sshd[8346]: Connection closed by 10.0.0.1 port 47588
Apr 20 17:51:02.623031 sshd-session[8341]: pam_unix(sshd:session): session closed for user core
Apr 20 17:51:02.860161 systemd[1]: sshd@58-8219-10.0.0.107:22-10.0.0.1:47588.service: Deactivated successfully.
Apr 20 17:51:03.103905 systemd[1]: session-60.scope: Deactivated successfully.
Apr 20 17:51:03.124778 systemd[1]: session-60.scope: Consumed 2.185s CPU time, 18M memory peak.
Apr 20 17:51:03.172940 systemd-logind[1622]: Session 60 logged out. Waiting for processes to exit.
Apr 20 17:51:03.187072 systemd-logind[1622]: Removed session 60.
Apr 20 17:51:08.530748 systemd[1]: Started sshd@59-8220-10.0.0.107:22-10.0.0.1:53722.service - OpenSSH per-connection server daemon (10.0.0.1:53722).
Apr 20 17:51:10.395282 kubelet[2921]: E0420 17:51:10.393304 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.892s"
Apr 20 17:51:12.191635 kubelet[2921]: E0420 17:51:12.191123 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.744s"
Apr 20 17:51:12.436080 kubelet[2921]: I0420 17:51:12.434946 2921 scope.go:122] "RemoveContainer" containerID="7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae"
Apr 20 17:51:12.465457 sshd[8405]: Accepted publickey for core from 10.0.0.1 port 53722 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:51:12.527209 kubelet[2921]: E0420 17:51:12.478017 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:12.658144 sshd-session[8405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:51:13.233012 containerd[1658]: time="2026-04-20T17:51:13.230270602Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for container name:\"kube-controller-manager\" attempt:6"
Apr 20 17:51:13.419225 systemd-logind[1622]: New session '61' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:51:13.621935 systemd[1]: Started session-61.scope - Session 61 of User core.
Apr 20 17:51:13.699363 kubelet[2921]: E0420 17:51:13.696255 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.218s"
Apr 20 17:51:14.128202 containerd[1658]: time="2026-04-20T17:51:14.127900726Z" level=info msg="Container 87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:51:14.409805 containerd[1658]: time="2026-04-20T17:51:14.403272696Z" level=info msg="CreateContainer within sandbox \"1917323574b960de8ba00d74f9909f2e14c4f8be4dce6bcb901281008814d6d2\" for name:\"kube-controller-manager\" attempt:6 returns container id \"87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195\""
Apr 20 17:51:14.419310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537540145.mount: Deactivated successfully.
Apr 20 17:51:14.889291 containerd[1658]: time="2026-04-20T17:51:14.842679838Z" level=info msg="StartContainer for \"87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195\""
Apr 20 17:51:14.950205 containerd[1658]: time="2026-04-20T17:51:14.950022711Z" level=info msg="connecting to shim 87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195" address="unix:///run/containerd/s/dec39bf7199c55115e1e022cb5ae3c147a590841eafc57aa9ed18dfe18514e73" protocol=ttrpc version=3
Apr 20 17:51:16.035959 systemd[1]: Started cri-containerd-87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195.scope - libcontainer container 87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195.
Apr 20 17:51:18.148218 containerd[1658]: time="2026-04-20T17:51:18.140316704Z" level=error msg="get state for 87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195" error="context deadline exceeded"
Apr 20 17:51:18.148218 containerd[1658]: time="2026-04-20T17:51:18.145763174Z" level=warning msg="unknown status" status=0
Apr 20 17:51:18.212593 containerd[1658]: time="2026-04-20T17:51:18.208006628Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 17:51:19.073992 containerd[1658]: time="2026-04-20T17:51:19.073157365Z" level=info msg="StartContainer for \"87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195\" returns successfully"
Apr 20 17:51:20.495603 kubelet[2921]: E0420 17:51:20.494285 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:21.516915 kubelet[2921]: E0420 17:51:21.515699 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:21.839937 sshd[8427]: Connection closed by 10.0.0.1 port 53722
Apr 20 17:51:21.864291 sshd-session[8405]: pam_unix(sshd:session): session closed for user core
Apr 20 17:51:21.954779 systemd[1]: sshd@59-8220-10.0.0.107:22-10.0.0.1:53722.service: Deactivated successfully.
Apr 20 17:51:21.965065 systemd[1]: sshd@59-8220-10.0.0.107:22-10.0.0.1:53722.service: Consumed 1.313s CPU time, 4.1M memory peak.
Apr 20 17:51:22.000100 systemd[1]: session-61.scope: Deactivated successfully.
Apr 20 17:51:22.001328 systemd[1]: session-61.scope: Consumed 2.499s CPU time, 15.9M memory peak.
Apr 20 17:51:22.037530 systemd-logind[1622]: Session 61 logged out. Waiting for processes to exit.
Apr 20 17:51:22.069511 systemd-logind[1622]: Removed session 61.
Apr 20 17:51:22.651764 systemd[1]: cri-containerd-7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412.scope: Deactivated successfully.
Apr 20 17:51:22.694495 containerd[1658]: time="2026-04-20T17:51:22.694182252Z" level=info msg="received container exit event container_id:\"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\" id:\"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\" pid:8120 exit_status:1 exited_at:{seconds:1776707482 nanos:667316663}"
Apr 20 17:51:22.702025 systemd[1]: cri-containerd-7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412.scope: Consumed 23.256s CPU time, 18.8M memory peak.
Apr 20 17:51:23.249561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412-rootfs.mount: Deactivated successfully.
Apr 20 17:51:23.940792 kubelet[2921]: I0420 17:51:23.938987 2921 scope.go:122] "RemoveContainer" containerID="b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe"
Apr 20 17:51:23.975327 kubelet[2921]: I0420 17:51:23.970138 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412"
Apr 20 17:51:23.992156 kubelet[2921]: E0420 17:51:23.975940 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:23.992156 kubelet[2921]: E0420 17:51:23.976331 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:51:24.005981 containerd[1658]: time="2026-04-20T17:51:24.001614306Z" level=info msg="RemoveContainer for \"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\""
Apr 20 17:51:24.069655 containerd[1658]: time="2026-04-20T17:51:24.067535181Z" level=info msg="RemoveContainer for \"b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe\" returns successfully"
Apr 20 17:51:25.318575 containerd[1658]: time="2026-04-20T17:51:25.256756609Z" level=info msg="container event discarded" container=7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae type=CONTAINER_CREATED_EVENT
Apr 20 17:51:25.752130 kubelet[2921]: I0420 17:51:25.741376 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412"
Apr 20 17:51:25.752130 kubelet[2921]: E0420 17:51:25.750370 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:25.752130 kubelet[2921]: E0420 17:51:25.750909 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:51:27.191090 systemd[1]: Started sshd@60-4106-10.0.0.107:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452).
Apr 20 17:51:27.623505 kubelet[2921]: I0420 17:51:27.623321 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412"
Apr 20 17:51:27.653229 kubelet[2921]: E0420 17:51:27.652995 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:27.655843 kubelet[2921]: E0420 17:51:27.655664 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:51:29.387729 sshd[8523]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:51:29.422889 sshd-session[8523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:51:29.611289 systemd-logind[1622]: New session '62' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:51:29.655215 systemd[1]: Started session-62.scope - Session 62 of User core.
Apr 20 17:51:30.929445 kubelet[2921]: E0420 17:51:30.928444 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:31.529237 kubelet[2921]: E0420 17:51:31.528132 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:31.721220 sshd[8540]: Connection closed by 10.0.0.1 port 37452
Apr 20 17:51:31.731705 sshd-session[8523]: pam_unix(sshd:session): session closed for user core
Apr 20 17:51:31.806683 systemd[1]: sshd@60-4106-10.0.0.107:22-10.0.0.1:37452.service: Deactivated successfully.
Apr 20 17:51:31.928226 systemd[1]: session-62.scope: Deactivated successfully.
Apr 20 17:51:31.946887 systemd-logind[1622]: Session 62 logged out. Waiting for processes to exit.
Apr 20 17:51:31.951322 systemd-logind[1622]: Removed session 62.
Apr 20 17:51:32.454350 kubelet[2921]: E0420 17:51:32.454213 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:34.254366 containerd[1658]: time="2026-04-20T17:51:34.253619275Z" level=info msg="container event discarded" container=7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae type=CONTAINER_STARTED_EVENT
Apr 20 17:51:37.547377 systemd[1]: Started sshd@61-8221-10.0.0.107:22-10.0.0.1:55348.service - OpenSSH per-connection server daemon (10.0.0.1:55348).
Apr 20 17:51:38.878512 sshd[8574]: Accepted publickey for core from 10.0.0.1 port 55348 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:51:38.918247 sshd-session[8574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:51:38.975257 systemd-logind[1622]: New session '63' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:51:39.082516 systemd[1]: Started session-63.scope - Session 63 of User core.
Apr 20 17:51:44.006933 sshd[8600]: Connection closed by 10.0.0.1 port 55348
Apr 20 17:51:44.029904 sshd-session[8574]: pam_unix(sshd:session): session closed for user core
Apr 20 17:51:44.232340 systemd[1]: sshd@61-8221-10.0.0.107:22-10.0.0.1:55348.service: Deactivated successfully.
Apr 20 17:51:44.302805 systemd[1]: session-63.scope: Deactivated successfully.
Apr 20 17:51:44.307867 systemd[1]: session-63.scope: Consumed 1.196s CPU time, 16.1M memory peak.
Apr 20 17:51:44.368393 systemd-logind[1622]: Session 63 logged out. Waiting for processes to exit.
Apr 20 17:51:44.371235 systemd-logind[1622]: Removed session 63.
Apr 20 17:51:49.191376 systemd[1]: Started sshd@62-12300-10.0.0.107:22-10.0.0.1:47060.service - OpenSSH per-connection server daemon (10.0.0.1:47060).
Apr 20 17:51:50.681573 sshd[8639]: Accepted publickey for core from 10.0.0.1 port 47060 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:51:50.680578 sshd-session[8639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:51:50.756105 systemd-logind[1622]: New session '64' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:51:50.771630 systemd[1]: Started session-64.scope - Session 64 of User core.
Apr 20 17:51:53.547608 sshd[8661]: Connection closed by 10.0.0.1 port 47060
Apr 20 17:51:53.590830 sshd-session[8639]: pam_unix(sshd:session): session closed for user core
Apr 20 17:51:53.889618 systemd[1]: sshd@62-12300-10.0.0.107:22-10.0.0.1:47060.service: Deactivated successfully.
Apr 20 17:51:54.137322 systemd[1]: session-64.scope: Deactivated successfully.
Apr 20 17:51:54.222253 systemd-logind[1622]: Session 64 logged out. Waiting for processes to exit.
Apr 20 17:51:54.293364 systemd-logind[1622]: Removed session 64.
Apr 20 17:51:56.467509 kubelet[2921]: E0420 17:51:56.466393 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:51:58.934947 systemd[1]: Started sshd@63-4107-10.0.0.107:22-10.0.0.1:40226.service - OpenSSH per-connection server daemon (10.0.0.1:40226).
Apr 20 17:52:00.453723 sshd[8696]: Accepted publickey for core from 10.0.0.1 port 40226 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:52:00.464268 sshd-session[8696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:52:00.574623 systemd-logind[1622]: New session '65' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:52:00.607848 systemd[1]: Started session-65.scope - Session 65 of User core.
Apr 20 17:52:04.977169 sshd[8706]: Connection closed by 10.0.0.1 port 40226
Apr 20 17:52:04.999318 sshd-session[8696]: pam_unix(sshd:session): session closed for user core
Apr 20 17:52:05.219893 systemd[1]: sshd@63-4107-10.0.0.107:22-10.0.0.1:40226.service: Deactivated successfully.
Apr 20 17:52:05.428626 systemd[1]: session-65.scope: Deactivated successfully.
Apr 20 17:52:05.463150 systemd[1]: session-65.scope: Consumed 1.262s CPU time, 16M memory peak.
Apr 20 17:52:05.660579 systemd-logind[1622]: Session 65 logged out. Waiting for processes to exit.
Apr 20 17:52:05.738729 systemd-logind[1622]: Removed session 65.
Apr 20 17:52:08.635451 kubelet[2921]: E0420 17:52:08.631388 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:09.664283 kubelet[2921]: E0420 17:52:09.663141 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s"
Apr 20 17:52:10.744439 systemd[1]: Started sshd@64-8222-10.0.0.107:22-10.0.0.1:52684.service - OpenSSH per-connection server daemon (10.0.0.1:52684).
Apr 20 17:52:14.029258 kubelet[2921]: E0420 17:52:13.886285 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.338s"
Apr 20 17:52:15.318564 sshd[8746]: Accepted publickey for core from 10.0.0.1 port 52684 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:52:15.443502 sshd-session[8746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:52:16.100089 kubelet[2921]: E0420 17:52:16.097879 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.07s"
Apr 20 17:52:16.175385 systemd-logind[1622]: New session '66' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:52:16.247966 containerd[1658]: time="2026-04-20T17:52:16.247582842Z" level=info msg="received container exit event container_id:\"87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195\" id:\"87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195\" pid:8454 exit_status:1 exited_at:{seconds:1776707536 nanos:243203458}"
Apr 20 17:52:16.251936 systemd[1]: Started session-66.scope - Session 66 of User core.
Apr 20 17:52:16.252370 systemd[1]: cri-containerd-87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195.scope: Deactivated successfully.
Apr 20 17:52:16.253551 systemd[1]: cri-containerd-87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195.scope: Consumed 12.337s CPU time, 27.9M memory peak.
Apr 20 17:52:17.825125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195-rootfs.mount: Deactivated successfully.
Apr 20 17:52:18.613730 kubelet[2921]: I0420 17:52:18.611317 2921 scope.go:122] "RemoveContainer" containerID="7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae"
Apr 20 17:52:18.673250 kubelet[2921]: I0420 17:52:18.656844 2921 scope.go:122] "RemoveContainer" containerID="87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195"
Apr 20 17:52:18.704233 kubelet[2921]: E0420 17:52:18.698332 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:18.704233 kubelet[2921]: E0420 17:52:18.699125 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:52:19.032984 containerd[1658]: time="2026-04-20T17:52:18.947558365Z" level=info msg="RemoveContainer for \"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\""
Apr 20 17:52:19.203659 sshd[8765]: Connection closed by 10.0.0.1 port 52684
Apr 20 17:52:19.205039 sshd-session[8746]: pam_unix(sshd:session): session closed for user core
Apr 20 17:52:19.266093 containerd[1658]: time="2026-04-20T17:52:19.256353275Z" level=info msg="RemoveContainer for \"7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae\" returns successfully"
Apr 20 17:52:19.633001 systemd[1]: sshd@64-8222-10.0.0.107:22-10.0.0.1:52684.service: Deactivated successfully.
Apr 20 17:52:19.740281 systemd[1]: sshd@64-8222-10.0.0.107:22-10.0.0.1:52684.service: Consumed 1.700s CPU time, 4.1M memory peak.
Apr 20 17:52:20.014564 systemd[1]: session-66.scope: Deactivated successfully.
Apr 20 17:52:20.046123 systemd[1]: session-66.scope: Consumed 1.257s CPU time, 16.5M memory peak.
Apr 20 17:52:20.225816 systemd-logind[1622]: Session 66 logged out. Waiting for processes to exit.
Apr 20 17:52:20.339144 systemd-logind[1622]: Removed session 66.
Apr 20 17:52:20.771229 kubelet[2921]: I0420 17:52:20.770292 2921 scope.go:122] "RemoveContainer" containerID="87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195"
Apr 20 17:52:20.801620 kubelet[2921]: E0420 17:52:20.793393 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:20.929059 kubelet[2921]: E0420 17:52:20.893609 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:52:21.626057 kubelet[2921]: E0420 17:52:21.606289 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.102s"
Apr 20 17:52:23.318560 kubelet[2921]: E0420 17:52:23.305016 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:25.095105 systemd[1]: Started sshd@65-8223-10.0.0.107:22-10.0.0.1:48690.service - OpenSSH per-connection server daemon (10.0.0.1:48690).
Apr 20 17:52:28.403714 kubelet[2921]: E0420 17:52:28.394381 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.954s"
Apr 20 17:52:30.515347 kubelet[2921]: E0420 17:52:30.510467 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.965s"
Apr 20 17:52:31.390147 sshd[8808]: Accepted publickey for core from 10.0.0.1 port 48690 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:52:31.549163 sshd-session[8808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:52:32.543204 systemd-logind[1622]: New session '67' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:52:32.920260 systemd[1]: Started session-67.scope - Session 67 of User core.
Apr 20 17:52:33.916271 kubelet[2921]: E0420 17:52:33.848903 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.273s"
Apr 20 17:52:39.563866 sshd[8826]: Connection closed by 10.0.0.1 port 48690
Apr 20 17:52:39.577028 sshd-session[8808]: pam_unix(sshd:session): session closed for user core
Apr 20 17:52:39.907458 systemd[1]: sshd@65-8223-10.0.0.107:22-10.0.0.1:48690.service: Deactivated successfully.
Apr 20 17:52:39.939645 systemd[1]: sshd@65-8223-10.0.0.107:22-10.0.0.1:48690.service: Consumed 2.207s CPU time, 4.3M memory peak.
Apr 20 17:52:40.032820 systemd[1]: session-67.scope: Deactivated successfully.
Apr 20 17:52:40.056029 systemd[1]: session-67.scope: Consumed 3.967s CPU time, 16.4M memory peak.
Apr 20 17:52:40.212355 systemd-logind[1622]: Session 67 logged out. Waiting for processes to exit.
Apr 20 17:52:40.254903 systemd-logind[1622]: Removed session 67.
Apr 20 17:52:45.281777 systemd[1]: Started sshd@66-8224-10.0.0.107:22-10.0.0.1:40892.service - OpenSSH per-connection server daemon (10.0.0.1:40892).
Apr 20 17:52:46.461691 kubelet[2921]: E0420 17:52:46.461053 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:46.696260 sshd[8879]: Accepted publickey for core from 10.0.0.1 port 40892 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:52:46.721565 sshd-session[8879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:52:47.137319 systemd-logind[1622]: New session '68' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:52:47.293622 systemd[1]: Started session-68.scope - Session 68 of User core.
Apr 20 17:52:51.612250 kubelet[2921]: E0420 17:52:51.602211 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:52.957012 sshd[8889]: Connection closed by 10.0.0.1 port 40892
Apr 20 17:52:52.973201 sshd-session[8879]: pam_unix(sshd:session): session closed for user core
Apr 20 17:52:53.218697 systemd[1]: sshd@66-8224-10.0.0.107:22-10.0.0.1:40892.service: Deactivated successfully.
Apr 20 17:52:53.282128 systemd[1]: session-68.scope: Deactivated successfully.
Apr 20 17:52:53.314269 systemd[1]: session-68.scope: Consumed 3.480s CPU time, 15M memory peak.
Apr 20 17:52:53.343546 systemd-logind[1622]: Session 68 logged out. Waiting for processes to exit.
Apr 20 17:52:53.357238 systemd-logind[1622]: Removed session 68.
Apr 20 17:52:54.497128 kubelet[2921]: I0420 17:52:54.494770 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412"
Apr 20 17:52:54.510008 kubelet[2921]: E0420 17:52:54.504323 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:52:54.510008 kubelet[2921]: E0420 17:52:54.508239 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:52:58.686372 systemd[1]: Started sshd@67-8225-10.0.0.107:22-10.0.0.1:34594.service - OpenSSH per-connection server daemon (10.0.0.1:34594).
Apr 20 17:53:03.287499 kubelet[2921]: E0420 17:53:03.286291 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.791s"
Apr 20 17:53:06.121032 sshd[8932]: Accepted publickey for core from 10.0.0.1 port 34594 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:53:06.303951 sshd-session[8932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:53:06.834127 systemd-logind[1622]: New session '69' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:53:07.048031 systemd[1]: Started session-69.scope - Session 69 of User core.
Apr 20 17:53:07.245357 kubelet[2921]: E0420 17:53:07.199311 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.76s"
Apr 20 17:53:10.908214 sshd[8956]: Connection closed by 10.0.0.1 port 34594
Apr 20 17:53:10.914785 sshd-session[8932]: pam_unix(sshd:session): session closed for user core
Apr 20 17:53:10.950875 systemd[1]: sshd@67-8225-10.0.0.107:22-10.0.0.1:34594.service: Deactivated successfully.
Apr 20 17:53:10.956259 systemd[1]: sshd@67-8225-10.0.0.107:22-10.0.0.1:34594.service: Consumed 2.087s CPU time, 4.1M memory peak.
Apr 20 17:53:11.030244 systemd[1]: session-69.scope: Deactivated successfully.
Apr 20 17:53:11.039220 systemd[1]: session-69.scope: Consumed 1.714s CPU time, 16.4M memory peak.
Apr 20 17:53:11.141016 systemd-logind[1622]: Session 69 logged out. Waiting for processes to exit.
Apr 20 17:53:11.145160 systemd-logind[1622]: Removed session 69.
Apr 20 17:53:14.540669 kubelet[2921]: E0420 17:53:14.540311 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:53:16.131950 systemd[1]: Started sshd@68-13-10.0.0.107:22-10.0.0.1:54364.service - OpenSSH per-connection server daemon (10.0.0.1:54364).
Apr 20 17:53:18.401827 sshd[8993]: Accepted publickey for core from 10.0.0.1 port 54364 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:53:18.514452 sshd-session[8993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:53:19.013458 kubelet[2921]: E0420 17:53:19.009018 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:53:19.276758 systemd-logind[1622]: New session '70' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:53:19.325104 systemd[1]: Started session-70.scope - Session 70 of User core.
Apr 20 17:53:20.304777 containerd[1658]: time="2026-04-20T17:53:20.284922472Z" level=info msg="container event discarded" container=7a975bc0f87574c1b0e5a0aaa430b2f10a56f92cc9e1252f82f66697c9447cae type=CONTAINER_STOPPED_EVENT
Apr 20 17:53:21.439017 containerd[1658]: time="2026-04-20T17:53:21.437789006Z" level=info msg="container event discarded" container=b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe type=CONTAINER_STOPPED_EVENT
Apr 20 17:53:22.229194 containerd[1658]: time="2026-04-20T17:53:22.228982781Z" level=info msg="container event discarded" container=1054d068c818a13dc5db4e6b9aec3de9886eac3819b741c98c041a7afebe905d type=CONTAINER_DELETED_EVENT
Apr 20 17:53:23.826254 containerd[1658]: time="2026-04-20T17:53:23.821373086Z" level=info msg="container event discarded" container=1329f9f59ad208392783f6941eafd260d9073cc50a74ef386efbe982950acdfe type=CONTAINER_DELETED_EVENT
Apr 20 17:53:23.906300 sshd[9022]: Connection closed by 10.0.0.1 port 54364
Apr 20 17:53:23.921163 sshd-session[8993]: pam_unix(sshd:session): session closed for user core
Apr 20 17:53:24.221693 systemd[1]: sshd@68-13-10.0.0.107:22-10.0.0.1:54364.service: Deactivated successfully.
Apr 20 17:53:24.398340 systemd[1]: session-70.scope: Deactivated successfully.
Apr 20 17:53:24.425848 systemd[1]: session-70.scope: Consumed 3.198s CPU time, 18M memory peak.
Apr 20 17:53:24.638880 systemd-logind[1622]: Session 70 logged out. Waiting for processes to exit.
Apr 20 17:53:24.736965 systemd-logind[1622]: Removed session 70.
Apr 20 17:53:29.523373 systemd[1]: Started sshd@69-12301-10.0.0.107:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472).
Apr 20 17:53:31.529777 kubelet[2921]: E0420 17:53:31.529564 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.09s"
Apr 20 17:53:31.978081 kubelet[2921]: I0420 17:53:31.976192 2921 scope.go:122] "RemoveContainer" containerID="87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195"
Apr 20 17:53:32.041228 kubelet[2921]: E0420 17:53:31.985606 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:53:32.041228 kubelet[2921]: E0420 17:53:31.994494 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:53:34.101328 sshd[9065]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:53:34.297296 sshd-session[9065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:53:34.971352 systemd-logind[1622]: New session '71' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:53:35.132706 systemd[1]: Started session-71.scope - Session 71 of User core.
Apr 20 17:53:35.727237 kubelet[2921]: E0420 17:53:35.720211 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.228s"
Apr 20 17:53:36.024002 kubelet[2921]: E0420 17:53:36.020570 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:53:43.953698 kubelet[2921]: E0420 17:53:43.953453 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.142s"
Apr 20 17:53:46.104969 sshd[9085]: Connection closed by 10.0.0.1 port 35472
Apr 20 17:53:46.116319 sshd-session[9065]: pam_unix(sshd:session): session closed for user core
Apr 20 17:53:46.457339 systemd[1]: sshd@69-12301-10.0.0.107:22-10.0.0.1:35472.service: Deactivated successfully.
Apr 20 17:53:46.503488 systemd[1]: sshd@69-12301-10.0.0.107:22-10.0.0.1:35472.service: Consumed 1.461s CPU time, 4.1M memory peak.
Apr 20 17:53:46.694202 systemd[1]: session-71.scope: Deactivated successfully.
Apr 20 17:53:46.721815 systemd[1]: session-71.scope: Consumed 5.576s CPU time, 17.8M memory peak.
Apr 20 17:53:46.840874 systemd-logind[1622]: Session 71 logged out. Waiting for processes to exit.
Apr 20 17:53:46.995373 systemd-logind[1622]: Removed session 71.
Apr 20 17:53:51.610241 systemd[1]: Started sshd@70-4108-10.0.0.107:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760).
Apr 20 17:53:54.131390 sshd[9137]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:53:54.231158 sshd-session[9137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:53:54.619028 systemd-logind[1622]: New session '72' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:53:54.707817 systemd[1]: Started session-72.scope - Session 72 of User core.
Apr 20 17:53:57.238037 kubelet[2921]: I0420 17:53:57.237087 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412"
Apr 20 17:53:57.310164 kubelet[2921]: E0420 17:53:57.308338 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:53:57.319273 kubelet[2921]: E0420 17:53:57.318927 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 20 17:53:59.545563 kubelet[2921]: E0420 17:53:59.532691 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:02.545662 sshd[9156]: Connection closed by 10.0.0.1 port 51760
Apr 20 17:54:02.577832 sshd-session[9137]: pam_unix(sshd:session): session closed for user core
Apr 20 17:54:02.902515 systemd[1]: sshd@70-4108-10.0.0.107:22-10.0.0.1:51760.service: Deactivated successfully.
Apr 20 17:54:02.918508 systemd[1]: sshd@70-4108-10.0.0.107:22-10.0.0.1:51760.service: Consumed 1.037s CPU time, 4.1M memory peak.
Apr 20 17:54:03.000606 systemd[1]: session-72.scope: Deactivated successfully.
Apr 20 17:54:03.019350 systemd[1]: session-72.scope: Consumed 2.788s CPU time, 15M memory peak.
Apr 20 17:54:03.108804 systemd-logind[1622]: Session 72 logged out. Waiting for processes to exit.
Apr 20 17:54:03.211342 systemd-logind[1622]: Removed session 72.
Apr 20 17:54:04.523045 kubelet[2921]: I0420 17:54:04.517201 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412"
Apr 20 17:54:04.615121 kubelet[2921]: E0420 17:54:04.544136 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:04.636128 kubelet[2921]: E0420 17:54:04.623243 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:04.789629 containerd[1658]: time="2026-04-20T17:54:04.749975990Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for container name:\"kube-scheduler\" attempt:6"
Apr 20 17:54:05.229766 containerd[1658]: time="2026-04-20T17:54:05.137285336Z" level=info msg="Container b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3: CDI devices from CRI Config.CDIDevices: []"
Apr 20 17:54:05.564512 containerd[1658]: time="2026-04-20T17:54:05.558057811Z" level=info msg="CreateContainer within sandbox \"3499c6956704318802827cc6051b05eee9229cbc60a941f53c88c4a10e92ae11\" for name:\"kube-scheduler\" attempt:6 returns container id \"b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3\""
Apr 20 17:54:05.587573 containerd[1658]: time="2026-04-20T17:54:05.585991183Z" level=info msg="StartContainer for \"b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3\""
Apr 20 17:54:05.732097 containerd[1658]: time="2026-04-20T17:54:05.731489815Z" level=info msg="connecting to shim b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3" address="unix:///run/containerd/s/1028a3884c8fe7445e51378f8f75d8222496a64b030b92e150cc155157ded40c" protocol=ttrpc version=3
Apr 20 17:54:07.222326 systemd[1]: Started cri-containerd-b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3.scope - libcontainer container b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3.
Apr 20 17:54:07.992719 systemd[1]: Started sshd@71-4109-10.0.0.107:22-10.0.0.1:60474.service - OpenSSH per-connection server daemon (10.0.0.1:60474).
Apr 20 17:54:09.389456 containerd[1658]: time="2026-04-20T17:54:09.389180883Z" level=error msg="get state for b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3" error="context deadline exceeded"
Apr 20 17:54:09.389456 containerd[1658]: time="2026-04-20T17:54:09.389250380Z" level=warning msg="unknown status" status=0
Apr 20 17:54:09.525284 containerd[1658]: time="2026-04-20T17:54:09.515275355Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 17:54:09.530821 sshd[9215]: Accepted publickey for core from 10.0.0.1 port 60474 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:54:09.775630 sshd-session[9215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:54:10.072037 kubelet[2921]: E0420 17:54:10.070194 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.619s"
Apr 20 17:54:10.071285 systemd-logind[1622]: New session '73' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:54:10.118937 systemd[1]: Started session-73.scope - Session 73 of User core.
Apr 20 17:54:10.245081 containerd[1658]: time="2026-04-20T17:54:10.235682143Z" level=info msg="StartContainer for \"b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3\" returns successfully"
Apr 20 17:54:11.221566 kubelet[2921]: E0420 17:54:11.217376 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:12.388601 kubelet[2921]: E0420 17:54:12.387338 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:12.591713 sshd[9248]: Connection closed by 10.0.0.1 port 60474
Apr 20 17:54:12.647380 sshd-session[9215]: pam_unix(sshd:session): session closed for user core
Apr 20 17:54:12.781573 systemd[1]: sshd@71-4109-10.0.0.107:22-10.0.0.1:60474.service: Deactivated successfully.
Apr 20 17:54:13.224692 systemd[1]: session-73.scope: Deactivated successfully.
Apr 20 17:54:13.301025 systemd-logind[1622]: Session 73 logged out. Waiting for processes to exit.
Apr 20 17:54:13.381632 systemd-logind[1622]: Removed session 73.
Apr 20 17:54:13.537127 kubelet[2921]: E0420 17:54:13.532368 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.089s"
Apr 20 17:54:18.441285 systemd[1]: Started sshd@72-4110-10.0.0.107:22-10.0.0.1:37856.service - OpenSSH per-connection server daemon (10.0.0.1:37856).
Apr 20 17:54:20.633674 sshd[9288]: Accepted publickey for core from 10.0.0.1 port 37856 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:54:20.730343 sshd-session[9288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:54:20.918645 kubelet[2921]: E0420 17:54:20.893374 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:21.106994 systemd-logind[1622]: New session '74' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:54:21.204715 systemd[1]: Started session-74.scope - Session 74 of User core.
Apr 20 17:54:22.013135 kubelet[2921]: E0420 17:54:22.011055 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:22.848624 kubelet[2921]: E0420 17:54:22.847809 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:23.905109 sshd[9306]: Connection closed by 10.0.0.1 port 37856
Apr 20 17:54:23.933368 sshd-session[9288]: pam_unix(sshd:session): session closed for user core
Apr 20 17:54:24.107687 systemd[1]: sshd@72-4110-10.0.0.107:22-10.0.0.1:37856.service: Deactivated successfully.
Apr 20 17:54:24.131903 systemd[1]: sshd@72-4110-10.0.0.107:22-10.0.0.1:37856.service: Consumed 1.010s CPU time, 4.4M memory peak.
Apr 20 17:54:24.243086 systemd[1]: session-74.scope: Deactivated successfully.
Apr 20 17:54:24.292375 systemd[1]: session-74.scope: Consumed 1.591s CPU time, 16.2M memory peak.
Apr 20 17:54:24.333623 systemd-logind[1622]: Session 74 logged out. Waiting for processes to exit.
Apr 20 17:54:24.398250 systemd-logind[1622]: Removed session 74.
Apr 20 17:54:26.523133 kubelet[2921]: E0420 17:54:26.516508 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:29.494177 systemd[1]: Started sshd@73-4111-10.0.0.107:22-10.0.0.1:42078.service - OpenSSH per-connection server daemon (10.0.0.1:42078).
Apr 20 17:54:31.173180 sshd[9343]: Accepted publickey for core from 10.0.0.1 port 42078 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:54:31.396308 sshd-session[9343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:54:31.698388 kubelet[2921]: E0420 17:54:31.698036 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.101s"
Apr 20 17:54:31.736462 systemd-logind[1622]: New session '75' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:54:31.837659 systemd[1]: Started session-75.scope - Session 75 of User core.
Apr 20 17:54:32.648308 kubelet[2921]: I0420 17:54:32.646989 2921 scope.go:122] "RemoveContainer" containerID="87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195"
Apr 20 17:54:32.660210 kubelet[2921]: E0420 17:54:32.656321 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:32.682900 kubelet[2921]: E0420 17:54:32.681992 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:54:36.132381 sshd[9356]: Connection closed by 10.0.0.1 port 42078
Apr 20 17:54:36.145067 sshd-session[9343]: pam_unix(sshd:session): session closed for user core
Apr 20 17:54:36.303344 systemd[1]: sshd@73-4111-10.0.0.107:22-10.0.0.1:42078.service: Deactivated successfully.
Apr 20 17:54:36.379816 systemd[1]: session-75.scope: Deactivated successfully.
Apr 20 17:54:36.394109 systemd[1]: session-75.scope: Consumed 2.518s CPU time, 14.6M memory peak.
Apr 20 17:54:36.547098 systemd-logind[1622]: Session 75 logged out. Waiting for processes to exit.
Apr 20 17:54:36.623330 systemd-logind[1622]: Removed session 75.
Apr 20 17:54:41.956635 systemd[1]: Started sshd@74-4112-10.0.0.107:22-10.0.0.1:56374.service - OpenSSH per-connection server daemon (10.0.0.1:56374).
Apr 20 17:54:43.285165 sshd[9394]: Accepted publickey for core from 10.0.0.1 port 56374 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:54:43.312683 sshd-session[9394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:54:43.346193 systemd-logind[1622]: New session '76' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:54:43.441908 kubelet[2921]: E0420 17:54:43.441664 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:54:43.489738 systemd[1]: Started session-76.scope - Session 76 of User core.
Apr 20 17:54:43.494145 containerd[1658]: time="2026-04-20T17:54:43.490222034Z" level=info msg="container event discarded" container=7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412 type=CONTAINER_CREATED_EVENT
Apr 20 17:54:44.887817 sshd[9419]: Connection closed by 10.0.0.1 port 56374
Apr 20 17:54:44.889910 sshd-session[9394]: pam_unix(sshd:session): session closed for user core
Apr 20 17:54:44.907006 systemd[1]: sshd@74-4112-10.0.0.107:22-10.0.0.1:56374.service: Deactivated successfully.
Apr 20 17:54:45.011878 systemd[1]: session-76.scope: Deactivated successfully.
Apr 20 17:54:45.229509 systemd-logind[1622]: Session 76 logged out. Waiting for processes to exit.
Apr 20 17:54:45.270923 systemd-logind[1622]: Removed session 76.
Apr 20 17:54:50.214255 systemd[1]: Started sshd@75-14-10.0.0.107:22-10.0.0.1:57468.service - OpenSSH per-connection server daemon (10.0.0.1:57468).
Apr 20 17:54:50.974007 sshd[9453]: Accepted publickey for core from 10.0.0.1 port 57468 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:54:51.017499 sshd-session[9453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:54:51.248351 systemd-logind[1622]: New session '77' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:54:51.276985 systemd[1]: Started session-77.scope - Session 77 of User core.
Apr 20 17:54:51.877722 containerd[1658]: time="2026-04-20T17:54:51.872325383Z" level=info msg="container event discarded" container=7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412 type=CONTAINER_STARTED_EVENT
Apr 20 17:54:53.025563 sshd[9457]: Connection closed by 10.0.0.1 port 57468
Apr 20 17:54:53.033032 sshd-session[9453]: pam_unix(sshd:session): session closed for user core
Apr 20 17:54:53.275335 systemd[1]: sshd@75-14-10.0.0.107:22-10.0.0.1:57468.service: Deactivated successfully.
Apr 20 17:54:53.338004 systemd[1]: session-77.scope: Deactivated successfully.
Apr 20 17:54:53.415739 systemd-logind[1622]: Session 77 logged out. Waiting for processes to exit.
Apr 20 17:54:53.444693 systemd-logind[1622]: Removed session 77.
Apr 20 17:54:58.297823 systemd[1]: Started sshd@76-8226-10.0.0.107:22-10.0.0.1:36926.service - OpenSSH per-connection server daemon (10.0.0.1:36926).
Apr 20 17:54:59.182316 sshd[9493]: Accepted publickey for core from 10.0.0.1 port 36926 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:54:59.235108 sshd-session[9493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:54:59.342712 systemd-logind[1622]: New session '78' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:54:59.401665 systemd[1]: Started session-78.scope - Session 78 of User core.
Apr 20 17:55:01.263674 sshd[9508]: Connection closed by 10.0.0.1 port 36926
Apr 20 17:55:01.265011 sshd-session[9493]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:01.327898 systemd[1]: sshd@76-8226-10.0.0.107:22-10.0.0.1:36926.service: Deactivated successfully.
Apr 20 17:55:01.371569 systemd[1]: session-78.scope: Deactivated successfully.
Apr 20 17:55:01.419697 systemd-logind[1622]: Session 78 logged out. Waiting for processes to exit.
Apr 20 17:55:01.528625 systemd-logind[1622]: Removed session 78.
Apr 20 17:55:03.440816 kubelet[2921]: E0420 17:55:03.438902 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:55:06.486382 systemd[1]: Started sshd@77-15-10.0.0.107:22-10.0.0.1:50312.service - OpenSSH per-connection server daemon (10.0.0.1:50312).
Apr 20 17:55:06.960132 sshd[9547]: Accepted publickey for core from 10.0.0.1 port 50312 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:06.972333 sshd-session[9547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:06.992816 systemd-logind[1622]: New session '79' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:07.072156 systemd[1]: Started session-79.scope - Session 79 of User core.
Apr 20 17:55:08.383680 sshd[9551]: Connection closed by 10.0.0.1 port 50312
Apr 20 17:55:08.386247 sshd-session[9547]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:08.465994 systemd[1]: sshd@77-15-10.0.0.107:22-10.0.0.1:50312.service: Deactivated successfully.
Apr 20 17:55:08.533536 systemd[1]: session-79.scope: Deactivated successfully.
Apr 20 17:55:08.537933 systemd-logind[1622]: Session 79 logged out. Waiting for processes to exit.
Apr 20 17:55:08.558025 systemd-logind[1622]: Removed session 79.
Apr 20 17:55:11.510889 kubelet[2921]: E0420 17:55:11.502378 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:55:13.443916 systemd[1]: Started sshd@78-12302-10.0.0.107:22-10.0.0.1:50316.service - OpenSSH per-connection server daemon (10.0.0.1:50316).
Apr 20 17:55:14.364914 sshd[9587]: Accepted publickey for core from 10.0.0.1 port 50316 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:14.381573 sshd-session[9587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:14.498182 systemd-logind[1622]: New session '80' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:14.632088 systemd[1]: Started session-80.scope - Session 80 of User core.
Apr 20 17:55:16.440583 sshd[9597]: Connection closed by 10.0.0.1 port 50316
Apr 20 17:55:16.449228 sshd-session[9587]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:16.575461 systemd[1]: sshd@78-12302-10.0.0.107:22-10.0.0.1:50316.service: Deactivated successfully.
Apr 20 17:55:16.581586 systemd[1]: session-80.scope: Deactivated successfully.
Apr 20 17:55:16.663789 systemd-logind[1622]: Session 80 logged out. Waiting for processes to exit.
Apr 20 17:55:16.700673 systemd-logind[1622]: Removed session 80.
Apr 20 17:55:21.540733 systemd[1]: Started sshd@79-16-10.0.0.107:22-10.0.0.1:35066.service - OpenSSH per-connection server daemon (10.0.0.1:35066).
Apr 20 17:55:22.302188 sshd[9631]: Accepted publickey for core from 10.0.0.1 port 35066 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:22.321066 sshd-session[9631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:22.432787 systemd-logind[1622]: New session '81' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:22.471304 systemd[1]: Started session-81.scope - Session 81 of User core.
Apr 20 17:55:24.475332 sshd[9649]: Connection closed by 10.0.0.1 port 35066
Apr 20 17:55:24.476024 sshd-session[9631]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:24.548572 systemd[1]: sshd@79-16-10.0.0.107:22-10.0.0.1:35066.service: Deactivated successfully.
Apr 20 17:55:24.646631 systemd[1]: session-81.scope: Deactivated successfully.
Apr 20 17:55:24.672550 systemd-logind[1622]: Session 81 logged out. Waiting for processes to exit.
Apr 20 17:55:24.675671 systemd-logind[1622]: Removed session 81.
Apr 20 17:55:26.469000 kubelet[2921]: E0420 17:55:26.453919 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:55:29.578774 systemd[1]: Started sshd@80-4113-10.0.0.107:22-10.0.0.1:45586.service - OpenSSH per-connection server daemon (10.0.0.1:45586).
Apr 20 17:55:29.924824 sshd[9683]: Accepted publickey for core from 10.0.0.1 port 45586 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:29.924749 sshd-session[9683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:30.010219 systemd-logind[1622]: New session '82' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:30.027108 systemd[1]: Started session-82.scope - Session 82 of User core.
Apr 20 17:55:30.660464 sshd[9687]: Connection closed by 10.0.0.1 port 45586
Apr 20 17:55:30.661058 sshd-session[9683]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:30.675359 systemd[1]: sshd@80-4113-10.0.0.107:22-10.0.0.1:45586.service: Deactivated successfully.
Apr 20 17:55:30.704131 systemd[1]: session-82.scope: Deactivated successfully.
Apr 20 17:55:30.714921 systemd-logind[1622]: Session 82 logged out. Waiting for processes to exit.
Apr 20 17:55:30.723486 systemd-logind[1622]: Removed session 82.
Apr 20 17:55:36.002715 systemd[1]: Started sshd@81-12303-10.0.0.107:22-10.0.0.1:32986.service - OpenSSH per-connection server daemon (10.0.0.1:32986).
Apr 20 17:55:36.667653 sshd[9727]: Accepted publickey for core from 10.0.0.1 port 32986 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:36.688094 sshd-session[9727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:36.785004 systemd-logind[1622]: New session '83' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:36.825266 systemd[1]: Started session-83.scope - Session 83 of User core.
Apr 20 17:55:37.460594 kubelet[2921]: E0420 17:55:37.459998 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:55:37.997474 sshd[9731]: Connection closed by 10.0.0.1 port 32986
Apr 20 17:55:37.998729 sshd-session[9727]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:38.035107 systemd[1]: sshd@81-12303-10.0.0.107:22-10.0.0.1:32986.service: Deactivated successfully.
Apr 20 17:55:38.044726 systemd[1]: session-83.scope: Deactivated successfully.
Apr 20 17:55:38.048532 systemd-logind[1622]: Session 83 logged out. Waiting for processes to exit.
Apr 20 17:55:38.058718 systemd-logind[1622]: Removed session 83.
Apr 20 17:55:43.189531 systemd[1]: Started sshd@82-8227-10.0.0.107:22-10.0.0.1:32998.service - OpenSSH per-connection server daemon (10.0.0.1:32998).
Apr 20 17:55:43.949667 sshd[9772]: Accepted publickey for core from 10.0.0.1 port 32998 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:43.980170 sshd-session[9772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:44.257254 systemd-logind[1622]: New session '84' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:44.338237 systemd[1]: Started session-84.scope - Session 84 of User core.
Apr 20 17:55:46.146354 sshd[9787]: Connection closed by 10.0.0.1 port 32998
Apr 20 17:55:46.159028 sshd-session[9772]: pam_unix(sshd:session): session closed for user core
Apr 20 17:55:46.180592 systemd[1]: sshd@82-8227-10.0.0.107:22-10.0.0.1:32998.service: Deactivated successfully.
Apr 20 17:55:46.238837 systemd[1]: session-84.scope: Deactivated successfully.
Apr 20 17:55:46.244961 systemd-logind[1622]: Session 84 logged out. Waiting for processes to exit.
Apr 20 17:55:46.259901 systemd-logind[1622]: Removed session 84.
Apr 20 17:55:50.511133 kubelet[2921]: E0420 17:55:50.507259 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:55:51.829142 systemd[1]: Started sshd@83-12304-10.0.0.107:22-10.0.0.1:36300.service - OpenSSH per-connection server daemon (10.0.0.1:36300).
Apr 20 17:55:53.538332 kubelet[2921]: E0420 17:55:53.533729 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s"
Apr 20 17:55:53.786315 kubelet[2921]: E0420 17:55:53.779299 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:55:56.090564 systemd[1]: Started systemd-sysupdate.service - Automatic System Update.
Apr 20 17:55:57.169758 sshd[9819]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0
Apr 20 17:55:57.232393 sshd-session[9819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 17:55:57.686642 systemd-logind[1622]: New session '85' of user 'core' with class 'user' and type 'tty'.
Apr 20 17:55:57.891989 systemd[1]: Started session-85.scope - Session 85 of User core.
Apr 20 17:55:58.062991 systemd-sysupdate[9830]: Discovering installed instances…
Apr 20 17:55:58.119397 systemd-sysupdate[9830]: Discovering available instances…
Apr 20 17:55:58.128352 systemd-sysupdate[9830]: Determining installed update sets…
Apr 20 17:55:58.128358 systemd-sysupdate[9830]: Determining available update sets…
Apr 20 17:55:58.128362 systemd-sysupdate[9830]: No update needed.
Apr 20 17:55:58.399996 systemd[1]: systemd-sysupdate.service: Deactivated successfully.
Apr 20 17:55:58.717910 kubelet[2921]: E0420 17:55:58.688246 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.157s"
Apr 20 17:56:00.722867 kubelet[2921]: I0420 17:56:00.721647 2921 scope.go:122] "RemoveContainer" containerID="87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195"
Apr 20 17:56:00.904022 kubelet[2921]: E0420 17:56:00.838319 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 17:56:01.308168 kubelet[2921]: E0420 17:56:01.290394 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 20 17:56:02.183334 kubelet[2921]: E0420 17:56:02.183140 2921 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.672s"
Apr 20 17:56:04.811917 sshd[9832]: Connection closed by 10.0.0.1 port 36300
Apr 20 17:56:04.826467 sshd-session[9819]: pam_unix(sshd:session): session closed for user core
Apr 20 17:56:05.098187 systemd[1]: sshd@83-12304-10.0.0.107:22-10.0.0.1:36300.service: Deactivated successfully.
Apr 20 17:56:05.123115 systemd[1]: sshd@83-12304-10.0.0.107:22-10.0.0.1:36300.service: Consumed 1.838s CPU time, 4.1M memory peak.
Apr 20 17:56:05.272149 systemd[1]: session-85.scope: Deactivated successfully.
Apr 20 17:56:05.293347 systemd[1]: session-85.scope: Consumed 3.092s CPU time, 16.1M memory peak.
Apr 20 17:56:05.352319 systemd-logind[1622]: Session 85 logged out. Waiting for processes to exit.
Apr 20 17:56:05.437793 systemd-logind[1622]: Removed session 85.
Apr 20 17:56:05.614661 systemd[1]: cri-containerd-b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3.scope: Deactivated successfully.
Apr 20 17:56:05.617257 containerd[1658]: time="2026-04-20T17:56:05.616494821Z" level=info msg="received container exit event container_id:\"b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3\" id:\"b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3\" pid:9205 exit_status:1 exited_at:{seconds:1776707765 nanos:613914141}"
Apr 20 17:56:05.635630 systemd[1]: cri-containerd-b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3.scope: Consumed 16.685s CPU time, 23.2M memory peak.
Apr 20 17:56:06.114853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3-rootfs.mount: Deactivated successfully.
Apr 20 17:56:06.642593 kubelet[2921]: I0420 17:56:06.640581 2921 scope.go:122] "RemoveContainer" containerID="7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412" Apr 20 17:56:06.642593 kubelet[2921]: I0420 17:56:06.641093 2921 scope.go:122] "RemoveContainer" containerID="b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3" Apr 20 17:56:06.642593 kubelet[2921]: E0420 17:56:06.641197 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:56:06.642593 kubelet[2921]: E0420 17:56:06.641297 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 17:56:06.645341 containerd[1658]: time="2026-04-20T17:56:06.645302747Z" level=info msg="RemoveContainer for \"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\"" Apr 20 17:56:06.662193 containerd[1658]: time="2026-04-20T17:56:06.662042378Z" level=info msg="RemoveContainer for \"7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412\" returns successfully" Apr 20 17:56:07.829222 kubelet[2921]: I0420 17:56:07.817075 2921 scope.go:122] "RemoveContainer" containerID="b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3" Apr 20 17:56:07.853019 kubelet[2921]: E0420 17:56:07.848205 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:56:07.853019 kubelet[2921]: E0420 17:56:07.852665 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 17:56:08.811212 kubelet[2921]: I0420 17:56:08.810942 2921 scope.go:122] "RemoveContainer" containerID="b6aa43097b3dd18aee5ea0deea848240304754e4b4ee6e10e728b77be57dece3" Apr 20 17:56:08.812985 kubelet[2921]: E0420 17:56:08.811343 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:56:08.812985 kubelet[2921]: E0420 17:56:08.812773 2921 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 20 17:56:10.325340 systemd[1]: Started sshd@84-10.0.0.107:22-10.0.0.1:35272.service - OpenSSH per-connection server daemon (10.0.0.1:35272). Apr 20 17:56:11.137107 sshd[9888]: Accepted publickey for core from 10.0.0.1 port 35272 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:56:11.141692 sshd-session[9888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:56:11.272081 systemd-logind[1622]: New session '86' of user 'core' with class 'user' and type 'tty'. Apr 20 17:56:11.304147 systemd[1]: Started session-86.scope - Session 86 of User core. Apr 20 17:56:11.627674 sshd[9906]: Connection closed by 10.0.0.1 port 35272 Apr 20 17:56:11.629373 sshd-session[9888]: pam_unix(sshd:session): session closed for user core Apr 20 17:56:11.641173 systemd[1]: sshd@84-10.0.0.107:22-10.0.0.1:35272.service: Deactivated successfully. 
Apr 20 17:56:11.668192 systemd[1]: session-86.scope: Deactivated successfully. Apr 20 17:56:11.674150 systemd-logind[1622]: Session 86 logged out. Waiting for processes to exit. Apr 20 17:56:11.679019 systemd-logind[1622]: Removed session 86. Apr 20 17:56:14.342742 containerd[1658]: time="2026-04-20T17:56:14.342184405Z" level=info msg="container event discarded" container=87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195 type=CONTAINER_CREATED_EVENT Apr 20 17:56:16.814182 systemd[1]: Started sshd@85-10.0.0.107:22-10.0.0.1:54406.service - OpenSSH per-connection server daemon (10.0.0.1:54406). Apr 20 17:56:17.788148 sshd[9941]: Accepted publickey for core from 10.0.0.1 port 54406 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:56:17.795666 sshd-session[9941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:56:17.924008 systemd-logind[1622]: New session '87' of user 'core' with class 'user' and type 'tty'. Apr 20 17:56:18.027209 systemd[1]: Started session-87.scope - Session 87 of User core. Apr 20 17:56:19.086531 containerd[1658]: time="2026-04-20T17:56:19.048304523Z" level=info msg="container event discarded" container=87ccdd1363addb01066c0ee6b298ff10cd1a63f9dffb8a2c71ae37dcf399b195 type=CONTAINER_STARTED_EVENT Apr 20 17:56:19.850484 sshd[9945]: Connection closed by 10.0.0.1 port 54406 Apr 20 17:56:19.889983 sshd-session[9941]: pam_unix(sshd:session): session closed for user core Apr 20 17:56:19.937801 systemd[1]: sshd@85-10.0.0.107:22-10.0.0.1:54406.service: Deactivated successfully. Apr 20 17:56:19.976774 systemd[1]: session-87.scope: Deactivated successfully. Apr 20 17:56:19.979877 systemd[1]: session-87.scope: Consumed 1.279s CPU time, 15.5M memory peak. Apr 20 17:56:19.981454 systemd-logind[1622]: Session 87 logged out. Waiting for processes to exit. Apr 20 17:56:19.984595 systemd-logind[1622]: Removed session 87. 
Apr 20 17:56:21.444102 kubelet[2921]: E0420 17:56:21.443747 2921 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 17:56:23.326583 containerd[1658]: time="2026-04-20T17:56:23.326040121Z" level=info msg="container event discarded" container=7eb522b2e94b06eb5a1d9b756b447f495cebda1b0b44fcb67223382045831412 type=CONTAINER_STOPPED_EVENT Apr 20 17:56:24.079464 containerd[1658]: time="2026-04-20T17:56:24.079038975Z" level=info msg="container event discarded" container=b0524a99b63bd0ed4ce9f6e762503c29f3be702a7e5853272438a5ef0bbc4abe type=CONTAINER_DELETED_EVENT Apr 20 17:56:24.981661 systemd[1]: Started sshd@86-10.0.0.107:22-10.0.0.1:54412.service - OpenSSH per-connection server daemon (10.0.0.1:54412). Apr 20 17:56:26.196941 sshd[9985]: Accepted publickey for core from 10.0.0.1 port 54412 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 17:56:26.249870 sshd-session[9985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 17:56:26.383191 systemd-logind[1622]: New session '88' of user 'core' with class 'user' and type 'tty'. Apr 20 17:56:26.450568 systemd[1]: Started session-88.scope - Session 88 of User core. Apr 20 17:56:29.841007 sshd[9999]: Connection closed by 10.0.0.1 port 54412 Apr 20 17:56:29.851953 sshd-session[9985]: pam_unix(sshd:session): session closed for user core Apr 20 17:56:30.012950 systemd[1]: sshd@86-10.0.0.107:22-10.0.0.1:54412.service: Deactivated successfully. Apr 20 17:56:30.188861 systemd[1]: session-88.scope: Deactivated successfully. Apr 20 17:56:30.216080 systemd[1]: session-88.scope: Consumed 2.066s CPU time, 14.5M memory peak. Apr 20 17:56:30.290243 systemd-logind[1622]: Session 88 logged out. Waiting for processes to exit. Apr 20 17:56:30.336500 systemd-logind[1622]: Removed session 88.