Apr 20 15:06:16.029866 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 15:06:16.030074 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 15:06:16.030091 kernel: BIOS-provided physical RAM map:
Apr 20 15:06:16.030098 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 20 15:06:16.030104 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 20 15:06:16.030111 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 20 15:06:16.030122 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 20 15:06:16.030231 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Apr 20 15:06:16.030243 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 20 15:06:16.030254 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 20 15:06:16.030264 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 20 15:06:16.030557 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 20 15:06:16.030567 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 20 15:06:16.030578 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 20 15:06:16.030589 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 20 15:06:16.030601 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 20 15:06:16.030612 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 15:06:16.030619 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 15:06:16.030629 kernel: NX (Execute Disable) protection: active
Apr 20 15:06:16.030639 kernel: APIC: Static calls initialized
Apr 20 15:06:16.030649 kernel: e820: update [mem 0x9a142018-0x9a14bc57] usable ==> usable
Apr 20 15:06:16.030662 kernel: e820: update [mem 0x9a105018-0x9a141e57] usable ==> usable
Apr 20 15:06:16.030669 kernel: extended physical RAM map:
Apr 20 15:06:16.030679 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 20 15:06:16.030689 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 20 15:06:16.030698 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 20 15:06:16.030708 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 20 15:06:16.030715 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a105017] usable
Apr 20 15:06:16.030722 kernel: reserve setup_data: [mem 0x000000009a105018-0x000000009a141e57] usable
Apr 20 15:06:16.030732 kernel: reserve setup_data: [mem 0x000000009a141e58-0x000000009a142017] usable
Apr 20 15:06:16.030742 kernel: reserve setup_data: [mem 0x000000009a142018-0x000000009a14bc57] usable
Apr 20 15:06:16.030752 kernel: reserve setup_data: [mem 0x000000009a14bc58-0x000000009b8ecfff] usable
Apr 20 15:06:16.030761 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 20 15:06:16.030772 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 20 15:06:16.030779 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 20 15:06:16.030791 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 20 15:06:16.030798 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 20 15:06:16.030806 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 20 15:06:16.030814 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 20 15:06:16.030823 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 20 15:06:16.030838 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 15:06:16.031885 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 15:06:16.032081 kernel: efi: EFI v2.7 by EDK II
Apr 20 15:06:16.032090 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1b4018 RNG=0x9bb73018
Apr 20 15:06:16.032098 kernel: random: crng init done
Apr 20 15:06:16.032106 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Apr 20 15:06:16.032146 kernel: secureboot: Secure boot enabled
Apr 20 15:06:16.032185 kernel: SMBIOS 2.8 present.
Apr 20 15:06:16.032193 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 20 15:06:16.032205 kernel: DMI: Memory slots populated: 1/1
Apr 20 15:06:16.032216 kernel: Hypervisor detected: KVM
Apr 20 15:06:16.032224 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 20 15:06:16.032232 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 15:06:16.032243 kernel: kvm-clock: using sched offset of 26572580709 cycles
Apr 20 15:06:16.032254 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 15:06:16.033653 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 15:06:16.033793 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 15:06:16.033904 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 15:06:16.034015 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 20 15:06:16.034022 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 20 15:06:16.034038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 15:06:16.034044 kernel: Using GB pages for direct mapping
Apr 20 15:06:16.034891 kernel: ACPI: Early table checksum verification disabled
Apr 20 15:06:16.035177 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Apr 20 15:06:16.035183 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 20 15:06:16.035189 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:06:16.035195 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:06:16.035201 kernel: ACPI: FACS 0x000000009BBDD000 000040
Apr 20 15:06:16.035207 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:06:16.035213 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:06:16.035221 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:06:16.035234 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:06:16.035240 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 20 15:06:16.035246 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Apr 20 15:06:16.035252 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Apr 20 15:06:16.035258 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Apr 20 15:06:16.035264 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Apr 20 15:06:16.035271 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Apr 20 15:06:16.035278 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Apr 20 15:06:16.035483 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Apr 20 15:06:16.035489 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Apr 20 15:06:16.035497 kernel: No NUMA configuration found
Apr 20 15:06:16.035503 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Apr 20 15:06:16.035509 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Apr 20 15:06:16.035517 kernel: Zone ranges:
Apr 20 15:06:16.035523 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 15:06:16.035529 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Apr 20 15:06:16.035535 kernel: Normal empty
Apr 20 15:06:16.035541 kernel: Device empty
Apr 20 15:06:16.035546 kernel: Movable zone start for each node
Apr 20 15:06:16.035552 kernel: Early memory node ranges
Apr 20 15:06:16.035558 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Apr 20 15:06:16.035565 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Apr 20 15:06:16.035571 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Apr 20 15:06:16.035577 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Apr 20 15:06:16.035583 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Apr 20 15:06:16.035588 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Apr 20 15:06:16.035594 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 15:06:16.035600 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Apr 20 15:06:16.035607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 20 15:06:16.035613 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 20 15:06:16.035619 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 20 15:06:16.035625 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Apr 20 15:06:16.035631 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 15:06:16.035726 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 15:06:16.035734 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 15:06:16.035742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 15:06:16.035748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 15:06:16.035754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 15:06:16.035760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 15:06:16.035765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 15:06:16.035771 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 15:06:16.035777 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 15:06:16.035783 kernel: TSC deadline timer available
Apr 20 15:06:16.035791 kernel: CPU topo: Max. logical packages: 1
Apr 20 15:06:16.035797 kernel: CPU topo: Max. logical dies: 1
Apr 20 15:06:16.035802 kernel: CPU topo: Max. dies per package: 1
Apr 20 15:06:16.035808 kernel: CPU topo: Max. threads per core: 1
Apr 20 15:06:16.035820 kernel: CPU topo: Num. cores per package: 4
Apr 20 15:06:16.035827 kernel: CPU topo: Num. threads per package: 4
Apr 20 15:06:16.035834 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 15:06:16.036022 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 15:06:16.036030 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 15:06:16.036039 kernel: kvm-guest: setup PV sched yield
Apr 20 15:06:16.036045 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 20 15:06:16.036052 kernel: Booting paravirtualized kernel on KVM
Apr 20 15:06:16.036058 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 15:06:16.036066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 15:06:16.036073 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 15:06:16.036082 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 15:06:16.036088 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 15:06:16.036094 kernel: kvm-guest: PV spinlocks enabled
Apr 20 15:06:16.036101 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 15:06:16.036108 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 15:06:16.036116 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 15:06:16.036123 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 15:06:16.036129 kernel: Fallback order for Node 0: 0
Apr 20 15:06:16.036136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Apr 20 15:06:16.036142 kernel: Policy zone: DMA32
Apr 20 15:06:16.036148 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 15:06:16.036154 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 15:06:16.036162 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 15:06:16.036168 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 15:06:16.036174 kernel: Dynamic Preempt: voluntary
Apr 20 15:06:16.036180 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 15:06:16.036188 kernel: rcu: RCU event tracing is enabled.
Apr 20 15:06:16.036194 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 15:06:16.036200 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 15:06:16.036208 kernel: Rude variant of Tasks RCU enabled.
Apr 20 15:06:16.036214 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 15:06:16.036481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 15:06:16.036489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 15:06:16.036496 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 15:06:16.036503 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 15:06:16.036509 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 15:06:16.036518 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 15:06:16.036525 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 15:06:16.036531 kernel: Console: colour dummy device 80x25
Apr 20 15:06:16.036537 kernel: printk: legacy console [ttyS0] enabled
Apr 20 15:06:16.036543 kernel: ACPI: Core revision 20240827
Apr 20 15:06:16.036550 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 15:06:16.036556 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 15:06:16.036564 kernel: x2apic enabled
Apr 20 15:06:16.036571 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 15:06:16.036577 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 15:06:16.036583 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 15:06:16.036589 kernel: kvm-guest: setup PV IPIs
Apr 20 15:06:16.036596 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 15:06:16.036602 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 15:06:16.036608 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 15:06:16.036616 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 15:06:16.036622 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 15:06:16.036628 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 15:06:16.036635 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 15:06:16.036641 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 15:06:16.036647 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 15:06:16.037235 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 15:06:16.037542 kernel: RETBleed: Vulnerable
Apr 20 15:06:16.037549 kernel: Speculative Store Bypass: Vulnerable
Apr 20 15:06:16.037556 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 15:06:16.037562 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 15:06:16.037569 kernel: active return thunk: its_return_thunk
Apr 20 15:06:16.037575 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 15:06:16.037581 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 15:06:16.037590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 15:06:16.037597 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 15:06:16.037603 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 15:06:16.037609 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 15:06:16.037616 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 15:06:16.037622 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 15:06:16.037628 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 15:06:16.037637 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 15:06:16.037643 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 15:06:16.037649 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 15:06:16.037655 kernel: Freeing SMP alternatives memory: 32K
Apr 20 15:06:16.037662 kernel: pid_max: default: 32768 minimum: 301
Apr 20 15:06:16.037668 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 15:06:16.038261 kernel: landlock: Up and running.
Apr 20 15:06:16.038566 kernel: SELinux: Initializing.
Apr 20 15:06:16.038573 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 15:06:16.038580 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 15:06:16.038586 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 15:06:16.038593 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 15:06:16.038599 kernel: signal: max sigframe size: 3632
Apr 20 15:06:16.038606 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 15:06:16.038615 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 15:06:16.038621 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 15:06:16.038627 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 15:06:16.038633 kernel: smp: Bringing up secondary CPUs ...
Apr 20 15:06:16.038640 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 15:06:16.038646 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 15:06:16.038652 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 15:06:16.038660 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 15:06:16.038667 kernel: Memory: 2381832K/2552216K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 164492K reserved, 0K cma-reserved)
Apr 20 15:06:16.038673 kernel: devtmpfs: initialized
Apr 20 15:06:16.038679 kernel: x86/mm: Memory block size: 128MB
Apr 20 15:06:16.038686 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Apr 20 15:06:16.038692 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Apr 20 15:06:16.038698 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 15:06:16.038706 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 15:06:16.038713 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 15:06:16.038719 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 15:06:16.038725 kernel: audit: initializing netlink subsys (disabled)
Apr 20 15:06:16.038732 kernel: audit: type=2000 audit(1776697553.696:1): state=initialized audit_enabled=0 res=1
Apr 20 15:06:16.038738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 15:06:16.038744 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 15:06:16.038752 kernel: cpuidle: using governor menu
Apr 20 15:06:16.038758 kernel: efi: Freeing EFI boot services memory: 42800K
Apr 20 15:06:16.038764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 15:06:16.038771 kernel: dca service started, version 1.12.1
Apr 20 15:06:16.038777 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 20 15:06:16.038783 kernel: PCI: Using configuration type 1 for base access
Apr 20 15:06:16.038790 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 15:06:16.038798 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 15:06:16.038804 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 15:06:16.038811 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 15:06:16.038817 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 15:06:16.038823 kernel: ACPI: Added _OSI(Module Device)
Apr 20 15:06:16.038830 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 15:06:16.038836 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 15:06:16.038844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 15:06:16.038850 kernel: ACPI: Interpreter enabled
Apr 20 15:06:16.038856 kernel: ACPI: PM: (supports S0 S5)
Apr 20 15:06:16.038862 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 15:06:16.038869 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 15:06:16.038875 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 15:06:16.038881 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 15:06:16.038889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 15:06:16.039576 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 15:06:16.039694 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 15:06:16.039792 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 15:06:16.039800 kernel: PCI host bridge to bus 0000:00
Apr 20 15:06:16.039899 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 15:06:16.040135 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 15:06:16.040225 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 15:06:16.040523 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 20 15:06:16.040617 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 20 15:06:16.040710 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 20 15:06:16.040802 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 15:06:16.041046 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 15:06:16.041159 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 15:06:16.041256 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 20 15:06:16.041561 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 20 15:06:16.041661 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 20 15:06:16.041762 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 15:06:16.041857 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0xe0 took 19531 usecs
Apr 20 15:06:16.042093 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 15:06:16.042192 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 20 15:06:16.042488 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 20 15:06:16.042596 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 20 15:06:16.042700 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 15:06:16.042795 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 20 15:06:16.042890 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 20 15:06:16.043800 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 20 15:06:16.044079 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 15:06:16.044188 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 20 15:06:16.044491 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 20 15:06:16.044598 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 20 15:06:16.044696 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 20 15:06:16.044800 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 15:06:16.044899 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 15:06:16.045133 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 15625 usecs
Apr 20 15:06:16.045238 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 15:06:16.045632 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 20 15:06:16.045729 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 20 15:06:16.045831 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 15:06:16.047244 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 20 15:06:16.047494 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 15:06:16.047503 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 15:06:16.047510 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 15:06:16.047517 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 15:06:16.047523 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 15:06:16.047530 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 15:06:16.047557 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 15:06:16.047563 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 15:06:16.047570 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 15:06:16.047576 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 15:06:16.047583 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 15:06:16.047680 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 15:06:16.047687 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 15:06:16.047696 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 15:06:16.047702 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 15:06:16.047709 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 15:06:16.047715 kernel: iommu: Default domain type: Translated
Apr 20 15:06:16.047722 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 15:06:16.047728 kernel: efivars: Registered efivars operations
Apr 20 15:06:16.047734 kernel: PCI: Using ACPI for IRQ routing
Apr 20 15:06:16.047742 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 15:06:16.047749 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Apr 20 15:06:16.047755 kernel: e820: reserve RAM buffer [mem 0x9a105018-0x9bffffff]
Apr 20 15:06:16.047761 kernel: e820: reserve RAM buffer [mem 0x9a142018-0x9bffffff]
Apr 20 15:06:16.047768 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Apr 20 15:06:16.047774 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Apr 20 15:06:16.047895 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 15:06:16.049193 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 15:06:16.051601 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 15:06:16.051615 kernel: vgaarb: loaded
Apr 20 15:06:16.051622 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 15:06:16.051629 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 15:06:16.051635 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 15:06:16.051642 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 15:06:16.051649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 15:06:16.051661 kernel: pnp: PnP ACPI init
Apr 20 15:06:16.051853 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 20 15:06:16.051865 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 15:06:16.051872 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 15:06:16.051878 kernel: NET: Registered PF_INET protocol family
Apr 20 15:06:16.051885 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 15:06:16.051896 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 15:06:16.051903 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 15:06:16.051909 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 15:06:16.052694 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 15:06:16.052703 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 15:06:16.052710 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 15:06:16.052717 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 15:06:16.053768 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 15:06:16.053778 kernel: NET: Registered PF_XDP protocol family
Apr 20 15:06:16.057688 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 20 15:06:16.057800 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 20 15:06:16.057897 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 15:06:16.058148 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 15:06:16.058268 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 15:06:16.059236 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 20 15:06:16.059551 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 20 15:06:16.059641 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 20 15:06:16.059650 kernel: PCI: CLS 0 bytes, default 64
Apr 20 15:06:16.059657 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 15:06:16.059664 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 15:06:16.059676 kernel: Initialise system trusted keyrings
Apr 20 15:06:16.059682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 15:06:16.059689 kernel: Key type asymmetric registered
Apr 20 15:06:16.059696 kernel: Asymmetric key parser 'x509' registered
Apr 20 15:06:16.059718 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 15:06:16.059726 kernel: io scheduler mq-deadline registered
Apr 20 15:06:16.059732 kernel: io scheduler kyber registered
Apr 20 15:06:16.059741 kernel: io scheduler bfq registered
Apr 20 15:06:16.059747 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 15:06:16.059755 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 15:06:16.059761 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 15:06:16.059768 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 15:06:16.059774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 15:06:16.059781 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 15:06:16.059789 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 15:06:16.059796 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 15:06:16.059802 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 15:06:16.060072 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 15:06:16.060085 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 20 15:06:16.060179 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 15:06:16.060266 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T15:06:05 UTC (1776697565)
Apr 20 15:06:16.060572 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 20 15:06:16.060582 kernel: intel_pstate: CPU model not supported
Apr 20 15:06:16.060589 kernel: efifb: probing for efifb
Apr 20 15:06:16.060595 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 20 15:06:16.060602 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 20 15:06:16.060609 kernel: efifb: scrolling: redraw
Apr 20 15:06:16.060619 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 20 15:06:16.060626 kernel: Console: switching to colour frame buffer device 160x50
Apr 20 15:06:16.060632 kernel: fb0: EFI VGA frame buffer device
Apr 20 15:06:16.060639 kernel: pstore: Using crash dump compression: deflate
Apr 20 15:06:16.060646 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 20 15:06:16.060654 kernel: NET: Registered PF_INET6 protocol family
Apr 20 15:06:16.060661 kernel: Segment Routing with IPv6
Apr 20 15:06:16.060668 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 15:06:16.060674 kernel: NET: Registered PF_PACKET protocol family
Apr 20 15:06:16.060681 kernel: Key type dns_resolver registered
Apr 20 15:06:16.060687 kernel: IPI shorthand broadcast: enabled
Apr 20 15:06:16.060694 kernel: sched_clock: Marking stable (11627135687, 2917719470)->(16129801161, -1584946004)
Apr 20 15:06:16.060702 kernel: registered taskstats version 1
Apr 20 15:06:16.060709 kernel: Loading compiled-in X.509 certificates
Apr 20 15:06:16.060715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
Apr 20 15:06:16.060722 kernel: Demotion targets for Node 0: null
Apr 20 15:06:16.060728 kernel: Key type .fscrypt registered
Apr 20 15:06:16.060734 kernel: Key type fscrypt-provisioning registered
Apr 20 15:06:16.060741 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 20 15:06:16.060749 kernel: ima: Allocated hash algorithm: sha1
Apr 20 15:06:16.060756 kernel: ima: No architecture policies found
Apr 20 15:06:16.060762 kernel: clk: Disabling unused clocks
Apr 20 15:06:16.060769 kernel: Freeing unused kernel image (initmem) memory: 15944K
Apr 20 15:06:16.060775 kernel: Write protecting the kernel read-only data: 47104k
Apr 20 15:06:16.060781 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K
Apr 20 15:06:16.060788 kernel: Run /init as init process
Apr 20 15:06:16.060795 kernel: with arguments:
Apr 20 15:06:16.060802 kernel: /init
Apr 20 15:06:16.060809 kernel: with environment:
Apr 20 15:06:16.060817 kernel: HOME=/
Apr 20 15:06:16.060823 kernel: TERM=linux
Apr 20 15:06:16.060830 kernel: SCSI subsystem initialized
Apr 20 15:06:16.060836 kernel: libata version 3.00 loaded.
Apr 20 15:06:16.061062 kernel: ahci 0000:00:1f.2: version 3.0
Apr 20 15:06:16.061077 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 20 15:06:16.061177 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 20 15:06:16.061274 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 20 15:06:16.061641 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 20 15:06:16.064265 kernel: scsi host0: ahci
Apr 20 15:06:16.068756 kernel: scsi host1: ahci
Apr 20 15:06:16.068885 kernel: scsi host2: ahci
Apr 20 15:06:16.070075 kernel: scsi host3: ahci
Apr 20 15:06:16.070202 kernel: scsi host4: ahci
Apr 20 15:06:16.095544 kernel: scsi host5: ahci
Apr 20 15:06:16.095591 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Apr 20 15:06:16.095606 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Apr 20 15:06:16.095613 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Apr 20 15:06:16.095620 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Apr 20 15:06:16.095627 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Apr 20 15:06:16.095633 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Apr 20 15:06:16.095640 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 20 15:06:16.095647 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 20 15:06:16.095656 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 20 15:06:16.095663 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 20 15:06:16.095669 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 20 15:06:16.095676 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 15:06:16.095683 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 20 15:06:16.095690 kernel: ata3.00: applying bridge limits
Apr 20 15:06:16.095697 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 20 15:06:16.095706 kernel: ata3.00: LPM support broken, forcing max_power
Apr 20 15:06:16.095712 kernel: ata3.00: configured for UDMA/100
Apr 20 15:06:16.095906 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 20 15:06:16.096148 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 20 15:06:16.096263 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 20 15:06:16.096551 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Apr 20 15:06:16.096566 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 20 15:06:16.096573 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 20 15:06:16.096579 kernel: GPT:16515071 != 27000831
Apr 20 15:06:16.096586 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 20 15:06:16.096593 kernel: GPT:16515071 != 27000831
Apr 20 15:06:16.096599 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 20 15:06:16.096607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 20 15:06:16.096717 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 20 15:06:16.096726 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 20 15:06:16.096732 kernel: device-mapper: uevent: version 1.0.3
Apr 20 15:06:16.096740 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 20 15:06:16.096747 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Apr 20 15:06:16.096753 kernel: raid6: avx512x4 gen() 12125 MB/s
Apr 20 15:06:16.096762 kernel: raid6: avx512x2 gen() 23373 MB/s
Apr 20 15:06:16.096769 kernel: raid6: avx512x1 gen() 27641 MB/s
Apr 20 15:06:16.096775 kernel: raid6: avx2x4 gen() 17904 MB/s
Apr 20 15:06:16.096782 kernel: raid6: avx2x2 gen() 18043 MB/s
Apr 20 15:06:16.096788 kernel: raid6: avx2x1 gen() 21127 MB/s
Apr 20 15:06:16.096795 kernel: raid6: using algorithm avx512x1 gen() 27641 MB/s
Apr 20 15:06:16.096802 kernel: raid6: .... xor() 16218 MB/s, rmw enabled
Apr 20 15:06:16.096809 kernel: raid6: using avx512x2 recovery algorithm
Apr 20 15:06:16.096818 kernel: xor: automatically using best checksumming function avx
Apr 20 15:06:16.096824 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 20 15:06:16.096831 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181)
Apr 20 15:06:16.096838 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f
Apr 20 15:06:16.096845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:06:16.096851 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 20 15:06:16.096858 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 20 15:06:16.096866 kernel: loop: module loaded
Apr 20 15:06:16.096873 kernel: loop0: detected capacity change from 0 to 106960
Apr 20 15:06:16.096880 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 20 15:06:16.096891 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored
Apr 20 15:06:16.096901 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored
Apr 20 15:06:16.096908 systemd[1]: Successfully made /usr/ read-only.
Apr 20 15:06:16.097021 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 20 15:06:16.097028 systemd[1]: Detected virtualization kvm.
Apr 20 15:06:16.097035 systemd[1]: Detected architecture x86-64.
Apr 20 15:06:16.097042 systemd[1]: Running in initrd.
Apr 20 15:06:16.097049 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 15:06:16.097056 systemd[1]: No hostname configured, using default hostname.
Apr 20 15:06:16.097065 systemd[1]: Hostname set to .
Apr 20 15:06:16.097072 kernel: hrtimer: interrupt took 14703300 ns
Apr 20 15:06:16.097079 systemd[1]: Queued start job for default target initrd.target.
Apr 20 15:06:16.097086 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Apr 20 15:06:16.097093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 15:06:16.097100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 15:06:16.097111 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 20 15:06:16.097118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 15:06:16.097125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 20 15:06:16.097132 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 20 15:06:16.097139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 15:06:16.097147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 15:06:16.097156 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 15:06:16.097163 systemd[1]: Reached target paths.target - Path Units.
Apr 20 15:06:16.097170 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 15:06:16.097177 systemd[1]: Reached target swap.target - Swaps.
Apr 20 15:06:16.097184 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 15:06:16.097191 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 15:06:16.097198 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 15:06:16.097207 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 15:06:16.097214 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 20 15:06:16.097221 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 20 15:06:16.097228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 15:06:16.097235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 15:06:16.097241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 20 15:06:16.097248 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 15:06:16.097257 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 20 15:06:16.097264 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 20 15:06:16.097271 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 15:06:16.097278 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 20 15:06:16.097447 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 20 15:06:16.097454 systemd[1]: Starting systemd-fsck-usr.service...
Apr 20 15:06:16.097464 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 15:06:16.097471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 15:06:16.097478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 15:06:16.097485 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 20 15:06:16.097603 systemd-journald[319]: Collecting audit messages is enabled.
Apr 20 15:06:16.097635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 15:06:16.097647 kernel: audit: type=1130 audit(1776697576.044:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.097661 systemd[1]: Finished systemd-fsck-usr.service.
Apr 20 15:06:16.097671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 20 15:06:16.097681 kernel: audit: type=1130 audit(1776697576.091:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.097691 systemd-journald[319]: Journal started
Apr 20 15:06:16.097714 systemd-journald[319]: Runtime Journal (/run/log/journal/3af9b55cf7dd4692958ff0f457276295) is 5.9M, max 47.8M, 41.8M free.
Apr 20 15:06:16.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.148587 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 15:06:16.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.189618 kernel: audit: type=1130 audit(1776697576.159:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.236765 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 15:06:16.351888 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 20 15:06:16.368606 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 15:06:16.375908 systemd-modules-load[321]: Inserted module 'br_netfilter'
Apr 20 15:06:16.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.399112 kernel: Bridge firewalling registered
Apr 20 15:06:16.397782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 15:06:16.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.505886 kernel: audit: type=1130 audit(1776697576.396:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.415746 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 20 15:06:16.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.574504 kernel: audit: type=1130 audit(1776697576.439:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.505496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 15:06:16.576128 kernel: audit: type=1130 audit(1776697576.522:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.574621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 15:06:16.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.633231 kernel: audit: type=1130 audit(1776697576.588:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.597556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 20 15:06:16.694803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 20 15:06:16.721122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 20 15:06:16.860834 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 15:06:16.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.919912 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 20 15:06:16.931717 kernel: audit: type=1130 audit(1776697576.903:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:16.979721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 20 15:06:16.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:17.041110 kernel: audit: type=1130 audit(1776697576.989:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:17.061582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 15:06:17.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:17.099000 audit: BPF prog-id=5 op=LOAD
Apr 20 15:06:17.138271 kernel: audit: type=1130 audit(1776697577.062:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:17.114859 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 20 15:06:17.305707 dracut-cmdline[356]: dracut-109
Apr 20 15:06:17.337732 dracut-cmdline[356]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 15:06:17.583109 systemd-resolved[359]: Positive Trust Anchors:
Apr 20 15:06:17.585906 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 20 15:06:17.585915 systemd-resolved[359]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 20 15:06:17.587902 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 20 15:06:17.861212 systemd-resolved[359]: Defaulting to hostname 'linux'.
Apr 20 15:06:17.888103 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 20 15:06:17.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:17.902729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 20 15:06:18.606085 kernel: Loading iSCSI transport class v2.0-870.
Apr 20 15:06:18.655180 kernel: iscsi: registered transport (tcp)
Apr 20 15:06:18.887695 kernel: iscsi: registered transport (qla4xxx)
Apr 20 15:06:18.887897 kernel: QLogic iSCSI HBA Driver
Apr 20 15:06:19.502233 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 15:06:19.606892 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 15:06:19.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:19.615210 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 15:06:19.924638 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 20 15:06:19.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:19.927874 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 20 15:06:19.956090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 20 15:06:20.116183 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 15:06:20.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:20.154000 audit: BPF prog-id=6 op=LOAD
Apr 20 15:06:20.154000 audit: BPF prog-id=7 op=LOAD
Apr 20 15:06:20.161614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 15:06:20.323643 systemd-udevd[584]: Using default interface naming scheme 'v258'.
Apr 20 15:06:20.427669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 15:06:20.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:20.441760 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 20 15:06:20.612803 dracut-pre-trigger[624]: rd.md=0: removing MD RAID activation
Apr 20 15:06:20.780922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 15:06:20.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:20.801530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 15:06:20.874046 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 15:06:20.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:20.909000 audit: BPF prog-id=8 op=LOAD
Apr 20 15:06:20.913600 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 20 15:06:21.212210 systemd-networkd[737]: lo: Link UP
Apr 20 15:06:21.212550 systemd-networkd[737]: lo: Gained carrier
Apr 20 15:06:21.229226 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 20 15:06:21.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:21.268735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 15:06:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:21.374257 systemd[1]: Reached target network.target - Network.
Apr 20 15:06:21.477106 kernel: kauditd_printk_skb: 11 callbacks suppressed
Apr 20 15:06:21.433070 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 20 15:06:21.497929 kernel: audit: type=1130 audit(1776697581.264:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:21.499841 kernel: audit: type=1130 audit(1776697581.368:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:21.687917 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 20 15:06:21.822840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 20 15:06:21.938604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 20 15:06:22.024134 kernel: cryptd: max_cpu_qlen set to 1000
Apr 20 15:06:22.037540 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 20 15:06:22.104769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 20 15:06:22.210798 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 20 15:06:22.291151 disk-uuid[780]: Primary Header is updated.
Apr 20 15:06:22.291151 disk-uuid[780]: Secondary Entries is updated.
Apr 20 15:06:22.291151 disk-uuid[780]: Secondary Header is updated.
Apr 20 15:06:22.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:22.416846 kernel: audit: type=1131 audit(1776697582.321:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:22.292623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 15:06:22.441887 kernel: AES CTR mode by8 optimization enabled
Apr 20 15:06:22.292799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 15:06:22.360613 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 15:06:22.409232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 15:06:22.634886 systemd-networkd[737]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 15:06:22.634896 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 20 15:06:22.641164 systemd-networkd[737]: eth0: Link UP
Apr 20 15:06:22.649605 systemd-networkd[737]: eth0: Gained carrier
Apr 20 15:06:22.649622 systemd-networkd[737]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 15:06:22.695119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 15:06:22.707237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 15:06:22.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:22.797721 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 20 15:06:22.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:22.883100 kernel: audit: type=1130 audit(1776697582.795:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:22.883125 kernel: audit: type=1131 audit(1776697582.795:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:22.947130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 15:06:23.084142 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 20 15:06:23.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.194924 kernel: audit: type=1130 audit(1776697583.084:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.086062 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 15:06:23.136669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 15:06:23.146689 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 15:06:23.199227 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 20 15:06:23.235857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 15:06:23.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.390219 kernel: audit: type=1130 audit(1776697583.336:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.513688 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 15:06:23.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.570762 kernel: audit: type=1130 audit(1776697583.526:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.617853 disk-uuid[781]: Warning: The kernel is still using the old partition table.
Apr 20 15:06:23.617853 disk-uuid[781]: The new table will be used at the next reboot or after you
Apr 20 15:06:23.617853 disk-uuid[781]: run partprobe(8) or kpartx(8)
Apr 20 15:06:23.617853 disk-uuid[781]: The operation has completed successfully.
Apr 20 15:06:23.746921 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 20 15:06:23.749825 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 20 15:06:23.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.826867 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 20 15:06:23.855472 kernel: audit: type=1130 audit(1776697583.799:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:23.855503 kernel: audit: type=1131 audit(1776697583.799:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:24.052846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (901)
Apr 20 15:06:24.092198 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:06:24.092507 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:06:24.134921 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 15:06:24.138701 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 15:06:24.280636 systemd-networkd[737]: eth0: Gained IPv6LL
Apr 20 15:06:24.304714 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:06:24.309608 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 20 15:06:24.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:24.347119 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 20 15:06:24.838171 ignition[920]: Ignition 2.24.0
Apr 20 15:06:24.838272 ignition[920]: Stage: fetch-offline
Apr 20 15:06:24.838627 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Apr 20 15:06:24.838635 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:06:24.838706 ignition[920]: parsed url from cmdline: ""
Apr 20 15:06:24.838709 ignition[920]: no config URL provided
Apr 20 15:06:24.838774 ignition[920]: reading system config file "/usr/lib/ignition/user.ign"
Apr 20 15:06:24.838781 ignition[920]: no config at "/usr/lib/ignition/user.ign"
Apr 20 15:06:24.838844 ignition[920]: op(1): [started] loading QEMU firmware config module
Apr 20 15:06:24.838847 ignition[920]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 20 15:06:24.914807 ignition[920]: op(1): [finished] loading QEMU firmware config module
Apr 20 15:06:24.914872 ignition[920]: QEMU firmware config was not found. Ignoring...
Apr 20 15:06:25.531061 ignition[920]: parsing config with SHA512: 3bf7cbf3c443a65dade124a5af519e02f13d45911253313e42b168e6b53f18ef678b383c5a04b833cd2aa31850bd82d556f2495827dbe7de24fe0bc7914af83c
Apr 20 15:06:25.575791 unknown[920]: fetched base config from "system"
Apr 20 15:06:25.575931 unknown[920]: fetched user config from "qemu"
Apr 20 15:06:25.578201 ignition[920]: fetch-offline: fetch-offline passed
Apr 20 15:06:25.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:25.589822 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 15:06:25.578682 ignition[920]: Ignition finished successfully
Apr 20 15:06:25.615480 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 20 15:06:25.617084 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 20 15:06:25.922101 ignition[930]: Ignition 2.24.0
Apr 20 15:06:25.922118 ignition[930]: Stage: kargs
Apr 20 15:06:25.956787 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 20 15:06:25.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:25.924835 ignition[930]: no configs at "/usr/lib/ignition/base.d"
Apr 20 15:06:25.987145 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 20 15:06:25.924847 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:06:25.934174 ignition[930]: kargs: kargs passed
Apr 20 15:06:25.934579 ignition[930]: Ignition finished successfully
Apr 20 15:06:26.248744 ignition[938]: Ignition 2.24.0
Apr 20 15:06:26.248874 ignition[938]: Stage: disks
Apr 20 15:06:26.249253 ignition[938]: no configs at "/usr/lib/ignition/base.d"
Apr 20 15:06:26.249264 ignition[938]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:06:26.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:26.322947 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 20 15:06:26.461250 kernel: kauditd_printk_skb: 3 callbacks suppressed
Apr 20 15:06:26.254714 ignition[938]: disks: disks passed
Apr 20 15:06:26.346213 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 20 15:06:26.509243 kernel: audit: type=1130 audit(1776697586.344:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:26.254783 ignition[938]: Ignition finished successfully
Apr 20 15:06:26.412820 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 20 15:06:26.431219 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 15:06:26.459504 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 15:06:26.475567 systemd[1]: Reached target basic.target - Basic System.
Apr 20 15:06:26.512883 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 20 15:06:26.936079 systemd-fsck[948]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Apr 20 15:06:26.970590 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 20 15:06:26.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:27.025859 kernel: audit: type=1130 audit(1776697586.982:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:26.997753 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 20 15:06:28.118849 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none.
Apr 20 15:06:28.123137 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 20 15:06:28.144540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 20 15:06:28.190144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 15:06:28.203934 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 20 15:06:28.235092 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 20 15:06:28.235497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 20 15:06:28.235535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 15:06:28.385563 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 20 15:06:28.427861 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (957)
Apr 20 15:06:28.428206 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 20 15:06:28.485272 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:06:28.486558 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:06:28.513589 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 15:06:28.513939 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 15:06:28.533698 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 15:06:31.409841 kernel: loop1: detected capacity change from 0 to 43472
Apr 20 15:06:31.432783 kernel: loop1: p1 p2 p3
Apr 20 15:06:32.043198 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:32.043978 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:32.045228 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:32.070618 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:32.070830 systemd-confext[1047]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 15:06:32.119187 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:33.718156 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 15:06:33.848652 kernel: loop2: detected capacity change from 0 to 43472
Apr 20 15:06:33.867987 kernel: loop2: p1 p2 p3
Apr 20 15:06:33.986185 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:33.988162 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:33.990584 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:33.997899 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:34.006821 (sd-merge)[1057]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 15:06:34.047857 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:34.777174 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 15:06:34.780862 (sd-merge)[1057]: Using extensions '00-flatcar-default.raw'.
Apr 20 15:06:34.807918 (sd-merge)[1057]: Merged extensions into '/sysroot/etc'.
Apr 20 15:06:34.872976 initrd-setup-root[1064]: /etc 00-flatcar-default Mon 2026-04-20 15:06:16 UTC
Apr 20 15:06:34.897881 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 20 15:06:34.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:34.961921 kernel: audit: type=1130 audit(1776697594.914:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:34.938619 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 20 15:06:35.000989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 20 15:06:35.115916 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 20 15:06:35.140267 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:06:35.217872 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 20 15:06:35.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:35.331907 kernel: audit: type=1130 audit(1776697595.217:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:35.392992 ignition[1074]: INFO : Ignition 2.24.0
Apr 20 15:06:35.392992 ignition[1074]: INFO : Stage: mount
Apr 20 15:06:35.392992 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 15:06:35.392992 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:06:35.485561 ignition[1074]: INFO : mount: mount passed
Apr 20 15:06:35.501921 ignition[1074]: INFO : Ignition finished successfully
Apr 20 15:06:35.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:35.493206 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 20 15:06:35.597691 kernel: audit: type=1130 audit(1776697595.517:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:35.529687 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 20 15:06:35.705617 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 15:06:35.885176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1085)
Apr 20 15:06:35.915643 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:06:35.916784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:06:35.983257 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 15:06:35.983678 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 15:06:35.988844 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 15:06:36.148535 ignition[1102]: INFO : Ignition 2.24.0
Apr 20 15:06:36.148535 ignition[1102]: INFO : Stage: files
Apr 20 15:06:36.148535 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 15:06:36.148535 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:06:36.210984 ignition[1102]: DEBUG : files: compiled without relabeling support, skipping
Apr 20 15:06:36.210984 ignition[1102]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 20 15:06:36.210984 ignition[1102]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 20 15:06:36.210984 ignition[1102]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 20 15:06:36.210984 ignition[1102]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 20 15:06:36.210984 ignition[1102]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 20 15:06:36.210984 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 15:06:36.210984 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 20 15:06:36.171744 unknown[1102]: wrote ssh authorized keys file for user: core
Apr 20 15:06:36.534598 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 20 15:06:36.711685 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 15:06:36.711685 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:06:36.756785 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 20 15:06:37.218230 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 20 15:06:42.215428 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:06:42.215428 ignition[1102]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 20 15:06:42.245690 ignition[1102]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 15:06:42.265852 ignition[1102]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 15:06:42.265852 ignition[1102]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 20 15:06:42.265852 ignition[1102]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 20 15:06:42.265852 ignition[1102]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 15:06:42.309581 ignition[1102]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 15:06:42.309581 ignition[1102]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 20 15:06:42.309581 ignition[1102]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 20 15:06:42.504735 ignition[1102]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 15:06:42.558018 ignition[1102]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 15:06:42.573921 ignition[1102]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 20 15:06:42.573921 ignition[1102]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 20 15:06:42.573921 ignition[1102]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 20 15:06:42.573921 ignition[1102]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 15:06:42.615269 ignition[1102]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 15:06:42.615269 ignition[1102]: INFO : files: files passed
Apr 20 15:06:42.615269 ignition[1102]: INFO : Ignition finished successfully
Apr 20 15:06:42.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:42.668249 kernel: audit: type=1130 audit(1776697602.632:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:42.616947 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 20 15:06:42.717931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 20 15:06:42.731794 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 20 15:06:42.760866 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 20 15:06:42.767531 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 20 15:06:42.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:42.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:42.806195 kernel: audit: type=1130 audit(1776697602.780:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:42.806630 kernel: audit: type=1131 audit(1776697602.780:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:42.843205 initrd-setup-root-after-ignition[1134]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 20 15:06:42.866028 initrd-setup-root-after-ignition[1136]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 15:06:42.866028 initrd-setup-root-after-ignition[1136]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 15:06:42.890242 initrd-setup-root-after-ignition[1140]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 15:06:42.923453 kernel: loop3: detected capacity change from 0 to 43472
Apr 20 15:06:42.927627 kernel: loop3: p1 p2 p3
Apr 20 15:06:42.972939 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:42.973007 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:42.973022 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:42.983681 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:42.983988 systemd-confext[1142]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument
Apr 20 15:06:42.999175 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:43.132557 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 15:06:43.171450 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 15:06:43.211514 kernel: loop4: p1 p2 p3
Apr 20 15:06:43.260173 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:43.261570 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:43.261588 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:43.270621 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:43.270853 (sd-merge)[1154]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument
Apr 20 15:06:43.285582 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:43.420777 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 15:06:43.422942 (sd-merge)[1154]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 15:06:43.451478 kernel: device-mapper: ioctl: remove_all left 2 open device(s)
Apr 20 15:06:43.473392 kernel: loop4: detected capacity change from 0 to 378016
Apr 20 15:06:43.481407 kernel: loop4: p1 p2 p3
Apr 20 15:06:43.542644 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:43.542860 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:43.542872 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:43.553607 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:43.553882 systemd-sysext[1162]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 15:06:43.572208 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:43.806756 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 15:06:43.853600 kernel: loop5: detected capacity change from 0 to 219192
Apr 20 15:06:43.980814 kernel: loop6: detected capacity change from 0 to 178200
Apr 20 15:06:43.991776 kernel: loop6: p1 p2 p3
Apr 20 15:06:44.109752 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:44.109959 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:44.109975 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:44.120735 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:44.121194 systemd-sysext[1162]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument
Apr 20 15:06:44.140174 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:44.345525 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 15:06:44.428435 kernel: loop7: detected capacity change from 0 to 378016
Apr 20 15:06:44.436491 kernel: loop7: p1 p2 p3
Apr 20 15:06:44.497613 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:44.497991 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:44.498002 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:44.507123 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:44.507538 (sd-merge)[1180]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 15:06:44.524919 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:44.811890 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 15:06:44.818655 kernel: loop1: detected capacity change from 0 to 219192
Apr 20 15:06:44.938557 kernel: loop3: detected capacity change from 0 to 178200
Apr 20 15:06:44.950175 kernel: loop3: p1 p2 p3
Apr 20 15:06:45.099862 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:45.100021 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:06:45.100039 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:06:45.108761 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:06:45.109140 (sd-merge)[1180]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:3) failed: Invalid argument
Apr 20 15:06:45.127987 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:06:45.284518 kernel: erofs: (device dm-3): mounted with root inode @ nid 39.
Apr 20 15:06:45.287833 (sd-merge)[1180]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.34.4-x86-64.raw'.
Apr 20 15:06:45.299199 (sd-merge)[1180]: Merged extensions into '/sysroot/usr'.
Apr 20 15:06:45.308475 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 15:06:45.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.325911 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 20 15:06:45.348140 kernel: audit: type=1130 audit(1776697605.323:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.361513 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 20 15:06:45.454267 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 20 15:06:45.454612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 20 15:06:45.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.561459 kernel: audit: type=1130 audit(1776697605.522:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.522719 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies.
Apr 20 15:06:45.576762 kernel: audit: type=1131 audit(1776697605.522:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.524566 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 20 15:06:45.572230 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 20 15:06:45.598710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 20 15:06:45.609485 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 20 15:06:45.709814 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 15:06:45.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.741709 kernel: audit: type=1130 audit(1776697605.720:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.732647 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 20 15:06:45.795440 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 20 15:06:45.795908 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 15:06:45.823950 systemd[1]: Stopped target timers.target - Timer Units.
Apr 20 15:06:45.839750 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 20 15:06:45.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.874926 kernel: audit: type=1131 audit(1776697605.858:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:45.840040 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 15:06:45.860657 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 20 15:06:45.885169 systemd[1]: Stopped target basic.target - Basic System.
Apr 20 15:06:45.891595 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 20 15:06:45.916448 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 15:06:45.931978 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 20 15:06:45.942558 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 15:06:45.959043 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 20 15:06:45.973769 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 15:06:45.985705 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 20 15:06:46.001498 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 20 15:06:46.012662 systemd[1]: Stopped target swap.target - Swaps.
Apr 20 15:06:46.023793 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 20 15:06:46.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.071271 kernel: audit: type=1131 audit(1776697606.036:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.025222 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 15:06:46.040644 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 20 15:06:46.062896 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 15:06:46.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.171428 kernel: audit: type=1131 audit(1776697606.147:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.121213 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 20 15:06:46.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.122145 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 15:06:46.125023 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 20 15:06:46.125395 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 20 15:06:46.148007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 20 15:06:46.148224 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 15:06:46.172464 systemd[1]: Stopped target paths.target - Path Units.
Apr 20 15:06:46.190159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 20 15:06:46.190696 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 15:06:46.206042 systemd[1]: Stopped target slices.target - Slice Units.
Apr 20 15:06:46.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.211758 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 20 15:06:46.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.230496 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 20 15:06:46.231713 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 15:06:46.250849 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 20 15:06:46.251458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 15:06:46.271688 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Apr 20 15:06:46.274710 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 15:06:46.290878 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 20 15:06:46.291160 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 15:06:46.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.303942 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 20 15:06:46.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.304383 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 20 15:06:46.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.324779 systemd[1]: ignition-files.service: Consumed 3.344s CPU time.
Apr 20 15:06:46.351240 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 20 15:06:46.354894 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 20 15:06:46.372906 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 20 15:06:46.374174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 15:06:46.401121 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 20 15:06:46.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.401254 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 15:06:46.424230 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 20 15:06:46.424875 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 15:06:46.555891 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 20 15:06:46.556128 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 20 15:06:46.629865 ignition[1209]: INFO : Ignition 2.24.0
Apr 20 15:06:46.629865 ignition[1209]: INFO : Stage: umount
Apr 20 15:06:46.643862 ignition[1209]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 15:06:46.643862 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:06:46.643862 ignition[1209]: INFO : umount: umount passed
Apr 20 15:06:46.643862 ignition[1209]: INFO : Ignition finished successfully
Apr 20 15:06:46.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.644857 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 20 15:06:46.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.647802 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 20 15:06:46.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.660753 systemd[1]: Stopped target network.target - Network.
Apr 20 15:06:46.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.677769 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 20 15:06:46.681022 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 20 15:06:46.691047 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 20 15:06:46.691177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 20 15:06:46.704275 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 20 15:06:46.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.705274 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 20 15:06:46.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.715874 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 20 15:06:46.715922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 20 15:06:46.726244 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 20 15:06:46.812000 audit: BPF prog-id=8 op=UNLOAD
Apr 20 15:06:46.740131 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 20 15:06:46.756810 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 20 15:06:46.771133 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 20 15:06:46.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.771499 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 20 15:06:46.773160 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 20 15:06:46.773569 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 20 15:06:46.812925 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 20 15:06:46.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.815475 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 20 15:06:46.815531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 15:06:46.827149 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 20 15:06:46.827195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 20 15:06:46.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.847011 systemd[1]: initrd-setup-root.service: Consumed 2.369s CPU time.
Apr 20 15:06:46.864249 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 20 15:06:46.869110 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 20 15:06:46.869170 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 15:06:46.880723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 15:06:46.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.903618 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 20 15:06:46.996000 audit: BPF prog-id=5 op=UNLOAD
Apr 20 15:06:46.909835 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 20 15:06:46.971104 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 20 15:06:47.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.973604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 15:06:47.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.982846 systemd[1]: systemd-udevd.service: Consumed 4.392s CPU time.
Apr 20 15:06:47.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:46.998962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 20 15:06:46.999186 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 20 15:06:47.010533 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 20 15:06:47.010580 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 15:06:47.026182 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 20 15:06:47.026459 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 20 15:06:47.039629 systemd[1]: dracut-cmdline.service: Consumed 1.640s CPU time.
Apr 20 15:06:47.041254 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 20 15:06:47.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.041542 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 15:06:47.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.125246 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 20 15:06:47.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.137992 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 20 15:06:47.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.138269 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 15:06:47.154974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 20 15:06:47.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.158264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 20 15:06:47.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.176850 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 20 15:06:47.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:47.176903 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 20 15:06:47.191795 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 20 15:06:47.191848 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 15:06:47.205773 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 20 15:06:47.205823 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 15:06:47.221748 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 20 15:06:47.221816 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 15:06:47.237689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 15:06:47.237768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 15:06:47.249467 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 20 15:06:47.249705 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 20 15:06:47.253660 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 20 15:06:47.253765 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 20 15:06:47.281798 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 20 15:06:47.304892 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 20 15:06:47.390736 systemd[1]: Switching root.
Apr 20 15:06:47.456973 systemd-journald[319]: Journal stopped
Apr 20 15:06:51.482696 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Apr 20 15:06:51.482804 kernel: SELinux: policy capability network_peer_controls=1
Apr 20 15:06:51.482818 kernel: SELinux: policy capability open_perms=1
Apr 20 15:06:51.482828 kernel: SELinux: policy capability extended_socket_class=1
Apr 20 15:06:51.482839 kernel: SELinux: policy capability always_check_network=0
Apr 20 15:06:51.482847 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 20 15:06:51.482865 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 20 15:06:51.482874 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 20 15:06:51.482882 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 20 15:06:51.482898 kernel: SELinux: policy capability userspace_initial_context=0
Apr 20 15:06:51.482906 kernel: kauditd_printk_skb: 35 callbacks suppressed
Apr 20 15:06:51.482916 kernel: audit: type=1403 audit(1776697607.795:86): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 20 15:06:51.482931 systemd[1]: Successfully loaded SELinux policy in 110.887ms.
Apr 20 15:06:51.482953 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.522ms.
Apr 20 15:06:51.482964 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 20 15:06:51.482973 systemd[1]: Detected virtualization kvm.
Apr 20 15:06:51.482982 systemd[1]: Detected architecture x86-64.
Apr 20 15:06:51.482991 systemd[1]: Detected first boot.
Apr 20 15:06:51.482999 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 15:06:51.483009 kernel: audit: type=1334 audit(1776697608.663:87): prog-id=9 op=LOAD
Apr 20 15:06:51.483020 kernel: audit: type=1334 audit(1776697608.664:88): prog-id=9 op=UNLOAD
Apr 20 15:06:51.483030 kernel: Guest personality initialized and is inactive
Apr 20 15:06:51.483041 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 20 15:06:51.483049 kernel: Initialized host personality
Apr 20 15:06:51.483057 kernel: NET: Registered PF_VSOCK protocol family
Apr 20 15:06:51.483067 zram_generator::config[1259]: No configuration found.
Apr 20 15:06:51.483131 systemd-ssh-generator[1254]: Failed to query local AF_VSOCK CID: Cannot assign requested address
Apr 20 15:06:51.483146 (sd-exec-[1239]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1.
Apr 20 15:06:51.483155 systemd[1]: Applying preset policy.
Apr 20 15:06:51.483167 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'.
Apr 20 15:06:51.483176 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'.
Apr 20 15:06:51.483184 systemd[1]: Populated /etc with preset unit settings.
Apr 20 15:06:51.483194 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 20 15:06:51.483205 kernel: audit: type=1334 audit(1776697609.967:89): prog-id=10 op=LOAD
Apr 20 15:06:51.483213 kernel: audit: type=1334 audit(1776697609.967:90): prog-id=2 op=UNLOAD
Apr 20 15:06:51.483222 kernel: audit: type=1334 audit(1776697609.967:91): prog-id=11 op=LOAD
Apr 20 15:06:51.483231 kernel: audit: type=1334 audit(1776697609.967:92): prog-id=12 op=LOAD
Apr 20 15:06:51.483240 kernel: audit: type=1334 audit(1776697609.967:93): prog-id=3 op=UNLOAD
Apr 20 15:06:51.483248 kernel: audit: type=1334 audit(1776697609.967:94): prog-id=4 op=UNLOAD
Apr 20 15:06:51.483256 kernel: audit: type=1334 audit(1776697609.968:95): prog-id=13 op=LOAD
Apr 20 15:06:51.483268 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 20 15:06:51.483276 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 20 15:06:51.483397 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 20 15:06:51.483410 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 20 15:06:51.483419 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 20 15:06:51.483428 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 20 15:06:51.483440 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 20 15:06:51.483449 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 20 15:06:51.483459 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 20 15:06:51.483468 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 20 15:06:51.483476 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 20 15:06:51.483485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 15:06:51.483494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 15:06:51.483505 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 20 15:06:51.483514 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 20 15:06:51.483524 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 20 15:06:51.483533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 15:06:51.483544 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 20 15:06:51.483553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 15:06:51.483562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 15:06:51.483572 systemd[1]: Reached target imports.target - Image Downloads.
Apr 20 15:06:51.483582 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 20 15:06:51.483591 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 20 15:06:51.483600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 20 15:06:51.483609 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 20 15:06:51.483618 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 15:06:51.483627 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 15:06:51.483638 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes.
Apr 20 15:06:51.483647 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 20 15:06:51.483656 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 15:06:51.483665 systemd[1]: Reached target swap.target - Swaps.
Apr 20 15:06:51.483673 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 20 15:06:51.483682 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 20 15:06:51.483691 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 20 15:06:51.483701 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 20 15:06:51.483710 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management.
Apr 20 15:06:51.483718 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 15:06:51.483727 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 20 15:06:51.483736 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket.
Apr 20 15:06:51.483745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 15:06:51.483753 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 20 15:06:51.483763 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 20 15:06:51.483772 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket.
Apr 20 15:06:51.483781 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket.
Apr 20 15:06:51.483789 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 15:06:51.483798 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket.
Apr 20 15:06:51.483807 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 20 15:06:51.483816 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 20 15:06:51.483826 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 20 15:06:51.483835 systemd[1]: Mounting media.mount - External Media Directory...
Apr 20 15:06:51.483844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 15:06:51.483853 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 20 15:06:51.483861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 20 15:06:51.483870 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing.
Apr 20 15:06:51.483881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 20 15:06:51.483890 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 20 15:06:51.483899 systemd[1]: Reached target machines.target - Virtual Machines and Containers.
Apr 20 15:06:51.483908 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 20 15:06:51.483917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 20 15:06:51.483929 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 15:06:51.483938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 20 15:06:51.483947 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod).
Apr 20 15:06:51.483956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 20 15:06:51.483965 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore).
Apr 20 15:06:51.483973 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 20 15:06:51.483984 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop).
Apr 20 15:06:51.483994 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 20 15:06:51.484003 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 20 15:06:51.484012 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 20 15:06:51.484020 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 20 15:06:51.484031 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 20 15:06:51.484040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 20 15:06:51.484049 kernel: fuse: init (API version 7.41)
Apr 20 15:06:51.484057 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 15:06:51.484066 kernel: ACPI: bus type drm_connector registered
Apr 20 15:06:51.484130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 15:06:51.484140 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 15:06:51.484149 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 20 15:06:51.484158 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 20 15:06:51.484168 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 20 15:06:51.484178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 15:06:51.484187 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 20 15:06:51.484196 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 20 15:06:51.484222 systemd-journald[1330]: Collecting audit messages is enabled.
Apr 20 15:06:51.484250 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 20 15:06:51.484262 systemd-journald[1330]: Journal started
Apr 20 15:06:51.484280 systemd-journald[1330]: Runtime Journal (/run/log/journal/3af9b55cf7dd4692958ff0f457276295) is 5.9M, max 47.8M, 41.8M free.
Apr 20 15:06:50.682000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 20 15:06:51.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:51.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:51.286000 audit: BPF prog-id=18 op=UNLOAD
Apr 20 15:06:51.286000 audit: BPF prog-id=17 op=UNLOAD
Apr 20 15:06:51.287000 audit: BPF prog-id=19 op=LOAD
Apr 20 15:06:51.290000 audit: BPF prog-id=20 op=LOAD
Apr 20 15:06:51.291000 audit: BPF prog-id=21 op=LOAD
Apr 20 15:06:51.479000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 20 15:06:51.479000 audit[1330]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd4e738a60 a2=4000 a3=0 items=0 ppid=1 pid=1330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 15:06:51.479000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 20 15:06:49.947620 systemd[1]: Queued start job for default target multi-user.target.
Apr 20 15:06:49.969487 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 20 15:06:49.970570 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 20 15:06:49.970989 systemd[1]: systemd-journald.service: Consumed 4.325s CPU time.
Apr 20 15:06:51.504490 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 15:06:51.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:51.505180 systemd[1]: Mounted media.mount - External Media Directory.
Apr 20 15:06:51.511479 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 20 15:06:51.517535 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 20 15:06:51.523670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 20 15:06:51.529759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 20 15:06:51.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.537563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 15:06:51.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.546805 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 20 15:06:51.547240 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 20 15:06:51.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.554757 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 20 15:06:51.555056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 20 15:06:51.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:06:51.565590 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 20 15:06:51.567429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 20 15:06:51.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.575729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 15:06:51.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.582848 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 15:06:51.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.593015 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 20 15:06:51.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.601830 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Apr 20 15:06:51.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.624229 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 15:06:51.632444 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 20 15:06:51.641894 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 20 15:06:51.660231 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 20 15:06:51.678556 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 20 15:06:51.680657 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 15:06:51.713744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 20 15:06:51.722751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 20 15:06:51.728564 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Apr 20 15:06:51.738746 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 20 15:06:51.756884 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 20 15:06:51.765269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 20 15:06:51.767465 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 20 15:06:51.775549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 20 15:06:51.778241 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 15:06:51.778937 systemd-journald[1330]: Time spent on flushing to /var/log/journal/3af9b55cf7dd4692958ff0f457276295 is 23.103ms for 1284 entries. Apr 20 15:06:51.778937 systemd-journald[1330]: System Journal (/var/log/journal/3af9b55cf7dd4692958ff0f457276295) is 8M, max 163.5M, 155.5M free. Apr 20 15:06:51.817690 systemd-journald[1330]: Received client request to flush runtime journal. Apr 20 15:06:51.806569 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 20 15:06:51.819924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 15:06:51.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.827865 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 20 15:06:51.840902 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 20 15:06:51.850862 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 20 15:06:51.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.902759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 20 15:06:51.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:06:51.911821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 20 15:06:51.923796 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 20 15:06:51.934566 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 15:06:51.938210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 15:06:51.946066 kernel: loop4: p1 p2 p3 Apr 20 15:06:51.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.955576 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Apr 20 15:06:51.955591 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Apr 20 15:06:51.970948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 15:06:51.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:51.982799 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 20 15:06:52.002441 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 20 15:06:52.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:06:52.015149 systemd-confext[1379]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:06:52.015539 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:52.015559 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:06:52.015571 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:06:52.015580 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:06:52.036734 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:52.091034 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 20 15:06:52.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.108576 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 20 15:06:52.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.119000 audit: BPF prog-id=22 op=LOAD Apr 20 15:06:52.119000 audit: BPF prog-id=23 op=LOAD Apr 20 15:06:52.119000 audit: BPF prog-id=24 op=LOAD Apr 20 15:06:52.121405 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 20 15:06:52.132000 audit: BPF prog-id=25 op=LOAD Apr 20 15:06:52.134250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 15:06:52.144000 audit: BPF prog-id=26 op=LOAD Apr 20 15:06:52.145972 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Apr 20 15:06:52.155443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 15:06:52.167631 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... Apr 20 15:06:52.180000 audit: BPF prog-id=27 op=LOAD Apr 20 15:06:52.180000 audit: BPF prog-id=28 op=LOAD Apr 20 15:06:52.180000 audit: BPF prog-id=29 op=LOAD Apr 20 15:06:52.190041 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 20 15:06:52.217427 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 20 15:06:52.218747 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 20 15:06:52.219014 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Apr 20 15:06:52.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.229000 audit: BPF prog-id=30 op=LOAD Apr 20 15:06:52.229000 audit: BPF prog-id=31 op=LOAD Apr 20 15:06:52.230000 audit: BPF prog-id=32 op=LOAD Apr 20 15:06:52.254223 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 20 15:06:52.347254 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Apr 20 15:06:52.347760 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Apr 20 15:06:52.355247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 15:06:52.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:06:52.401254 systemd-nsresourced[1407]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 20 15:06:52.404048 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 20 15:06:52.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.496929 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 20 15:06:52.520916 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 20 15:06:52.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.664631 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 20 15:06:52.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.676683 systemd[1]: Reached target time-set.target - System Time Set. Apr 20 15:06:52.680949 systemd-oomd[1399]: No swap; memory pressure usage will be degraded Apr 20 15:06:52.686180 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 20 15:06:52.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.718029 systemd-resolved[1400]: Positive Trust Anchors: Apr 20 15:06:52.718152 systemd-resolved[1400]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 15:06:52.718156 systemd-resolved[1400]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 15:06:52.718183 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 15:06:52.736196 systemd-resolved[1400]: Defaulting to hostname 'linux'. Apr 20 15:06:52.749408 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 15:06:52.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:52.764253 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 15:06:53.986673 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 20 15:06:53.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.005035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 20 15:06:54.016879 kernel: kauditd_printk_skb: 66 callbacks suppressed Apr 20 15:06:54.001000 audit: BPF prog-id=7 op=UNLOAD Apr 20 15:06:54.017699 kernel: audit: type=1130 audit(1776697613.995:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.017788 kernel: audit: type=1334 audit(1776697614.001:161): prog-id=7 op=UNLOAD Apr 20 15:06:54.001000 audit: BPF prog-id=6 op=UNLOAD Apr 20 15:06:54.002000 audit: BPF prog-id=33 op=LOAD Apr 20 15:06:54.002000 audit: BPF prog-id=34 op=LOAD Apr 20 15:06:54.047505 kernel: audit: type=1334 audit(1776697614.001:162): prog-id=6 op=UNLOAD Apr 20 15:06:54.047527 kernel: audit: type=1334 audit(1776697614.002:163): prog-id=33 op=LOAD Apr 20 15:06:54.047538 kernel: audit: type=1334 audit(1776697614.002:164): prog-id=34 op=LOAD Apr 20 15:06:54.139080 systemd-udevd[1427]: Using default interface naming scheme 'v258'. Apr 20 15:06:54.511506 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 15:06:54.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.535878 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 15:06:54.528000 audit: BPF prog-id=35 op=LOAD Apr 20 15:06:54.540636 kernel: audit: type=1130 audit(1776697614.524:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:06:54.541272 kernel: audit: type=1334 audit(1776697614.528:166): prog-id=35 op=LOAD Apr 20 15:06:54.681753 systemd-networkd[1429]: lo: Link UP Apr 20 15:06:54.681762 systemd-networkd[1429]: lo: Gained carrier Apr 20 15:06:54.685786 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 15:06:54.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.698971 systemd[1]: Reached target network.target - Network. Apr 20 15:06:54.709438 kernel: audit: type=1130 audit(1776697614.693:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.718534 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 20 15:06:54.732796 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 20 15:06:54.834071 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 20 15:06:54.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.844487 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 20 15:06:54.860230 systemd-networkd[1429]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 15:06:54.860466 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 20 15:06:54.861502 systemd-networkd[1429]: eth0: Link UP Apr 20 15:06:54.861673 systemd-networkd[1429]: eth0: Gained carrier Apr 20 15:06:54.861692 systemd-networkd[1429]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 15:06:54.864496 kernel: audit: type=1130 audit(1776697614.842:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:54.884434 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 15:06:54.886998 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Apr 20 15:06:55.592310 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 20 15:06:55.592443 systemd-timesyncd[1401]: Initial clock synchronization to Mon 2026-04-20 15:06:55.591462 UTC. Apr 20 15:06:55.592645 systemd-resolved[1400]: Clock change detected. Flushing caches. Apr 20 15:06:55.606284 kernel: mousedev: PS/2 mouse device common for all mice Apr 20 15:06:55.726175 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 20 15:06:55.734277 kernel: ACPI: button: Power Button [PWRF] Apr 20 15:06:55.784190 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 20 15:06:55.784641 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 20 15:06:55.793705 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 20 15:06:55.831393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 15:06:55.857650 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 20 15:06:56.058685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 20 15:06:56.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:56.092476 kernel: audit: type=1130 audit(1776697616.067:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:56.158870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 15:06:56.184367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 15:06:56.184613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:06:56.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:56.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:56.198576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 15:06:56.480276 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. 
Apr 20 15:06:56.568211 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 15:06:56.573428 kernel: loop4: p1 p2 p3 Apr 20 15:06:56.579364 kernel: loop4: p1 p2 p3 Apr 20 15:06:56.700114 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:56.692235 (sd-merge)[1493]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:06:56.706197 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:06:56.706222 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:06:56.706234 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:06:56.706243 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:56.719840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:06:56.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:56.957382 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 15:06:56.961877 (sd-merge)[1493]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 15:06:57.009205 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Apr 20 15:06:57.025404 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 20 15:06:57.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:06:57.040926 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 15:06:57.093713 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 15:06:57.099057 kernel: loop4: p1 p2 p3 Apr 20 15:06:57.191480 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:57.192263 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:06:57.192383 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:06:57.203621 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:06:57.205866 systemd-sysext[1503]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:06:57.220709 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:57.322632 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 15:06:57.375399 systemd-networkd[1429]: eth0: Gained IPv6LL Apr 20 15:06:57.388161 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 20 15:06:57.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:57.424664 kernel: loop4: detected capacity change from 0 to 219192 Apr 20 15:06:57.435695 systemd[1]: Reached target network-online.target - Network is Online. 
Apr 20 15:06:57.605647 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 15:06:57.613874 kernel: loop4: p1 p2 p3 Apr 20 15:06:57.652390 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:57.654234 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:06:57.654358 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:06:57.670958 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:06:57.671691 systemd-sysext[1503]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:06:57.685261 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:57.749488 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 15:06:57.813172 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 15:06:57.818405 kernel: loop4: p1 p2 p3 Apr 20 15:06:57.827417 kernel: loop4: p1 p2 p3 Apr 20 15:06:57.890240 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:57.890380 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:06:57.890393 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:06:57.904181 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:06:57.904104 (sd-merge)[1525]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:06:57.946071 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:58.033757 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 15:06:58.084453 kernel: loop5: detected capacity change from 0 to 219192 Apr 20 15:06:58.159234 kernel: loop6: detected capacity change from 0 to 178200 Apr 20 15:06:58.164297 kernel: loop6: p1 p2 p3 Apr 20 15:06:58.177294 kernel: loop6: p1 p2 p3 Apr 20 15:06:58.210703 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:58.211412 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:06:58.211482 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:06:58.218126 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:06:58.225865 (sd-merge)[1525]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument Apr 20 15:06:58.238367 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:06:58.333248 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Apr 20 15:06:58.336233 (sd-merge)[1525]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 15:06:58.343653 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 20 15:06:58.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:06:58.380622 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 15:06:58.393720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 15:06:58.433151 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 15:06:58.435906 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Apr 20 15:06:58.436915 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 15:06:58.437908 systemd-tmpfiles[1542]: ACLs are not supported, ignoring.
Apr 20 15:06:58.438164 systemd-tmpfiles[1542]: ACLs are not supported, ignoring.
Apr 20 15:06:58.454840 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 15:06:58.454940 systemd-tmpfiles[1542]: Skipping /boot
Apr 20 15:06:58.537310 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 15:06:58.537396 systemd-tmpfiles[1542]: Skipping /boot
Apr 20 15:06:58.587752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 15:06:58.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:58.604610 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 20 15:06:58.621475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 20 15:06:58.650104 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 20 15:06:58.674902 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 20 15:06:58.695734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 20 15:06:58.718000 audit[1558]: AUDIT1127 pid=1558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:58.727941 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 20 15:06:58.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:58.773569 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 20 15:06:58.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:06:58.807000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 15:06:58.807000 audit[1574]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff56458ed0 a2=420 a3=0 items=0 ppid=1548 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 15:06:58.807000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 15:06:58.814694 augenrules[1574]: No rules
Apr 20 15:06:58.818310 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 20 15:06:58.818947 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 20 15:06:58.998579 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 20 15:06:59.021858 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 20 15:07:00.406307 ldconfig[1550]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 20 15:07:00.418311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 20 15:07:00.431522 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 20 15:07:00.486507 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 20 15:07:00.495649 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 15:07:00.506217 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 20 15:07:00.513726 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 20 15:07:00.524394 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 20 15:07:00.531457 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 20 15:07:00.537484 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 20 15:07:00.547061 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 20 15:07:00.554619 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 20 15:07:00.564152 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 20 15:07:00.574252 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 20 15:07:00.574443 systemd[1]: Reached target paths.target - Path Units.
Apr 20 15:07:00.583605 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 15:07:00.592511 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 20 15:07:00.602897 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 20 15:07:00.612328 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 20 15:07:00.651927 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 20 15:07:00.683287 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 20 15:07:00.704939 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket.
Apr 20 15:07:00.727605 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket.
Apr 20 15:07:00.747630 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 20 15:07:00.758414 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 15:07:00.777261 systemd[1]: Reached target basic.target - Basic System.
Apr 20 15:07:00.788210 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 20 15:07:00.788318 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 20 15:07:00.790626 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 20 15:07:00.806693 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 20 15:07:00.833317 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 20 15:07:00.846763 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 20 15:07:00.882639 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 20 15:07:00.902791 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 20 15:07:00.914381 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 20 15:07:00.934713 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 20 15:07:00.948511 jq[1590]: false
Apr 20 15:07:00.951601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 15:07:00.960080 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Apr 20 15:07:00.959923 oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Apr 20 15:07:01.018487 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 20 15:07:01.024892 extend-filesystems[1591]: Found /dev/vda6
Apr 20 15:07:01.037607 extend-filesystems[1591]: Found /dev/vda9
Apr 20 15:07:01.030743 oslogin_cache_refresh[1592]: Failure getting users, quitting
Apr 20 15:07:01.042629 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting
Apr 20 15:07:01.042629 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 15:07:01.042629 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache
Apr 20 15:07:01.042743 extend-filesystems[1591]: Checking size of /dev/vda9
Apr 20 15:07:01.030947 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 15:07:01.047947 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 20 15:07:01.068364 extend-filesystems[1591]: Resized partition /dev/vda9
Apr 20 15:07:01.032294 oslogin_cache_refresh[1592]: Refreshing group entry cache
Apr 20 15:07:01.074477 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting
Apr 20 15:07:01.074477 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 15:07:01.074451 oslogin_cache_refresh[1592]: Failure getting groups, quitting
Apr 20 15:07:01.074464 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 15:07:01.080672 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025)
Apr 20 15:07:01.082445 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 20 15:07:01.098472 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 20 15:07:01.104354 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 20 15:07:01.120653 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 20 15:07:01.148351 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 20 15:07:01.162462 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 20 15:07:01.170233 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 20 15:07:01.177564 systemd[1]: Starting update-engine.service - Update Engine...
Apr 20 15:07:01.191486 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 20 15:07:01.200285 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 20 15:07:01.200285 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 20 15:07:01.200285 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 20 15:07:01.252400 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 20 15:07:01.267530 extend-filesystems[1591]: Resized filesystem in /dev/vda9
Apr 20 15:07:01.274103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 20 15:07:01.276791 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 20 15:07:01.279613 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 20 15:07:01.281291 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 20 15:07:01.307880 jq[1623]: true
Apr 20 15:07:01.295435 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 20 15:07:01.296090 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 20 15:07:01.324503 systemd[1]: motdgen.service: Deactivated successfully.
Apr 20 15:07:01.335770 update_engine[1620]: I20260420 15:07:01.329774 1620 main.cc:92] Flatcar Update Engine starting
Apr 20 15:07:01.324855 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 20 15:07:01.345606 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 20 15:07:01.419796 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 20 15:07:01.433753 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 20 15:07:01.546502 jq[1641]: true
Apr 20 15:07:01.613685 systemd-logind[1618]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 20 15:07:01.614235 systemd-logind[1618]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 20 15:07:01.617559 systemd-logind[1618]: New seat seat0.
Apr 20 15:07:01.666898 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 20 15:07:01.679491 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 20 15:07:01.680113 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 20 15:07:01.764160 bash[1673]: Updated "/home/core/.ssh/authorized_keys"
Apr 20 15:07:01.864465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 20 15:07:01.929446 tar[1640]: linux-amd64/LICENSE
Apr 20 15:07:01.929446 tar[1640]: linux-amd64/helm
Apr 20 15:07:01.939401 sshd_keygen[1629]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 20 15:07:01.957879 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 20 15:07:01.958916 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 20 15:07:01.985632 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 20 15:07:01.999400 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 20 15:07:02.007952 dbus-daemon[1588]: [system] SELinux support is enabled
Apr 20 15:07:02.010604 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 20 15:07:02.036662 dbus-daemon[1588]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 20 15:07:02.052346 update_engine[1620]: I20260420 15:07:02.045960 1620 update_check_scheduler.cc:74] Next update check in 11m1s
Apr 20 15:07:02.057481 systemd[1]: Started update-engine.service - Update Engine.
Apr 20 15:07:02.071728 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 20 15:07:02.073914 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 20 15:07:02.083156 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 20 15:07:02.083316 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 20 15:07:02.093230 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 20 15:07:02.102329 systemd[1]: issuegen.service: Deactivated successfully.
Apr 20 15:07:02.102610 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 20 15:07:02.128236 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 20 15:07:02.212910 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 20 15:07:02.218782 locksmithd[1693]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 20 15:07:02.224391 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 20 15:07:02.248472 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 20 15:07:02.267602 systemd[1]: Reached target getty.target - Login Prompts.
Apr 20 15:07:02.512203 containerd[1642]: time="2026-04-20T15:07:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 20 15:07:02.512794 containerd[1642]: time="2026-04-20T15:07:02.512468828Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.551703178Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.685µs"
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.551746223Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.551786486Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.551879622Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552149937Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552165185Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552176067Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552219574Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552230134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552453440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552466219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 15:07:02.552375 containerd[1642]: time="2026-04-20T15:07:02.552476660Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.552484321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.552644766Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.552697460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.553761905Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.553963547Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.554705204Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 20 15:07:02.556742 containerd[1642]: time="2026-04-20T15:07:02.556340519Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 20 15:07:02.559576 containerd[1642]: time="2026-04-20T15:07:02.558792757Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 20 15:07:02.560634 containerd[1642]: time="2026-04-20T15:07:02.560490828Z" level=info msg="metadata content store policy set" policy=shared
Apr 20 15:07:02.583271 containerd[1642]: time="2026-04-20T15:07:02.582865846Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 20 15:07:02.583797 containerd[1642]: time="2026-04-20T15:07:02.583424940Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 20 15:07:02.583797 containerd[1642]: time="2026-04-20T15:07:02.583463675Z" level=info msg="built-in NRI default validator is disabled"
Apr 20 15:07:02.583797 containerd[1642]: time="2026-04-20T15:07:02.583469325Z" level=info msg="runtime interface created"
Apr 20 15:07:02.583797 containerd[1642]: time="2026-04-20T15:07:02.583474736Z" level=info msg="created NRI interface"
Apr 20 15:07:02.583797 containerd[1642]: time="2026-04-20T15:07:02.583518133Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 15:07:02.584525 containerd[1642]: time="2026-04-20T15:07:02.584439009Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 15:07:02.584549 containerd[1642]: time="2026-04-20T15:07:02.584520641Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 20 15:07:02.584549 containerd[1642]: time="2026-04-20T15:07:02.584542158Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 20 15:07:02.584605 containerd[1642]: time="2026-04-20T15:07:02.584552711Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1
Apr 20 15:07:02.584777 containerd[1642]: time="2026-04-20T15:07:02.584691664Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 20 15:07:02.584890 containerd[1642]: time="2026-04-20T15:07:02.584786803Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 20 15:07:02.584890 containerd[1642]: time="2026-04-20T15:07:02.584798178Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 20 15:07:02.584890 containerd[1642]: time="2026-04-20T15:07:02.584865677Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 20 15:07:02.584890 containerd[1642]: time="2026-04-20T15:07:02.584884027Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 20 15:07:02.584943 containerd[1642]: time="2026-04-20T15:07:02.584896794Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 20 15:07:02.584943 containerd[1642]: time="2026-04-20T15:07:02.584910029Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 20 15:07:02.584943 containerd[1642]: time="2026-04-20T15:07:02.584924900Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 20 15:07:02.584943 containerd[1642]: time="2026-04-20T15:07:02.584932947Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 20 15:07:02.585792 containerd[1642]: time="2026-04-20T15:07:02.585735493Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 20 15:07:02.585792 containerd[1642]: time="2026-04-20T15:07:02.585770392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 20 15:07:02.585792 containerd[1642]: time="2026-04-20T15:07:02.585782461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 20 15:07:02.585942 containerd[1642]: time="2026-04-20T15:07:02.585924068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 20 15:07:02.585958 containerd[1642]: time="2026-04-20T15:07:02.585945878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 20 15:07:02.585958 containerd[1642]: time="2026-04-20T15:07:02.585955331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.585963785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.586760931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.586886799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.586903537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.586915966Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.586935456Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 20 15:07:02.587324 containerd[1642]: time="2026-04-20T15:07:02.587197744Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 20 15:07:02.587562 containerd[1642]: time="2026-04-20T15:07:02.587362869Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 20 15:07:02.587562 containerd[1642]: time="2026-04-20T15:07:02.587384011Z" level=info msg="Start snapshots syncer"
Apr 20 15:07:02.589473 containerd[1642]: time="2026-04-20T15:07:02.588245497Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 20 15:07:02.589473 containerd[1642]: time="2026-04-20T15:07:02.588864798Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 20 15:07:02.589761 containerd[1642]: time="2026-04-20T15:07:02.589398262Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 20 15:07:02.591134 containerd[1642]: time="2026-04-20T15:07:02.590750360Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 20 15:07:02.591674 containerd[1642]: time="2026-04-20T15:07:02.591655771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 20 15:07:02.591733 containerd[1642]: time="2026-04-20T15:07:02.591724563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 20 15:07:02.591771 containerd[1642]: time="2026-04-20T15:07:02.591762428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 20 15:07:02.591800 containerd[1642]: time="2026-04-20T15:07:02.591793975Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 20 15:07:02.591924 containerd[1642]: time="2026-04-20T15:07:02.591915330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 20 15:07:02.591953 containerd[1642]: time="2026-04-20T15:07:02.591947136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 20 15:07:02.592213 containerd[1642]: time="2026-04-20T15:07:02.592202987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 20 15:07:02.592257 containerd[1642]: time="2026-04-20T15:07:02.592248725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 20 15:07:02.592291 containerd[1642]: time="2026-04-20T15:07:02.592284594Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 20 15:07:02.592336 containerd[1642]: time="2026-04-20T15:07:02.592329324Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 20 15:07:02.592502 containerd[1642]: time="2026-04-20T15:07:02.592490092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 20 15:07:02.592535 containerd[1642]: time="2026-04-20T15:07:02.592528769Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 20 15:07:02.592654 containerd[1642]: time="2026-04-20T15:07:02.592643765Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 20 15:07:02.592692 containerd[1642]: time="2026-04-20T15:07:02.592685062Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 20 15:07:02.592729 containerd[1642]: time="2026-04-20T15:07:02.592714774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 20 15:07:02.592892 containerd[1642]: time="2026-04-20T15:07:02.592881772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 20 15:07:02.593167 containerd[1642]: time="2026-04-20T15:07:02.593157196Z" level=info msg="Connect containerd service"
Apr 20 15:07:02.593217 containerd[1642]: time="2026-04-20T15:07:02.593210912Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 20 15:07:02.599207 containerd[1642]: time="2026-04-20T15:07:02.598883575Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 20 15:07:02.602278 tar[1640]: linux-amd64/README.md
Apr 20 15:07:02.633105 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 20 15:07:02.822531 containerd[1642]: time="2026-04-20T15:07:02.821957509Z" level=info msg="Start subscribing containerd event" Apr 20 15:07:02.823756 containerd[1642]: time="2026-04-20T15:07:02.823267018Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 15:07:02.823756 containerd[1642]: time="2026-04-20T15:07:02.823609129Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.823475580Z" level=info msg="Start recovering state" Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.824534865Z" level=info msg="Start event monitor" Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.824555166Z" level=info msg="Start cni network conf syncer for default" Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.824565078Z" level=info msg="Start streaming server" Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.824576387Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.824586725Z" level=info msg="runtime interface starting up..." Apr 20 15:07:02.824621 containerd[1642]: time="2026-04-20T15:07:02.824593445Z" level=info msg="starting plugins..." Apr 20 15:07:02.827507 containerd[1642]: time="2026-04-20T15:07:02.825438156Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 15:07:02.835577 containerd[1642]: time="2026-04-20T15:07:02.835119180Z" level=info msg="containerd successfully booted in 0.323868s" Apr 20 15:07:02.835915 systemd[1]: Started containerd.service - containerd container runtime. Apr 20 15:07:03.588804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:07:03.602490 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 20 15:07:03.611281 systemd[1]: Startup finished in 16.426s (kernel) + 36.974s (initrd) + 15.214s (userspace) = 1min 8.615s. 
Apr 20 15:07:03.616544 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:07:04.504307 kubelet[1732]: E0420 15:07:04.503662 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:07:04.513607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:07:04.514084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:07:04.515285 systemd[1]: kubelet.service: Consumed 1.470s CPU time, 258.4M memory peak. Apr 20 15:07:06.873795 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 20 15:07:06.877604 systemd[1]: Started sshd@0-1-10.0.0.13:22-10.0.0.1:48462.service - OpenSSH per-connection server daemon (10.0.0.1:48462). Apr 20 15:07:07.102425 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 48462 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 15:07:07.106772 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:07:07.126930 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 20 15:07:07.131647 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 20 15:07:07.142385 systemd-logind[1618]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 20 15:07:07.205947 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 20 15:07:07.210955 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 20 15:07:07.241281 (systemd)[1752]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:07:07.246384 systemd-logind[1618]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 20 15:07:07.598164 systemd[1752]: Queued start job for default target default.target. Apr 20 15:07:07.613797 systemd[1752]: Created slice app.slice - User Application Slice. Apr 20 15:07:07.614218 systemd[1752]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 15:07:07.614236 systemd[1752]: Reached target machines.target - Virtual Machines and Containers. Apr 20 15:07:07.614370 systemd[1752]: Reached target paths.target - Paths. Apr 20 15:07:07.614397 systemd[1752]: Reached target timers.target - Timers. Apr 20 15:07:07.616553 systemd[1752]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 15:07:07.618243 systemd[1752]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 15:07:07.619466 systemd[1752]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 15:07:07.660740 systemd[1752]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 20 15:07:07.667394 systemd[1752]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 15:07:07.681530 systemd[1752]: Reached target sockets.target - Sockets. Apr 20 15:07:07.683685 systemd[1752]: Reached target basic.target - Basic System. Apr 20 15:07:07.684444 systemd[1752]: Reached target default.target - Main User Target. Apr 20 15:07:07.684595 systemd[1752]: Startup finished in 421ms. Apr 20 15:07:07.684620 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 15:07:07.699319 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 15:07:07.773406 systemd[1]: Started sshd@1-2-10.0.0.13:22-10.0.0.1:48478.service - OpenSSH per-connection server daemon (10.0.0.1:48478). 
Apr 20 15:07:07.940214 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 48478 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 15:07:07.942632 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:07:07.962391 systemd-logind[1618]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 15:07:07.985251 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 20 15:07:08.012952 sshd[1770]: Connection closed by 10.0.0.1 port 48478 Apr 20 15:07:08.013280 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Apr 20 15:07:08.047320 systemd[1]: sshd@1-2-10.0.0.13:22-10.0.0.1:48478.service: Deactivated successfully. Apr 20 15:07:08.056632 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 15:07:08.064366 systemd-logind[1618]: Session 3 logged out. Waiting for processes to exit. Apr 20 15:07:08.078752 systemd[1]: Started sshd@2-4097-10.0.0.13:22-10.0.0.1:48484.service - OpenSSH per-connection server daemon (10.0.0.1:48484). Apr 20 15:07:08.081308 systemd-logind[1618]: Removed session 3. Apr 20 15:07:08.273564 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 48484 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 15:07:08.277351 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:07:08.298283 systemd-logind[1618]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 15:07:08.310403 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 20 15:07:08.349343 sshd[1780]: Connection closed by 10.0.0.1 port 48484 Apr 20 15:07:08.350294 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Apr 20 15:07:08.384727 systemd[1]: sshd@2-4097-10.0.0.13:22-10.0.0.1:48484.service: Deactivated successfully. Apr 20 15:07:08.390164 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 20 15:07:08.391681 systemd-logind[1618]: Session 4 logged out. Waiting for processes to exit. Apr 20 15:07:08.394797 systemd[1]: Started sshd@3-4098-10.0.0.13:22-10.0.0.1:48498.service - OpenSSH per-connection server daemon (10.0.0.1:48498). Apr 20 15:07:08.395397 systemd-logind[1618]: Removed session 4. Apr 20 15:07:08.538720 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 48498 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 15:07:08.540769 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:07:08.548248 systemd-logind[1618]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 15:07:08.567813 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 15:07:08.590767 sshd[1791]: Connection closed by 10.0.0.1 port 48498 Apr 20 15:07:08.591577 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Apr 20 15:07:08.613292 systemd[1]: sshd@3-4098-10.0.0.13:22-10.0.0.1:48498.service: Deactivated successfully. Apr 20 15:07:08.615943 systemd[1]: session-5.scope: Deactivated successfully. Apr 20 15:07:08.617597 systemd-logind[1618]: Session 5 logged out. Waiting for processes to exit. Apr 20 15:07:08.621216 systemd[1]: Started sshd@4-3-10.0.0.13:22-10.0.0.1:48504.service - OpenSSH per-connection server daemon (10.0.0.1:48504). Apr 20 15:07:08.622650 systemd-logind[1618]: Removed session 5. Apr 20 15:07:08.724452 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 48504 ssh2: RSA SHA256:snwWgtVh/dFlg/GoTCcrMcNNvPRv/2kcBFWNu/hKVq0 Apr 20 15:07:08.726300 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:07:08.733530 systemd-logind[1618]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 15:07:08.743354 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 20 15:07:08.798297 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 15:07:08.798571 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 15:07:11.092214 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 20 15:07:11.113438 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 15:07:14.837632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 15:07:14.935814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:07:15.881371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:07:15.921216 dockerd[1823]: time="2026-04-20T15:07:15.918776271Z" level=info msg="Starting up" Apr 20 15:07:15.925374 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:07:15.949287 dockerd[1823]: time="2026-04-20T15:07:15.948759763Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 15:07:16.155220 dockerd[1823]: time="2026-04-20T15:07:16.153771932Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 15:07:16.250242 kubelet[1841]: E0420 15:07:16.249784 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:07:16.274290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:07:16.274621 systemd[1]: kubelet.service: Failed with 
result 'exit-code'. Apr 20 15:07:16.278665 systemd[1]: kubelet.service: Consumed 1.017s CPU time, 108.9M memory peak. Apr 20 15:07:16.380447 systemd[1]: var-lib-docker-metacopy\x2dcheck3577168549-merged.mount: Deactivated successfully. Apr 20 15:07:16.449587 dockerd[1823]: time="2026-04-20T15:07:16.448154276Z" level=info msg="Loading containers: start." Apr 20 15:07:16.575371 kernel: Initializing XFRM netlink socket Apr 20 15:07:18.428806 systemd-networkd[1429]: docker0: Link UP Apr 20 15:07:18.450218 dockerd[1823]: time="2026-04-20T15:07:18.449554536Z" level=info msg="Loading containers: done." Apr 20 15:07:18.697498 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4006063717-merged.mount: Deactivated successfully. Apr 20 15:07:18.709109 dockerd[1823]: time="2026-04-20T15:07:18.708350290Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 15:07:18.717586 dockerd[1823]: time="2026-04-20T15:07:18.717274555Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 15:07:18.720724 dockerd[1823]: time="2026-04-20T15:07:18.720144372Z" level=info msg="Initializing buildkit" Apr 20 15:07:18.790759 dockerd[1823]: time="2026-04-20T15:07:18.788705793Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 15:07:18.790759 dockerd[1823]: time="2026-04-20T15:07:18.789322733Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 15:07:19.104402 dockerd[1823]: time="2026-04-20T15:07:19.102375209Z" level=info msg="Completed buildkit initialization" Apr 20 15:07:19.122254 dockerd[1823]: time="2026-04-20T15:07:19.121801213Z" level=info msg="Daemon has completed initialization" Apr 20 15:07:19.122254 dockerd[1823]: 
time="2026-04-20T15:07:19.122260314Z" level=info msg="API listen on /run/docker.sock" Apr 20 15:07:19.125716 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 20 15:07:22.980435 containerd[1642]: time="2026-04-20T15:07:22.979783435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 20 15:07:25.321244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475730513.mount: Deactivated successfully. Apr 20 15:07:26.393233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 20 15:07:26.400395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:07:27.453526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:07:27.485896 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:07:27.909276 kubelet[2114]: E0420 15:07:27.908572 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:07:27.914809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:07:27.915886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:07:27.917827 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 110.6M memory peak. 
Apr 20 15:07:31.958618 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2657428014 wd_nsec: 2657428072 Apr 20 15:07:37.930285 containerd[1642]: time="2026-04-20T15:07:37.930102805Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=26059072" Apr 20 15:07:37.931700 containerd[1642]: time="2026-04-20T15:07:37.930562641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:37.936609 containerd[1642]: time="2026-04-20T15:07:37.936346215Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:38.036716 containerd[1642]: time="2026-04-20T15:07:38.036235242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:38.073747 containerd[1642]: time="2026-04-20T15:07:38.072809438Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 15.092688001s" Apr 20 15:07:38.077528 containerd[1642]: time="2026-04-20T15:07:38.073543638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 20 15:07:38.108426 containerd[1642]: time="2026-04-20T15:07:38.107732533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 20 15:07:38.189208 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 3. Apr 20 15:07:38.237330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:07:39.697155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:07:39.751508 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:07:40.748777 kubelet[2141]: E0420 15:07:40.747727 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:07:40.758270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:07:40.761257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:07:40.771574 systemd[1]: kubelet.service: Consumed 2.012s CPU time, 110.5M memory peak. 
Apr 20 15:07:46.931774 containerd[1642]: time="2026-04-20T15:07:46.931302243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:46.934319 containerd[1642]: time="2026-04-20T15:07:46.933963898Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=1, bytes read=17268736" Apr 20 15:07:46.936359 containerd[1642]: time="2026-04-20T15:07:46.936221140Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:46.947603 containerd[1642]: time="2026-04-20T15:07:46.946598573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:46.951786 containerd[1642]: time="2026-04-20T15:07:46.951406735Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 8.843097835s" Apr 20 15:07:46.951786 containerd[1642]: time="2026-04-20T15:07:46.951494599Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 20 15:07:46.973509 containerd[1642]: time="2026-04-20T15:07:46.972940197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 20 15:07:47.029676 update_engine[1620]: I20260420 15:07:47.027711 1620 update_attempter.cc:509] Updating boot flags... 
Apr 20 15:07:50.895218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 15:07:50.904926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:07:52.906334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:07:53.131470 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:07:53.512201 containerd[1642]: time="2026-04-20T15:07:53.511472499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:53.523734 containerd[1642]: time="2026-04-20T15:07:53.513918880Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=1, bytes read=8600891" Apr 20 15:07:53.545472 containerd[1642]: time="2026-04-20T15:07:53.544583824Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:53.626794 containerd[1642]: time="2026-04-20T15:07:53.626237175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:07:53.741493 containerd[1642]: time="2026-04-20T15:07:53.735484885Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 6.762074955s" Apr 20 15:07:53.747677 containerd[1642]: time="2026-04-20T15:07:53.743509025Z" level=info msg="PullImage 
\"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 20 15:07:53.799517 containerd[1642]: time="2026-04-20T15:07:53.781750917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 20 15:07:54.549600 kubelet[2187]: E0420 15:07:54.544609 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:07:54.594350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:07:54.599898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:07:54.615746 systemd[1]: kubelet.service: Consumed 2.610s CPU time, 109.2M memory peak. Apr 20 15:08:04.720703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 20 15:08:05.431819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:08:07.754198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:08:07.818410 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:08:08.858199 kubelet[2209]: E0420 15:08:08.857363 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:08:08.864650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:08:08.865218 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 15:08:08.870466 systemd[1]: kubelet.service: Consumed 2.493s CPU time, 110.2M memory peak. Apr 20 15:08:12.722844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023920028.mount: Deactivated successfully. Apr 20 15:08:18.896725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 20 15:08:19.095737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:08:21.050446 containerd[1642]: time="2026-04-20T15:08:21.046732503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:21.057668 containerd[1642]: time="2026-04-20T15:08:21.054119434Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=1, bytes read=23874306" Apr 20 15:08:21.071786 containerd[1642]: time="2026-04-20T15:08:21.068714472Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:21.095839 containerd[1642]: time="2026-04-20T15:08:21.095508572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:21.110439 containerd[1642]: time="2026-04-20T15:08:21.108570736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 27.318197758s" Apr 20 15:08:21.110439 containerd[1642]: time="2026-04-20T15:08:21.109512712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference 
\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 20 15:08:21.123567 containerd[1642]: time="2026-04-20T15:08:21.122315223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 20 15:08:21.842593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:08:21.909358 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:08:22.610334 kubelet[2229]: E0420 15:08:22.609252 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:08:22.613749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:08:22.615384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:08:22.616450 systemd[1]: kubelet.service: Consumed 2.576s CPU time, 110.4M memory peak. Apr 20 15:08:23.043639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236659564.mount: Deactivated successfully. 
Apr 20 15:08:30.058760 containerd[1642]: time="2026-04-20T15:08:30.057430323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:30.081433 containerd[1642]: time="2026-04-20T15:08:30.064599531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22126772" Apr 20 15:08:30.087528 containerd[1642]: time="2026-04-20T15:08:30.081917823Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:30.122259 containerd[1642]: time="2026-04-20T15:08:30.121552412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:30.153613 containerd[1642]: time="2026-04-20T15:08:30.151700959Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 9.029210892s" Apr 20 15:08:30.153613 containerd[1642]: time="2026-04-20T15:08:30.151884029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 20 15:08:30.162750 containerd[1642]: time="2026-04-20T15:08:30.160562999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 20 15:08:32.499845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904429745.mount: Deactivated successfully. 
Apr 20 15:08:32.503396 containerd[1642]: time="2026-04-20T15:08:32.502703027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 15:08:32.512276 containerd[1642]: time="2026-04-20T15:08:32.510207354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=892" Apr 20 15:08:32.515725 containerd[1642]: time="2026-04-20T15:08:32.515382525Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 15:08:32.671601 containerd[1642]: time="2026-04-20T15:08:32.670402850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 15:08:32.674505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Apr 20 15:08:32.697722 containerd[1642]: time="2026-04-20T15:08:32.697222524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.534539294s" Apr 20 15:08:32.697722 containerd[1642]: time="2026-04-20T15:08:32.697287258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 20 15:08:32.712678 containerd[1642]: time="2026-04-20T15:08:32.710708809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 20 15:08:32.714386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:08:34.328358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:08:34.425548 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:08:35.252930 kubelet[2302]: E0420 15:08:35.252173 2302 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:08:35.273548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:08:35.282916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:08:35.287579 systemd[1]: kubelet.service: Consumed 1.661s CPU time, 110.4M memory peak. Apr 20 15:08:36.003885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942130448.mount: Deactivated successfully. 
Apr 20 15:08:43.838225 containerd[1642]: time="2026-04-20T15:08:43.837542188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:43.840266 containerd[1642]: time="2026-04-20T15:08:43.839894411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22863484" Apr 20 15:08:43.845533 containerd[1642]: time="2026-04-20T15:08:43.845153873Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:43.854468 containerd[1642]: time="2026-04-20T15:08:43.853915410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:08:43.856193 containerd[1642]: time="2026-04-20T15:08:43.855925284Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 11.145078352s" Apr 20 15:08:43.856278 containerd[1642]: time="2026-04-20T15:08:43.856219137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 20 15:08:45.417814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 20 15:08:45.540634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:08:47.024953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:08:47.106305 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:08:47.442546 kubelet[2400]: E0420 15:08:47.441368 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:08:47.463829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:08:47.466672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:08:47.472541 systemd[1]: kubelet.service: Consumed 1.203s CPU time, 109.2M memory peak. Apr 20 15:08:57.638439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 20 15:08:57.725903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:08:58.323244 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 20 15:08:58.323296 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 20 15:08:58.325686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:08:58.483652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:08:58.533614 systemd[1]: Reload requested from client PID 2425 ('systemctl') (unit session-6.scope)... Apr 20 15:08:58.533750 systemd[1]: Reloading... Apr 20 15:08:59.015518 systemd-ssh-generator[2471]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 15:08:59.027430 zram_generator::config[2479]: No configuration found. Apr 20 15:08:59.146452 (sd-exec-strv)[2456]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. 
Apr 20 15:08:59.709649 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 15:09:00.113561 systemd[1]: Reloading finished in 1574 ms. Apr 20 15:09:00.288610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 20 15:09:00.288741 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 20 15:09:00.289461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:09:00.289525 systemd[1]: kubelet.service: Consumed 310ms CPU time, 98.5M memory peak. Apr 20 15:09:00.293266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:09:01.178584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:09:01.205490 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 15:09:01.605936 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 15:09:01.605936 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 20 15:09:01.605936 kubelet[2527]: I0420 15:09:01.603940 2527 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 15:09:02.840373 kubelet[2527]: I0420 15:09:02.839451 2527 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 20 15:09:02.840373 kubelet[2527]: I0420 15:09:02.839870 2527 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 15:09:02.840373 kubelet[2527]: I0420 15:09:02.840134 2527 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 15:09:02.840373 kubelet[2527]: I0420 15:09:02.840165 2527 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 15:09:02.851559 kubelet[2527]: I0420 15:09:02.840863 2527 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 15:09:02.928655 kubelet[2527]: E0420 15:09:02.926939 2527 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:09:02.933392 kubelet[2527]: I0420 15:09:02.933322 2527 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 15:09:03.168682 kubelet[2527]: I0420 15:09:03.166497 2527 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 15:09:03.871674 kubelet[2527]: I0420 15:09:03.870878 2527 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 20 15:09:03.881746 kubelet[2527]: I0420 15:09:03.880688 2527 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 15:09:03.887313 kubelet[2527]: I0420 15:09:03.881640 2527 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 15:09:03.887313 kubelet[2527]: I0420 15:09:03.883910 2527 topology_manager.go:138] "Creating topology manager with none policy" Apr 20 15:09:03.887313 
kubelet[2527]: I0420 15:09:03.883929 2527 container_manager_linux.go:306] "Creating device plugin manager" Apr 20 15:09:03.895811 kubelet[2527]: I0420 15:09:03.887478 2527 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 15:09:03.922888 kubelet[2527]: I0420 15:09:03.922452 2527 state_mem.go:36] "Initialized new in-memory state store" Apr 20 15:09:03.946694 kubelet[2527]: I0420 15:09:03.943846 2527 kubelet.go:475] "Attempting to sync node with API server" Apr 20 15:09:03.946694 kubelet[2527]: I0420 15:09:03.946537 2527 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 15:09:03.946694 kubelet[2527]: I0420 15:09:03.946644 2527 kubelet.go:387] "Adding apiserver pod source" Apr 20 15:09:03.946694 kubelet[2527]: I0420 15:09:03.946663 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 15:09:03.950600 kubelet[2527]: E0420 15:09:03.948397 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:09:03.950600 kubelet[2527]: E0420 15:09:03.948784 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:09:03.952794 kubelet[2527]: I0420 15:09:03.952562 2527 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 15:09:03.953557 kubelet[2527]: I0420 15:09:03.953453 2527 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 15:09:03.953638 kubelet[2527]: I0420 15:09:03.953584 2527 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 15:09:03.953750 kubelet[2527]: W0420 15:09:03.953735 2527 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 20 15:09:04.047185 kubelet[2527]: I0420 15:09:04.046481 2527 server.go:1262] "Started kubelet" Apr 20 15:09:04.050788 kubelet[2527]: I0420 15:09:04.049843 2527 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 15:09:04.050788 kubelet[2527]: I0420 15:09:04.050096 2527 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 15:09:04.059356 kubelet[2527]: I0420 15:09:04.058163 2527 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 15:09:04.069418 kubelet[2527]: I0420 15:09:04.068885 2527 server.go:310] "Adding debug handlers to kubelet server" Apr 20 15:09:04.070661 kubelet[2527]: I0420 15:09:04.070166 2527 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 15:09:04.071813 kubelet[2527]: I0420 15:09:04.071790 2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 20 15:09:04.077539 kubelet[2527]: E0420 15:09:04.069622 2527 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8192f531ff33f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,LastTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:09:04.097850 kubelet[2527]: E0420 15:09:04.084899 2527 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:09:04.097850 kubelet[2527]: I0420 15:09:04.085124 2527 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 20 15:09:04.097850 kubelet[2527]: I0420 15:09:04.085966 2527 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 15:09:04.097850 kubelet[2527]: I0420 15:09:04.086465 2527 reconciler.go:29] "Reconciler: start to sync state" Apr 20 15:09:04.097850 kubelet[2527]: E0420 15:09:04.086933 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:09:04.097850 kubelet[2527]: E0420 15:09:04.086962 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Apr 20 15:09:04.097850 kubelet[2527]: I0420 15:09:04.084460 2527 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 15:09:04.099962 kubelet[2527]: E0420 15:09:04.099938 2527 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 15:09:04.119430 kubelet[2527]: I0420 15:09:04.116804 2527 factory.go:223] Registration of the containerd container factory successfully Apr 20 15:09:04.119430 kubelet[2527]: I0420 15:09:04.116838 2527 factory.go:223] Registration of the systemd container factory successfully Apr 20 15:09:04.119430 kubelet[2527]: I0420 15:09:04.116954 2527 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 15:09:04.185888 kubelet[2527]: E0420 15:09:04.185526 2527 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:09:04.194670 kubelet[2527]: I0420 15:09:04.192771 2527 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 20 15:09:04.194670 kubelet[2527]: I0420 15:09:04.192795 2527 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 20 15:09:04.194670 kubelet[2527]: I0420 15:09:04.192820 2527 state_mem.go:36] "Initialized new in-memory state store" Apr 20 15:09:04.215712 kubelet[2527]: I0420 15:09:04.214481 2527 policy_none.go:49] "None policy: Start" Apr 20 15:09:04.215712 kubelet[2527]: I0420 15:09:04.214609 2527 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 20 15:09:04.215712 kubelet[2527]: I0420 15:09:04.214632 2527 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 20 15:09:04.219859 kubelet[2527]: I0420 15:09:04.219731 2527 policy_none.go:47] "Start" Apr 20 15:09:04.281618 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 20 15:09:04.286638 kubelet[2527]: E0420 15:09:04.285895 2527 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:09:04.286638 kubelet[2527]: I0420 15:09:04.285182 2527 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 15:09:04.295846 kubelet[2527]: E0420 15:09:04.295703 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Apr 20 15:09:04.304349 kubelet[2527]: I0420 15:09:04.303674 2527 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 20 15:09:04.304349 kubelet[2527]: I0420 15:09:04.303717 2527 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 20 15:09:04.304349 kubelet[2527]: I0420 15:09:04.303763 2527 kubelet.go:2428] "Starting kubelet main sync loop" Apr 20 15:09:04.311704 kubelet[2527]: E0420 15:09:04.307713 2527 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 15:09:04.312953 kubelet[2527]: E0420 15:09:04.312769 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:09:04.346643 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 20 15:09:04.364444 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 20 15:09:04.391820 kubelet[2527]: E0420 15:09:04.387504 2527 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:09:04.397679 kubelet[2527]: E0420 15:09:04.397266 2527 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 15:09:04.398840 kubelet[2527]: I0420 15:09:04.397962 2527 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 20 15:09:04.398840 kubelet[2527]: I0420 15:09:04.398144 2527 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 15:09:04.398840 kubelet[2527]: I0420 15:09:04.398543 2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 20 15:09:04.403675 kubelet[2527]: E0420 15:09:04.403333 2527 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 20 15:09:04.403675 kubelet[2527]: E0420 15:09:04.403384 2527 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:09:04.488612 kubelet[2527]: E0420 15:09:04.475446 2527 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8192f531ff33f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,LastTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:09:04.573547 kubelet[2527]: I0420 15:09:04.571450 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:04.590140 kubelet[2527]: I0420 15:09:04.588961 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77735e41b4153281131387b55637c08c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77735e41b4153281131387b55637c08c\") " pod="kube-system/kube-apiserver-localhost" Apr 20 15:09:04.597351 kubelet[2527]: E0420 15:09:04.592622 2527 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 20 15:09:04.602875 kubelet[2527]: I0420 15:09:04.598655 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:04.602875 kubelet[2527]: I0420 15:09:04.598694 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:04.602875 kubelet[2527]: I0420 15:09:04.598767 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:04.602875 kubelet[2527]: I0420 15:09:04.598921 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:04.602875 kubelet[2527]: I0420 15:09:04.602187 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77735e41b4153281131387b55637c08c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77735e41b4153281131387b55637c08c\") " pod="kube-system/kube-apiserver-localhost" Apr 20 15:09:04.641576 kubelet[2527]: I0420 15:09:04.609124 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77735e41b4153281131387b55637c08c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77735e41b4153281131387b55637c08c\") " pod="kube-system/kube-apiserver-localhost" Apr 20 15:09:04.641576 kubelet[2527]: I0420 15:09:04.624521 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:04.641576 kubelet[2527]: I0420 15:09:04.635883 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 20 15:09:04.715889 kubelet[2527]: E0420 15:09:04.715587 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Apr 20 15:09:04.849566 kubelet[2527]: I0420 15:09:04.838915 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:04.858711 kubelet[2527]: E0420 15:09:04.850966 2527 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 20 15:09:04.938442 systemd[1]: Created slice kubepods-burstable-pod77735e41b4153281131387b55637c08c.slice - libcontainer container kubepods-burstable-pod77735e41b4153281131387b55637c08c.slice. 
Apr 20 15:09:04.957541 kubelet[2527]: E0420 15:09:04.946385 2527 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:09:05.248571 kubelet[2527]: E0420 15:09:05.245607 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:05.296819 kubelet[2527]: E0420 15:09:05.294720 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:09:05.314945 kubelet[2527]: I0420 15:09:05.314736 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:05.324327 kubelet[2527]: E0420 15:09:05.322950 2527 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 20 15:09:05.341610 kubelet[2527]: E0420 15:09:05.340639 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:05.458949 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 20 15:09:05.538689 kubelet[2527]: E0420 15:09:05.533739 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:09:05.566799 kubelet[2527]: E0420 15:09:05.566568 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:09:05.575421 kubelet[2527]: E0420 15:09:05.573779 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Apr 20 15:09:05.630526 containerd[1642]: time="2026-04-20T15:09:05.622921918Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"77735e41b4153281131387b55637c08c\" namespace:\"kube-system\"" Apr 20 15:09:05.650771 kubelet[2527]: E0420 15:09:05.650194 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:05.700433 kubelet[2527]: E0420 15:09:05.699866 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:05.710837 containerd[1642]: time="2026-04-20T15:09:05.710512126Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"c6bb8708a026256e82ca4c5631a78b5a\" 
namespace:\"kube-system\"" Apr 20 15:09:05.720351 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 20 15:09:05.841866 kubelet[2527]: E0420 15:09:05.838150 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:05.948562 kubelet[2527]: E0420 15:09:05.948115 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:09:05.989280 kubelet[2527]: E0420 15:09:05.988647 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:05.997741 containerd[1642]: time="2026-04-20T15:09:05.991417925Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"824fd89300514e351ed3b68d82c665c6\" namespace:\"kube-system\"" Apr 20 15:09:06.301493 kubelet[2527]: I0420 15:09:06.300424 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:06.304338 kubelet[2527]: E0420 15:09:06.303914 2527 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 20 15:09:06.429474 containerd[1642]: time="2026-04-20T15:09:06.428177766Z" level=info msg="connecting to shim a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804" address="unix:///run/containerd/s/0b13aec05e8b4b402034243c5b9da5270b125230e17b6f2aa20dc24b46f8f615" namespace=k8s.io protocol=ttrpc version=3 
Apr 20 15:09:06.688966 containerd[1642]: time="2026-04-20T15:09:06.688715934Z" level=info msg="connecting to shim eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2" address="unix:///run/containerd/s/e3520cba3d47f695f01bdf9b9ffdbd7a5c1e7f5540952fe7464054819c712fa9" namespace=k8s.io protocol=ttrpc version=3 Apr 20 15:09:06.803101 containerd[1642]: time="2026-04-20T15:09:06.801349076Z" level=info msg="connecting to shim 4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9" address="unix:///run/containerd/s/cf24567a77901a2a777df8cdc50d09c11b84c57a463a127a8f70d3c430a1db55" namespace=k8s.io protocol=ttrpc version=3 Apr 20 15:09:07.532499 kubelet[2527]: E0420 15:09:07.477482 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Apr 20 15:09:07.614602 kubelet[2527]: E0420 15:09:07.613823 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:09:08.202379 kubelet[2527]: E0420 15:09:08.200713 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:09:08.308777 kubelet[2527]: I0420 15:09:08.307617 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:08.427666 kubelet[2527]: E0420 15:09:08.424741 2527 reflector.go:205] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:09:08.427666 kubelet[2527]: E0420 15:09:08.424715 2527 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 20 15:09:08.523423 kubelet[2527]: E0420 15:09:08.522601 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:09:08.587180 systemd[1]: Started cri-containerd-eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2.scope - libcontainer container eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2. Apr 20 15:09:08.962530 systemd[1]: Started cri-containerd-4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9.scope - libcontainer container 4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9. Apr 20 15:09:09.144795 systemd[1]: Started cri-containerd-a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804.scope - libcontainer container a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804. 
Apr 20 15:09:09.351172 kubelet[2527]: E0420 15:09:09.342452 2527 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:09:10.793361 kubelet[2527]: E0420 15:09:10.792745 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="6.4s" Apr 20 15:09:12.965839 kubelet[2527]: E0420 15:09:12.957776 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:09:12.986959 kubelet[2527]: E0420 15:09:12.984924 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:09:13.250779 kubelet[2527]: I0420 15:09:13.248966 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:13.283689 kubelet[2527]: E0420 15:09:13.282848 2527 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 20 15:09:13.295091 containerd[1642]: time="2026-04-20T15:09:13.294757432Z" 
level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"c6bb8708a026256e82ca4c5631a78b5a\" namespace:\"kube-system\" returns sandbox id \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\"" Apr 20 15:09:13.341613 kubelet[2527]: E0420 15:09:13.340400 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:13.609780 containerd[1642]: time="2026-04-20T15:09:13.602450604Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"77735e41b4153281131387b55637c08c\" namespace:\"kube-system\" returns sandbox id \"a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804\"" Apr 20 15:09:13.724633 containerd[1642]: time="2026-04-20T15:09:13.721390195Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"824fd89300514e351ed3b68d82c665c6\" namespace:\"kube-system\" returns sandbox id \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\"" Apr 20 15:09:13.735931 kubelet[2527]: E0420 15:09:13.728915 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:13.828821 kubelet[2527]: E0420 15:09:13.828546 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:14.239529 kubelet[2527]: E0420 15:09:14.233866 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:09:14.330737 kubelet[2527]: E0420 
15:09:14.330386 2527 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:09:14.342324 containerd[1642]: time="2026-04-20T15:09:14.341918105Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for container name:\"kube-controller-manager\"" Apr 20 15:09:14.433834 kubelet[2527]: E0420 15:09:14.411127 2527 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:09:14.499414 kubelet[2527]: E0420 15:09:14.492647 2527 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8192f531ff33f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,LastTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:09:14.515481 containerd[1642]: time="2026-04-20T15:09:14.515210767Z" level=info msg="CreateContainer within sandbox \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\" for container name:\"kube-scheduler\"" Apr 20 15:09:14.545731 containerd[1642]: time="2026-04-20T15:09:14.544804471Z" level=info 
msg="CreateContainer within sandbox \"a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804\" for container name:\"kube-apiserver\"" Apr 20 15:09:15.179892 containerd[1642]: time="2026-04-20T15:09:15.177949310Z" level=info msg="Container d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:09:15.236647 containerd[1642]: time="2026-04-20T15:09:15.223891769Z" level=info msg="Container 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:09:15.257865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146076176.mount: Deactivated successfully. Apr 20 15:09:15.325754 containerd[1642]: time="2026-04-20T15:09:15.320746650Z" level=info msg="Container cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:09:15.352770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182557794.mount: Deactivated successfully. 
Apr 20 15:09:15.604251 containerd[1642]: time="2026-04-20T15:09:15.594165289Z" level=info msg="CreateContainer within sandbox \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\" for name:\"kube-scheduler\" returns container id \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\"" Apr 20 15:09:15.650372 containerd[1642]: time="2026-04-20T15:09:15.637435130Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for name:\"kube-controller-manager\" returns container id \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\"" Apr 20 15:09:15.695769 containerd[1642]: time="2026-04-20T15:09:15.659952563Z" level=info msg="StartContainer for \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\"" Apr 20 15:09:15.706730 containerd[1642]: time="2026-04-20T15:09:15.660159603Z" level=info msg="StartContainer for \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\"" Apr 20 15:09:15.717229 containerd[1642]: time="2026-04-20T15:09:15.660168842Z" level=info msg="CreateContainer within sandbox \"a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804\" for name:\"kube-apiserver\" returns container id \"cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719\"" Apr 20 15:09:15.730428 containerd[1642]: time="2026-04-20T15:09:15.729659835Z" level=info msg="StartContainer for \"cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719\"" Apr 20 15:09:15.733942 containerd[1642]: time="2026-04-20T15:09:15.733794607Z" level=info msg="connecting to shim 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa" address="unix:///run/containerd/s/e3520cba3d47f695f01bdf9b9ffdbd7a5c1e7f5540952fe7464054819c712fa9" protocol=ttrpc version=3 Apr 20 15:09:15.766816 containerd[1642]: time="2026-04-20T15:09:15.766509131Z" level=info msg="connecting to shim cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719" 
address="unix:///run/containerd/s/0b13aec05e8b4b402034243c5b9da5270b125230e17b6f2aa20dc24b46f8f615" protocol=ttrpc version=3 Apr 20 15:09:15.776742 containerd[1642]: time="2026-04-20T15:09:15.776342510Z" level=info msg="connecting to shim d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6" address="unix:///run/containerd/s/cf24567a77901a2a777df8cdc50d09c11b84c57a463a127a8f70d3c430a1db55" protocol=ttrpc version=3 Apr 20 15:09:16.327928 systemd[1]: Started cri-containerd-cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719.scope - libcontainer container cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719. Apr 20 15:09:16.999832 systemd[1]: Started cri-containerd-1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa.scope - libcontainer container 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa. Apr 20 15:09:17.017260 systemd[1]: Started cri-containerd-d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6.scope - libcontainer container d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6. 
Apr 20 15:09:17.205907 kubelet[2527]: E0420 15:09:17.205678 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="7s" Apr 20 15:09:17.815827 kubelet[2527]: E0420 15:09:17.810952 2527 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:09:18.120917 containerd[1642]: time="2026-04-20T15:09:18.119713640Z" level=info msg="StartContainer for \"cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719\" returns successfully" Apr 20 15:09:18.645443 containerd[1642]: time="2026-04-20T15:09:18.644220509Z" level=info msg="StartContainer for \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" returns successfully" Apr 20 15:09:18.702459 containerd[1642]: time="2026-04-20T15:09:18.701659903Z" level=info msg="StartContainer for \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" returns successfully" Apr 20 15:09:19.630281 kubelet[2527]: E0420 15:09:19.629921 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:19.642860 kubelet[2527]: E0420 15:09:19.631691 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:19.642860 kubelet[2527]: E0420 15:09:19.636477 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:19.642860 kubelet[2527]: E0420 15:09:19.636715 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:19.642860 kubelet[2527]: E0420 15:09:19.637109 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:19.642860 kubelet[2527]: E0420 15:09:19.637943 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:19.708348 kubelet[2527]: I0420 15:09:19.707497 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:09:20.699836 kubelet[2527]: E0420 15:09:20.698804 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:20.713874 kubelet[2527]: E0420 15:09:20.702287 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:20.713874 kubelet[2527]: E0420 15:09:20.705764 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:20.713874 kubelet[2527]: E0420 15:09:20.708183 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:20.784561 kubelet[2527]: E0420 15:09:20.779790 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:20.801634 
kubelet[2527]: E0420 15:09:20.801448 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:22.216509 kubelet[2527]: E0420 15:09:22.216130 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:22.248735 kubelet[2527]: E0420 15:09:22.232289 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:22.248735 kubelet[2527]: E0420 15:09:22.242234 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:22.273190 kubelet[2527]: E0420 15:09:22.270824 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:23.102438 kubelet[2527]: E0420 15:09:23.101895 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:09:23.103290 kubelet[2527]: E0420 15:09:23.102886 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:24.418513 kubelet[2527]: E0420 15:09:24.416652 2527 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:09:28.881878 kubelet[2527]: E0420 15:09:28.875860 2527 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 
15:09:28.885693 kubelet[2527]: E0420 15:09:28.883616 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:28.914871 kubelet[2527]: E0420 15:09:28.914698 2527 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 20 15:09:29.078924 kubelet[2527]: I0420 15:09:29.078161 2527 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 15:09:29.091410 kubelet[2527]: I0420 15:09:29.090846 2527 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 15:09:29.103903 kubelet[2527]: E0420 15:09:29.103322 2527 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a8192f531ff33f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,LastTimestamp:2026-04-20 15:09:04.046420799 +0000 UTC m=+2.830904246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:09:29.193753 kubelet[2527]: E0420 15:09:29.192753 2527 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 20 15:09:29.193753 kubelet[2527]: I0420 15:09:29.192869 2527 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:29.202247 kubelet[2527]: 
E0420 15:09:29.199744 2527 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:29.202247 kubelet[2527]: I0420 15:09:29.199774 2527 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 15:09:29.231157 kubelet[2527]: E0420 15:09:29.230643 2527 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 20 15:09:29.519766 kubelet[2527]: I0420 15:09:29.513148 2527 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:29.545807 kubelet[2527]: E0420 15:09:29.542866 2527 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:09:29.551882 kubelet[2527]: I0420 15:09:29.551487 2527 apiserver.go:52] "Watching apiserver" Apr 20 15:09:29.580161 kubelet[2527]: E0420 15:09:29.577934 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:29.592789 kubelet[2527]: I0420 15:09:29.592439 2527 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 15:09:32.757278 kubelet[2527]: I0420 15:09:32.755931 2527 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 15:09:32.947729 kubelet[2527]: E0420 15:09:32.946459 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:33.742362 kubelet[2527]: E0420 15:09:33.740214 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:47.448522 kubelet[2527]: E0420 15:09:47.448143 2527 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.122s" Apr 20 15:09:49.842328 kubelet[2527]: E0420 15:09:49.841871 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:49.917905 kubelet[2527]: I0420 15:09:49.917259 2527 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 15:09:49.943799 kubelet[2527]: I0420 15:09:49.940807 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=17.936963376 podStartE2EDuration="17.936963376s" podCreationTimestamp="2026-04-20 15:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:09:34.890803879 +0000 UTC m=+33.675287294" watchObservedRunningTime="2026-04-20 15:09:49.936963376 +0000 UTC m=+48.721446784" Apr 20 15:09:50.094893 kubelet[2527]: E0420 15:09:50.094166 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:50.174919 kubelet[2527]: I0420 15:09:50.174551 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.170965014 podStartE2EDuration="170.965014ms" podCreationTimestamp="2026-04-20 15:09:50 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:09:50.170853362 +0000 UTC m=+48.955336784" watchObservedRunningTime="2026-04-20 15:09:50.170965014 +0000 UTC m=+48.955448435" Apr 20 15:09:50.906926 kubelet[2527]: E0420 15:09:50.904750 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:51.988356 kubelet[2527]: E0420 15:09:51.987843 2527 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:09:52.281408 systemd[1]: Reload requested from client PID 2819 ('systemctl') (unit session-6.scope)... Apr 20 15:09:52.281753 systemd[1]: Reloading... Apr 20 15:09:53.122391 systemd-ssh-generator[2869]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 15:09:53.123597 zram_generator::config[2873]: No configuration found. Apr 20 15:09:53.149204 (sd-exec-[2850]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 15:09:54.722798 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 15:09:56.343419 systemd[1]: Reloading finished in 4058 ms. Apr 20 15:09:56.870416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:09:57.000659 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 15:09:57.013383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:09:57.014320 systemd[1]: kubelet.service: Consumed 24.514s CPU time, 131.1M memory peak. Apr 20 15:09:57.045447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:09:59.856413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:09:59.953221 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 15:10:02.599681 kubelet[2917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 15:10:02.630206 kubelet[2917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 15:10:02.630206 kubelet[2917]: I0420 15:10:02.603295 2917 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 15:10:02.881317 kubelet[2917]: I0420 15:10:02.880838 2917 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 20 15:10:02.886134 kubelet[2917]: I0420 15:10:02.884465 2917 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 15:10:02.886134 kubelet[2917]: I0420 15:10:02.884834 2917 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 15:10:02.886134 kubelet[2917]: I0420 15:10:02.884846 2917 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 20 15:10:02.887371 kubelet[2917]: I0420 15:10:02.887353 2917 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 15:10:02.907836 kubelet[2917]: I0420 15:10:02.907426 2917 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 15:10:02.957824 kubelet[2917]: I0420 15:10:02.955881 2917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 15:10:03.173819 kubelet[2917]: I0420 15:10:03.167925 2917 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 15:10:03.320361 kubelet[2917]: I0420 15:10:03.318425 2917 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 20 15:10:03.320361 kubelet[2917]: I0420 15:10:03.319202 2917 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 15:10:03.320361 kubelet[2917]: I0420 15:10:03.319331 2917 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 20 15:10:03.320361 kubelet[2917]: I0420 15:10:03.320302 2917 topology_manager.go:138] "Creating topology manager with none policy"
Apr 20 15:10:03.328853 kubelet[2917]: I0420 15:10:03.320400 2917 container_manager_linux.go:306] "Creating device plugin manager"
Apr 20 15:10:03.328853 kubelet[2917]: I0420 15:10:03.320424 2917 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 20 15:10:03.328853 kubelet[2917]: I0420 15:10:03.327658 2917 state_mem.go:36] "Initialized new in-memory state store"
Apr 20 15:10:03.329612 kubelet[2917]: I0420 15:10:03.329356 2917 kubelet.go:475] "Attempting to sync node with API server"
Apr 20 15:10:03.329612 kubelet[2917]: I0420 15:10:03.329469 2917 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 20 15:10:03.329725 kubelet[2917]: I0420 15:10:03.329623 2917 kubelet.go:387] "Adding apiserver pod source"
Apr 20 15:10:03.329725 kubelet[2917]: I0420 15:10:03.329634 2917 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 20 15:10:03.396304 kubelet[2917]: I0420 15:10:03.392847 2917 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1"
Apr 20 15:10:03.399506 kubelet[2917]: I0420 15:10:03.399482 2917 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 20 15:10:03.399506 kubelet[2917]: I0420 15:10:03.401304 2917 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 20 15:10:03.637610 kubelet[2917]: I0420 15:10:03.636844 2917 server.go:1262] "Started kubelet"
Apr 20 15:10:03.652462 kubelet[2917]: I0420 15:10:03.644363 2917 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 20 15:10:03.658830 kubelet[2917]: I0420 15:10:03.657422 2917 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 20 15:10:03.658830 kubelet[2917]: I0420 15:10:03.657916 2917 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 20 15:10:03.815914 kubelet[2917]: I0420 15:10:03.815473 2917 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 20 15:10:03.819728 kubelet[2917]: I0420 15:10:03.818690 2917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 20 15:10:03.845273 kubelet[2917]: I0420 15:10:03.844331 2917 server.go:310] "Adding debug handlers to kubelet server"
Apr 20 15:10:03.854926 kubelet[2917]: I0420 15:10:03.854486 2917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 20 15:10:03.858609 kubelet[2917]: I0420 15:10:03.857940 2917 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 20 15:10:03.858766 kubelet[2917]: I0420 15:10:03.858687 2917 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 20 15:10:03.937498 kubelet[2917]: I0420 15:10:03.925764 2917 reconciler.go:29] "Reconciler: start to sync state"
Apr 20 15:10:04.134712 kubelet[2917]: I0420 15:10:04.130465 2917 factory.go:223] Registration of the systemd container factory successfully
Apr 20 15:10:04.156401 kubelet[2917]: I0420 15:10:04.154380 2917 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 20 15:10:04.180885 kubelet[2917]: E0420 15:10:04.180503 2917 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 20 15:10:04.305691 kubelet[2917]: I0420 15:10:04.304508 2917 factory.go:223] Registration of the containerd container factory successfully
Apr 20 15:10:04.342959 kubelet[2917]: I0420 15:10:04.340330 2917 apiserver.go:52] "Watching apiserver"
Apr 20 15:10:04.628154 kubelet[2917]: I0420 15:10:04.625879 2917 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 20 15:10:04.641494 kubelet[2917]: I0420 15:10:04.640869 2917 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 20 15:10:04.641494 kubelet[2917]: I0420 15:10:04.641307 2917 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 20 15:10:04.641494 kubelet[2917]: I0420 15:10:04.641424 2917 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 20 15:10:04.644766 kubelet[2917]: E0420 15:10:04.641861 2917 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 15:10:04.794730 kubelet[2917]: E0420 15:10:04.793886 2917 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 15:10:04.997814 kubelet[2917]: E0420 15:10:04.996965 2917 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 15:10:05.472483 kubelet[2917]: E0420 15:10:05.424898 2917 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 15:10:05.914729 kubelet[2917]: I0420 15:10:05.914448 2917 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916197 2917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916227 2917 state_mem.go:36] "Initialized new in-memory state store"
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916376 2917 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916388 2917 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916405 2917 policy_none.go:49] "None policy: Start"
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916415 2917 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 20 15:10:05.916506 kubelet[2917]: I0420 15:10:05.916425 2917 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 20 15:10:05.917195 kubelet[2917]: I0420 15:10:05.916699 2917 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 20 15:10:05.917195 kubelet[2917]: I0420 15:10:05.916710 2917 policy_none.go:47] "Start"
Apr 20 15:10:06.040435 kubelet[2917]: E0420 15:10:06.037916 2917 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 20 15:10:06.040435 kubelet[2917]: I0420 15:10:06.038436 2917 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 20 15:10:06.040435 kubelet[2917]: I0420 15:10:06.038444 2917 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 20 15:10:06.049310 kubelet[2917]: I0420 15:10:06.044813 2917 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 20 15:10:06.072684 kubelet[2917]: E0420 15:10:06.072442 2917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 20 15:10:06.317914 kubelet[2917]: I0420 15:10:06.307780 2917 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:10:06.317914 kubelet[2917]: I0420 15:10:06.312756 2917 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 20 15:10:06.317914 kubelet[2917]: I0420 15:10:06.316500 2917 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:10:06.323754 kubelet[2917]: I0420 15:10:06.319901 2917 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 20 15:10:06.394755 kubelet[2917]: I0420 15:10:06.390170 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77735e41b4153281131387b55637c08c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77735e41b4153281131387b55637c08c\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 15:10:06.412433 kubelet[2917]: I0420 15:10:06.401453 2917 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 20 15:10:06.412433 kubelet[2917]: I0420 15:10:06.404682 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77735e41b4153281131387b55637c08c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77735e41b4153281131387b55637c08c\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 15:10:06.412433 kubelet[2917]: I0420 15:10:06.406797 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77735e41b4153281131387b55637c08c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77735e41b4153281131387b55637c08c\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 15:10:06.456379 kubelet[2917]: E0420 15:10:06.452516 2917 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 20 15:10:06.475420 kubelet[2917]: I0420 15:10:06.474497 2917 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 20 15:10:06.475420 kubelet[2917]: I0420 15:10:06.475202 2917 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 20 15:10:06.480332 kubelet[2917]: E0420 15:10:06.450959 2917 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 20 15:10:06.514901 kubelet[2917]: I0420 15:10:06.511627 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:10:06.514901 kubelet[2917]: I0420 15:10:06.511882 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:10:06.514901 kubelet[2917]: I0420 15:10:06.514491 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:10:06.520689 kubelet[2917]: I0420 15:10:06.514955 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:10:06.520689 kubelet[2917]: I0420 15:10:06.520506 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 20 15:10:06.520791 kubelet[2917]: I0420 15:10:06.520717 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:10:06.835337 kubelet[2917]: E0420 15:10:06.831808 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:06.836524 kubelet[2917]: E0420 15:10:06.835949 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:07.110767 kubelet[2917]: E0420 15:10:07.088515 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:07.925274 kubelet[2917]: E0420 15:10:07.917477 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.251s"
Apr 20 15:10:08.316360 kubelet[2917]: E0420 15:10:08.189797 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:08.549512 kubelet[2917]: E0420 15:10:08.316734 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:08.549512 kubelet[2917]: E0420 15:10:08.518820 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:09.231269 kubelet[2917]: E0420 15:10:09.230820 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:09.238902 kubelet[2917]: E0420 15:10:09.231431 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:09.887207 kubelet[2917]: I0420 15:10:09.884780 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.884682826 podStartE2EDuration="3.884682826s" podCreationTimestamp="2026-04-20 15:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:10:08.634956911 +0000 UTC m=+8.444575227" watchObservedRunningTime="2026-04-20 15:10:09.884682826 +0000 UTC m=+9.694301145"
Apr 20 15:10:10.107400 kubelet[2917]: I0420 15:10:10.102421 2917 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 20 15:10:10.220508 containerd[1642]: time="2026-04-20T15:10:10.209331452Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 20 15:10:10.496938 kubelet[2917]: I0420 15:10:10.468895 2917 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 20 15:10:10.645399 kubelet[2917]: E0420 15:10:10.643220 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:10.776749 kubelet[2917]: E0420 15:10:10.771758 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:11.440429 kubelet[2917]: E0420 15:10:11.438928 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:11.850455 kubelet[2917]: E0420 15:10:11.823419 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:11.907292 kubelet[2917]: E0420 15:10:11.848141 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:12.970470 kubelet[2917]: E0420 15:10:12.967483 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:18.800560 kubelet[2917]: E0420 15:10:18.795791 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.125s"
Apr 20 15:10:21.054730 kubelet[2917]: E0420 15:10:21.051221 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.253s"
Apr 20 15:10:21.128470 kubelet[2917]: E0420 15:10:21.126818 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:22.156434 kubelet[2917]: I0420 15:10:22.151958 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7-lib-modules\") pod \"kube-proxy-mgfpp\" (UID: \"a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7\") " pod="kube-system/kube-proxy-mgfpp"
Apr 20 15:10:22.193202 kubelet[2917]: I0420 15:10:22.185357 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc89t\" (UniqueName: \"kubernetes.io/projected/a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7-kube-api-access-kc89t\") pod \"kube-proxy-mgfpp\" (UID: \"a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7\") " pod="kube-system/kube-proxy-mgfpp"
Apr 20 15:10:22.193202 kubelet[2917]: I0420 15:10:22.185515 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7-kube-proxy\") pod \"kube-proxy-mgfpp\" (UID: \"a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7\") " pod="kube-system/kube-proxy-mgfpp"
Apr 20 15:10:22.193202 kubelet[2917]: I0420 15:10:22.185551 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7-xtables-lock\") pod \"kube-proxy-mgfpp\" (UID: \"a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7\") " pod="kube-system/kube-proxy-mgfpp"
Apr 20 15:10:22.838739 systemd[1]: Created slice kubepods-besteffort-poda4d2cb71_c123_4673_ac4c_f5f0cf3c12d7.slice - libcontainer container kubepods-besteffort-poda4d2cb71_c123_4673_ac4c_f5f0cf3c12d7.slice.
Apr 20 15:10:23.475786 kubelet[2917]: E0420 15:10:23.473253 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:24.772865 kubelet[2917]: E0420 15:10:24.768489 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:24.798195 containerd[1642]: time="2026-04-20T15:10:24.797486189Z" level=info msg="RunPodSandbox for name:\"kube-proxy-mgfpp\" uid:\"a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7\" namespace:\"kube-system\""
Apr 20 15:10:25.966311 containerd[1642]: time="2026-04-20T15:10:25.963799231Z" level=info msg="connecting to shim 1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127" address="unix:///run/containerd/s/48eeadccd549d34c115d1cfedaf6cfdfb12ac01dbc7df0a68bda3a95788a7f95" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:10:27.688386 systemd[1]: Started cri-containerd-1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127.scope - libcontainer container 1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127.
Apr 20 15:10:29.088259 kubelet[2917]: I0420 15:10:29.079457 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b2a8f638-8000-49eb-8305-8b086a77e6a3-cni-plugin\") pod \"kube-flannel-ds-ntqbl\" (UID: \"b2a8f638-8000-49eb-8305-8b086a77e6a3\") " pod="kube-flannel/kube-flannel-ds-ntqbl"
Apr 20 15:10:29.102243 kubelet[2917]: I0420 15:10:29.092264 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g777r\" (UniqueName: \"kubernetes.io/projected/b2a8f638-8000-49eb-8305-8b086a77e6a3-kube-api-access-g777r\") pod \"kube-flannel-ds-ntqbl\" (UID: \"b2a8f638-8000-49eb-8305-8b086a77e6a3\") " pod="kube-flannel/kube-flannel-ds-ntqbl"
Apr 20 15:10:29.102243 kubelet[2917]: I0420 15:10:29.092549 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b2a8f638-8000-49eb-8305-8b086a77e6a3-run\") pod \"kube-flannel-ds-ntqbl\" (UID: \"b2a8f638-8000-49eb-8305-8b086a77e6a3\") " pod="kube-flannel/kube-flannel-ds-ntqbl"
Apr 20 15:10:29.102243 kubelet[2917]: I0420 15:10:29.092589 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2a8f638-8000-49eb-8305-8b086a77e6a3-xtables-lock\") pod \"kube-flannel-ds-ntqbl\" (UID: \"b2a8f638-8000-49eb-8305-8b086a77e6a3\") " pod="kube-flannel/kube-flannel-ds-ntqbl"
Apr 20 15:10:29.102243 kubelet[2917]: I0420 15:10:29.092623 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b2a8f638-8000-49eb-8305-8b086a77e6a3-flannel-cfg\") pod \"kube-flannel-ds-ntqbl\" (UID: \"b2a8f638-8000-49eb-8305-8b086a77e6a3\") " pod="kube-flannel/kube-flannel-ds-ntqbl"
Apr 20 15:10:29.102243 kubelet[2917]: I0420 15:10:29.092642 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b2a8f638-8000-49eb-8305-8b086a77e6a3-cni\") pod \"kube-flannel-ds-ntqbl\" (UID: \"b2a8f638-8000-49eb-8305-8b086a77e6a3\") " pod="kube-flannel/kube-flannel-ds-ntqbl"
Apr 20 15:10:29.127385 systemd[1]: Created slice kubepods-burstable-podb2a8f638_8000_49eb_8305_8b086a77e6a3.slice - libcontainer container kubepods-burstable-podb2a8f638_8000_49eb_8305_8b086a77e6a3.slice.
Apr 20 15:10:30.153938 containerd[1642]: time="2026-04-20T15:10:30.153658042Z" level=info msg="RunPodSandbox for name:\"kube-proxy-mgfpp\" uid:\"a4d2cb71-c123-4673-ac4c-f5f0cf3c12d7\" namespace:\"kube-system\" returns sandbox id \"1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127\""
Apr 20 15:10:30.371487 kubelet[2917]: E0420 15:10:30.366545 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:30.420253 kubelet[2917]: E0420 15:10:30.418207 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:30.762599 containerd[1642]: time="2026-04-20T15:10:30.743633165Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-ntqbl\" uid:\"b2a8f638-8000-49eb-8305-8b086a77e6a3\" namespace:\"kube-flannel\""
Apr 20 15:10:32.146321 containerd[1642]: time="2026-04-20T15:10:32.141891974Z" level=info msg="CreateContainer within sandbox \"1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127\" for container name:\"kube-proxy\""
Apr 20 15:10:32.287591 containerd[1642]: time="2026-04-20T15:10:32.284221382Z" level=info msg="connecting to shim c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015" address="unix:///run/containerd/s/817d2778abb48afc281b34606a92889f6464510c0d7c4733a5d89330f915baf7" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:10:32.430827 kubelet[2917]: E0420 15:10:32.393903 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.742s"
Apr 20 15:10:32.726963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811086260.mount: Deactivated successfully.
Apr 20 15:10:32.774556 sudo[1802]: pam_unix(sudo:session): session closed for user root
Apr 20 15:10:32.798631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760590158.mount: Deactivated successfully.
Apr 20 15:10:32.833492 sshd[1801]: Connection closed by 10.0.0.1 port 48504
Apr 20 15:10:32.947646 containerd[1642]: time="2026-04-20T15:10:32.909661318Z" level=info msg="Container 4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:10:32.923473 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Apr 20 15:10:33.099345 systemd[1]: sshd@4-3-10.0.0.13:22-10.0.0.1:48504.service: Deactivated successfully.
Apr 20 15:10:33.356506 systemd[1]: session-6.scope: Deactivated successfully.
Apr 20 15:10:33.358292 systemd[1]: session-6.scope: Consumed 37.282s CPU time, 221.8M memory peak.
Apr 20 15:10:33.593323 systemd-logind[1618]: Session 6 logged out. Waiting for processes to exit.
Apr 20 15:10:33.855261 systemd-logind[1618]: Removed session 6.
Apr 20 15:10:34.209800 containerd[1642]: time="2026-04-20T15:10:34.208637573Z" level=info msg="CreateContainer within sandbox \"1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127\" for name:\"kube-proxy\" returns container id \"4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f\""
Apr 20 15:10:34.247934 containerd[1642]: time="2026-04-20T15:10:34.247445579Z" level=info msg="StartContainer for \"4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f\""
Apr 20 15:10:34.318384 containerd[1642]: time="2026-04-20T15:10:34.317660082Z" level=info msg="connecting to shim 4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f" address="unix:///run/containerd/s/48eeadccd549d34c115d1cfedaf6cfdfb12ac01dbc7df0a68bda3a95788a7f95" protocol=ttrpc version=3
Apr 20 15:10:34.477614 systemd[1]: Started cri-containerd-c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015.scope - libcontainer container c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015.
Apr 20 15:10:36.228625 systemd[1]: Started cri-containerd-4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f.scope - libcontainer container 4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f.
Apr 20 15:10:36.757583 containerd[1642]: time="2026-04-20T15:10:36.715916221Z" level=error msg="get state for c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015" error="context deadline exceeded"
Apr 20 15:10:36.807168 containerd[1642]: time="2026-04-20T15:10:36.766899558Z" level=warning msg="unknown status" status=0
Apr 20 15:10:37.611189 containerd[1642]: time="2026-04-20T15:10:37.610487142Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 15:10:37.818362 containerd[1642]: time="2026-04-20T15:10:37.818221524Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-ntqbl\" uid:\"b2a8f638-8000-49eb-8305-8b086a77e6a3\" namespace:\"kube-flannel\" returns sandbox id \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\""
Apr 20 15:10:38.006505 kubelet[2917]: E0420 15:10:37.999923 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:38.036854 containerd[1642]: time="2026-04-20T15:10:38.036522544Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 20 15:10:39.339546 containerd[1642]: time="2026-04-20T15:10:39.337688811Z" level=info msg="StartContainer for \"4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f\" returns successfully"
Apr 20 15:10:40.227417 kubelet[2917]: E0420 15:10:40.227200 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:40.405659 kubelet[2917]: I0420 15:10:40.405457 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mgfpp" podStartSLOduration=27.405416339 podStartE2EDuration="27.405416339s" podCreationTimestamp="2026-04-20 15:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:10:40.405213025 +0000 UTC m=+40.214831356" watchObservedRunningTime="2026-04-20 15:10:40.405416339 +0000 UTC m=+40.215034663"
Apr 20 15:10:41.293555 kubelet[2917]: E0420 15:10:41.293373 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:41.620426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632201837.mount: Deactivated successfully.
Apr 20 15:10:41.841286 containerd[1642]: time="2026-04-20T15:10:41.840521411Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:10:41.851106 containerd[1642]: time="2026-04-20T15:10:41.844632010Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0"
Apr 20 15:10:41.854676 containerd[1642]: time="2026-04-20T15:10:41.854537849Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:10:41.881133 containerd[1642]: time="2026-04-20T15:10:41.880680707Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:10:41.886236 containerd[1642]: time="2026-04-20T15:10:41.885855126Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.846565807s"
Apr 20 15:10:41.886236 containerd[1642]: time="2026-04-20T15:10:41.886123927Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Apr 20 15:10:41.897873 containerd[1642]: time="2026-04-20T15:10:41.897679105Z" level=info msg="CreateContainer within sandbox \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\" for container name:\"install-cni-plugin\""
Apr 20 15:10:41.941203 containerd[1642]: time="2026-04-20T15:10:41.939650777Z" level=info msg="Container 565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:10:42.052299 containerd[1642]: time="2026-04-20T15:10:42.051557178Z" level=info msg="CreateContainer within sandbox \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\" for name:\"install-cni-plugin\" returns container id \"565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a\""
Apr 20 15:10:42.092205 containerd[1642]: time="2026-04-20T15:10:42.090500403Z" level=info msg="StartContainer for \"565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a\""
Apr 20 15:10:42.113451 containerd[1642]: time="2026-04-20T15:10:42.113161289Z" level=info msg="connecting to shim 565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a" address="unix:///run/containerd/s/817d2778abb48afc281b34606a92889f6464510c0d7c4733a5d89330f915baf7" protocol=ttrpc version=3
Apr 20 15:10:42.693376 systemd[1]: Started cri-containerd-565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a.scope - libcontainer container 565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a.
Apr 20 15:10:44.436642 systemd[1]: cri-containerd-565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a.scope: Deactivated successfully.
Apr 20 15:10:44.467828 containerd[1642]: time="2026-04-20T15:10:44.460551300Z" level=info msg="received container exit event container_id:\"565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a\" id:\"565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a\" pid:3229 exited_at:{seconds:1776697844 nanos:436445548}"
Apr 20 15:10:44.512146 containerd[1642]: time="2026-04-20T15:10:44.503473911Z" level=info msg="StartContainer for \"565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a\" returns successfully"
Apr 20 15:10:44.796107 kubelet[2917]: E0420 15:10:44.795333 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:45.017352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a-rootfs.mount: Deactivated successfully.
Apr 20 15:10:46.021407 kubelet[2917]: E0420 15:10:46.020899 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:10:46.239342 containerd[1642]: time="2026-04-20T15:10:46.233529552Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Apr 20 15:10:58.348478 kubelet[2917]: E0420 15:10:58.346810 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.654s"
Apr 20 15:11:02.638316 containerd[1642]: time="2026-04-20T15:11:02.637871248Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:11:02.647908 containerd[1642]: time="2026-04-20T15:11:02.647664570Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=20181428"
Apr 20 15:11:02.737451 containerd[1642]: time="2026-04-20T15:11:02.734920477Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:11:02.778304 containerd[1642]: time="2026-04-20T15:11:02.777221551Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:11:02.786253 containerd[1642]: time="2026-04-20T15:11:02.785306045Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 16.547815496s"
Apr 20 15:11:02.786253 containerd[1642]: time="2026-04-20T15:11:02.785686718Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 20 15:11:02.875109 containerd[1642]: time="2026-04-20T15:11:02.873397517Z" level=info msg="CreateContainer within sandbox \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\" for container name:\"install-cni\""
Apr 20 15:11:03.164429 containerd[1642]: time="2026-04-20T15:11:03.164288676Z" level=info msg="Container dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:11:03.204813 containerd[1642]: time="2026-04-20T15:11:03.204232642Z" level=info msg="CreateContainer within sandbox \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\" for name:\"install-cni\" returns container id \"dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381\""
Apr 20 15:11:03.214868 containerd[1642]: time="2026-04-20T15:11:03.210806220Z" level=info msg="StartContainer for \"dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381\""
Apr 20 15:11:03.249274 containerd[1642]: time="2026-04-20T15:11:03.247374851Z" level=info msg="connecting to shim dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381" address="unix:///run/containerd/s/817d2778abb48afc281b34606a92889f6464510c0d7c4733a5d89330f915baf7" protocol=ttrpc version=3
Apr 20 15:11:03.659485 systemd[1]: Started cri-containerd-dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381.scope - libcontainer container dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381.
Apr 20 15:11:04.102489 systemd[1]: cri-containerd-dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381.scope: Deactivated successfully.
Apr 20 15:11:04.139227 containerd[1642]: time="2026-04-20T15:11:04.138711215Z" level=info msg="received container exit event container_id:\"dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381\" id:\"dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381\" pid:3349 exited_at:{seconds:1776697864 nanos:100957386}"
Apr 20 15:11:04.164949 containerd[1642]: time="2026-04-20T15:11:04.163484820Z" level=info msg="StartContainer for \"dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381\" returns successfully"
Apr 20 15:11:04.259463 kubelet[2917]: I0420 15:11:04.257728 2917 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 20 15:11:04.502309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381-rootfs.mount: Deactivated successfully.
Apr 20 15:11:04.707867 kubelet[2917]: I0420 15:11:04.707420 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6d8c898-6367-4aff-b3fe-47cbcf746fbf-config-volume\") pod \"coredns-66bc5c9577-fbp9t\" (UID: \"d6d8c898-6367-4aff-b3fe-47cbcf746fbf\") " pod="kube-system/coredns-66bc5c9577-fbp9t"
Apr 20 15:11:04.707867 kubelet[2917]: I0420 15:11:04.707720 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9thsm\" (UniqueName: \"kubernetes.io/projected/d6d8c898-6367-4aff-b3fe-47cbcf746fbf-kube-api-access-9thsm\") pod \"coredns-66bc5c9577-fbp9t\" (UID: \"d6d8c898-6367-4aff-b3fe-47cbcf746fbf\") " pod="kube-system/coredns-66bc5c9577-fbp9t"
Apr 20 15:11:04.758260 systemd[1]: Created slice kubepods-burstable-podd6d8c898_6367_4aff_b3fe_47cbcf746fbf.slice - libcontainer container kubepods-burstable-podd6d8c898_6367_4aff_b3fe_47cbcf746fbf.slice.
Apr 20 15:11:04.820009 kubelet[2917]: I0420 15:11:04.819485 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/545c9a3c-7c66-478f-888e-d4f75a1ecb44-config-volume\") pod \"coredns-66bc5c9577-79fj6\" (UID: \"545c9a3c-7c66-478f-888e-d4f75a1ecb44\") " pod="kube-system/coredns-66bc5c9577-79fj6" Apr 20 15:11:04.820009 kubelet[2917]: I0420 15:11:04.819862 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlfs\" (UniqueName: \"kubernetes.io/projected/545c9a3c-7c66-478f-888e-d4f75a1ecb44-kube-api-access-bmlfs\") pod \"coredns-66bc5c9577-79fj6\" (UID: \"545c9a3c-7c66-478f-888e-d4f75a1ecb44\") " pod="kube-system/coredns-66bc5c9577-79fj6" Apr 20 15:11:04.844850 systemd[1]: Created slice kubepods-burstable-pod545c9a3c_7c66_478f_888e_d4f75a1ecb44.slice - libcontainer container kubepods-burstable-pod545c9a3c_7c66_478f_888e_d4f75a1ecb44.slice. 
Apr 20 15:11:05.084364 kubelet[2917]: E0420 15:11:05.083857 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:05.136414 kubelet[2917]: E0420 15:11:05.135915 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:05.139246 containerd[1642]: time="2026-04-20T15:11:05.137671339Z" level=info msg="CreateContainer within sandbox \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\" for container name:\"kube-flannel\"" Apr 20 15:11:05.139246 containerd[1642]: time="2026-04-20T15:11:05.138742832Z" level=info msg="RunPodSandbox for name:\"coredns-66bc5c9577-fbp9t\" uid:\"d6d8c898-6367-4aff-b3fe-47cbcf746fbf\" namespace:\"kube-system\"" Apr 20 15:11:05.227926 kubelet[2917]: E0420 15:11:05.227219 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:05.236483 containerd[1642]: time="2026-04-20T15:11:05.235293630Z" level=info msg="RunPodSandbox for name:\"coredns-66bc5c9577-79fj6\" uid:\"545c9a3c-7c66-478f-888e-d4f75a1ecb44\" namespace:\"kube-system\"" Apr 20 15:11:05.368181 containerd[1642]: time="2026-04-20T15:11:05.363354241Z" level=info msg="Container 0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:11:05.440191 containerd[1642]: time="2026-04-20T15:11:05.439599644Z" level=info msg="CreateContainer within sandbox \"c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015\" for name:\"kube-flannel\" returns container id \"0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187\"" Apr 20 15:11:05.655663 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3566724960.mount: Deactivated successfully. Apr 20 15:11:05.829377 containerd[1642]: time="2026-04-20T15:11:05.822474496Z" level=info msg="StartContainer for \"0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187\"" Apr 20 15:11:06.265448 containerd[1642]: time="2026-04-20T15:11:06.263434424Z" level=info msg="connecting to shim 0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187" address="unix:///run/containerd/s/817d2778abb48afc281b34606a92889f6464510c0d7c4733a5d89330f915baf7" protocol=ttrpc version=3 Apr 20 15:11:06.734757 systemd[1]: Started cri-containerd-0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187.scope - libcontainer container 0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187. Apr 20 15:11:06.755897 systemd[1]: run-netns-cni\x2d533272d2\x2d3c64\x2d343e\x2d189b\x2d094093219450.mount: Deactivated successfully. Apr 20 15:11:06.784474 containerd[1642]: time="2026-04-20T15:11:06.784266989Z" level=error msg="RunPodSandbox for name:\"coredns-66bc5c9577-79fj6\" uid:\"545c9a3c-7c66-478f-888e-d4f75a1ecb44\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b797938586a3079ad8f11ec437b53b92987c8a75d83c4a91871544bd2dd55042\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 15:11:06.791126 kubelet[2917]: E0420 15:11:06.789734 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b797938586a3079ad8f11ec437b53b92987c8a75d83c4a91871544bd2dd55042\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 15:11:06.793455 kubelet[2917]: E0420 15:11:06.791936 2917 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"b797938586a3079ad8f11ec437b53b92987c8a75d83c4a91871544bd2dd55042\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-79fj6" Apr 20 15:11:06.793455 kubelet[2917]: E0420 15:11:06.792231 2917 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b797938586a3079ad8f11ec437b53b92987c8a75d83c4a91871544bd2dd55042\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-79fj6" Apr 20 15:11:06.793455 kubelet[2917]: E0420 15:11:06.792354 2917 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-79fj6_kube-system(545c9a3c-7c66-478f-888e-d4f75a1ecb44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-79fj6_kube-system(545c9a3c-7c66-478f-888e-d4f75a1ecb44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b797938586a3079ad8f11ec437b53b92987c8a75d83c4a91871544bd2dd55042\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-79fj6" podUID="545c9a3c-7c66-478f-888e-d4f75a1ecb44" Apr 20 15:11:06.805948 systemd[1]: run-netns-cni\x2d4264e980\x2d4cc8\x2df2b2\x2d9256\x2d103083d80b28.mount: Deactivated successfully. 
Apr 20 15:11:06.847134 containerd[1642]: time="2026-04-20T15:11:06.842722011Z" level=error msg="RunPodSandbox for name:\"coredns-66bc5c9577-fbp9t\" uid:\"d6d8c898-6367-4aff-b3fe-47cbcf746fbf\" namespace:\"kube-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7727514e9e68de53ce25fb68abe099cddeead98c7c2b2be3ff160aadf88ccfd3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 15:11:06.856457 kubelet[2917]: E0420 15:11:06.856258 2917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7727514e9e68de53ce25fb68abe099cddeead98c7c2b2be3ff160aadf88ccfd3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 20 15:11:06.861156 kubelet[2917]: E0420 15:11:06.856631 2917 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7727514e9e68de53ce25fb68abe099cddeead98c7c2b2be3ff160aadf88ccfd3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-fbp9t" Apr 20 15:11:06.861156 kubelet[2917]: E0420 15:11:06.856681 2917 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7727514e9e68de53ce25fb68abe099cddeead98c7c2b2be3ff160aadf88ccfd3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-fbp9t" Apr 20 15:11:06.861156 kubelet[2917]: E0420 15:11:06.856789 2917 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-fbp9t_kube-system(d6d8c898-6367-4aff-b3fe-47cbcf746fbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fbp9t_kube-system(d6d8c898-6367-4aff-b3fe-47cbcf746fbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7727514e9e68de53ce25fb68abe099cddeead98c7c2b2be3ff160aadf88ccfd3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-fbp9t" podUID="d6d8c898-6367-4aff-b3fe-47cbcf746fbf" Apr 20 15:11:07.303211 containerd[1642]: time="2026-04-20T15:11:07.302260860Z" level=info msg="StartContainer for \"0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187\" returns successfully" Apr 20 15:11:08.523381 kubelet[2917]: E0420 15:11:08.522795 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:08.883199 kubelet[2917]: I0420 15:11:08.882464 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-ntqbl" podStartSLOduration=16.109915819 podStartE2EDuration="40.882373271s" podCreationTimestamp="2026-04-20 15:10:28 +0000 UTC" firstStartedPulling="2026-04-20 15:10:38.03117038 +0000 UTC m=+37.840788692" lastFinishedPulling="2026-04-20 15:11:02.803627826 +0000 UTC m=+62.613246144" observedRunningTime="2026-04-20 15:11:08.856226769 +0000 UTC m=+68.665845096" watchObservedRunningTime="2026-04-20 15:11:08.882373271 +0000 UTC m=+68.691991587" Apr 20 15:11:09.536754 kubelet[2917]: E0420 15:11:09.536389 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:10.244308 systemd-networkd[1429]: flannel.1: Link UP Apr 20 15:11:10.244316 systemd-networkd[1429]: 
flannel.1: Gained carrier Apr 20 15:11:11.404795 systemd-networkd[1429]: flannel.1: Gained IPv6LL Apr 20 15:11:19.656239 kubelet[2917]: E0420 15:11:19.655740 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:19.663966 kubelet[2917]: E0420 15:11:19.662402 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:19.685235 kubelet[2917]: E0420 15:11:19.682834 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:11:19.705957 containerd[1642]: time="2026-04-20T15:11:19.696896123Z" level=info msg="RunPodSandbox for name:\"coredns-66bc5c9577-fbp9t\" uid:\"d6d8c898-6367-4aff-b3fe-47cbcf746fbf\" namespace:\"kube-system\"" Apr 20 15:11:19.714766 containerd[1642]: time="2026-04-20T15:11:19.709756755Z" level=info msg="RunPodSandbox for name:\"coredns-66bc5c9577-79fj6\" uid:\"545c9a3c-7c66-478f-888e-d4f75a1ecb44\" namespace:\"kube-system\"" Apr 20 15:11:22.210416 kubelet[2917]: E0420 15:11:22.205824 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.555s" Apr 20 15:11:23.036725 systemd-networkd[1429]: cni0: Link UP Apr 20 15:11:23.036743 systemd-networkd[1429]: cni0: Gained carrier Apr 20 15:11:24.119903 systemd-networkd[1429]: cni0: Gained IPv6LL Apr 20 15:11:24.317360 systemd-networkd[1429]: veth50322bbe: Link UP Apr 20 15:11:24.653744 systemd-networkd[1429]: vethe7dea6bb: Link UP Apr 20 15:11:24.707326 systemd-networkd[1429]: cni0: Lost carrier Apr 20 15:11:25.200517 kernel: cni0: port 1(vethe7dea6bb) entered blocking state Apr 20 15:11:25.204255 kernel: cni0: port 1(vethe7dea6bb) entered disabled state Apr 
20 15:11:25.252957 kernel: vethe7dea6bb: entered allmulticast mode Apr 20 15:11:25.288271 kernel: vethe7dea6bb: entered promiscuous mode Apr 20 15:11:25.535628 kernel: cni0: port 2(veth50322bbe) entered blocking state Apr 20 15:11:25.610845 kernel: cni0: port 2(veth50322bbe) entered disabled state Apr 20 15:11:25.846875 kernel: veth50322bbe: entered allmulticast mode Apr 20 15:11:26.083182 kernel: veth50322bbe: entered promiscuous mode Apr 20 15:11:27.548181 kernel: cni0: port 1(vethe7dea6bb) entered blocking state Apr 20 15:11:27.574687 kernel: cni0: port 1(vethe7dea6bb) entered forwarding state Apr 20 15:11:27.555289 systemd-networkd[1429]: vethe7dea6bb: Gained carrier Apr 20 15:11:27.700887 systemd-networkd[1429]: cni0: Gained carrier Apr 20 15:11:28.712385 kubelet[2917]: E0420 15:11:28.712349 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.996s" Apr 20 15:11:28.864311 systemd-networkd[1429]: vethe7dea6bb: Gained IPv6LL Apr 20 15:11:28.896807 kernel: cni0: port 2(veth50322bbe) entered blocking state Apr 20 15:11:28.921652 kernel: cni0: port 2(veth50322bbe) entered forwarding state Apr 20 15:11:28.932574 systemd-networkd[1429]: veth50322bbe: Gained carrier Apr 20 15:11:28.954773 containerd[1642]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012930), "name":"cbr0", "type":"bridge"} Apr 20 15:11:28.954773 containerd[1642]: delegateAdd: netconf sent to delegate plugin: Apr 20 15:11:30.274570 containerd[1642]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Apr 20 15:11:30.274570 containerd[1642]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012930), "name":"cbr0", "type":"bridge"} Apr 20 15:11:30.274570 containerd[1642]: delegateAdd: netconf sent to delegate plugin: Apr 20 15:11:30.286575 kubelet[2917]: E0420 15:11:30.282701 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.54s" Apr 20 15:11:30.719809 systemd-networkd[1429]: veth50322bbe: Gained IPv6LL Apr 20 15:11:31.757380 containerd[1642]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-20T15:11:31.723857666Z" level=info msg="connecting to shim 638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca" address="unix:///run/containerd/s/bc89171def6499777c9969a2d1a4e9849d33c1bac108ddacdcd876088a63d836" namespace=k8s.io protocol=ttrpc version=3 Apr 20 15:11:32.208812 containerd[1642]: time="2026-04-20T15:11:32.207768949Z" level=info msg="connecting to shim e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6" 
address="unix:///run/containerd/s/fda5cd4badd32ba8a4cb2e58a96d43217c46133c54bca093b38c272eee0abcf4" namespace=k8s.io protocol=ttrpc version=3 Apr 20 15:11:48.443881 kubelet[2917]: E0420 15:11:47.428128 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 15:11:49.880747 systemd[1]: cri-containerd-1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa.scope: Deactivated successfully. Apr 20 15:11:49.928938 systemd[1]: cri-containerd-1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa.scope: Consumed 44.010s CPU time, 51.5M memory peak. Apr 20 15:11:52.042719 containerd[1642]: time="2026-04-20T15:11:52.041770033Z" level=info msg="received container exit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:11:52.424850 systemd[1]: Started cri-containerd-638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca.scope - libcontainer container 638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca. Apr 20 15:12:10.414946 systemd[1]: cri-containerd-d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6.scope: Deactivated successfully. Apr 20 15:12:10.773918 systemd[1]: cri-containerd-d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6.scope: Consumed 32.376s CPU time, 21.1M memory peak, 424K read from disk. 
Apr 20 15:12:18.339150 containerd[1642]: time="2026-04-20T15:12:18.335902980Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 20 15:12:21.440854 containerd[1642]: time="2026-04-20T15:12:20.968858916Z" level=error msg="ttrpc: received message on inactive stream" stream=47 Apr 20 15:12:21.581190 containerd[1642]: time="2026-04-20T15:12:21.492778858Z" level=info msg="received container exit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" Apr 20 15:12:21.752214 containerd[1642]: time="2026-04-20T15:12:21.402217986Z" level=error msg="failed to handle container TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" error="failed to stop container: context deadline exceeded" Apr 20 15:12:23.930963 containerd[1642]: time="2026-04-20T15:12:23.909837919Z" level=info msg="TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:12:29.729656 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 15:12:31.763629 containerd[1642]: time="2026-04-20T15:12:31.758445024Z" level=error msg="failed to handle container TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" error="failed to stop container: context deadline exceeded" Apr 20 15:12:32.903866 
containerd[1642]: time="2026-04-20T15:12:32.227629076Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 20 15:12:32.903866 containerd[1642]: time="2026-04-20T15:12:32.292327897Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 20 15:12:34.104752 kubelet[2917]: E0420 15:12:30.857218 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 15:12:35.024928 kubelet[2917]: I0420 15:12:34.907727 2917 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:36.801612 containerd[1642]: time="2026-04-20T15:12:36.797283511Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 20 15:12:37.500657 kubelet[2917]: I0420 15:12:37.138957 2917 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:37.798808 systemd[1752]: Created slice background.slice - User Background Tasks Slice. Apr 20 15:12:38.016585 systemd[1752]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... 
Apr 20 15:12:38.730752 containerd[1642]: time="2026-04-20T15:12:38.712897559Z" level=error msg="Failed to handle backOff event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251} for 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:12:40.404409 containerd[1642]: time="2026-04-20T15:12:40.394797045Z" level=info msg="TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" Apr 20 15:12:40.558579 systemd[1]: Started cri-containerd-e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6.scope - libcontainer container e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6. Apr 20 15:12:40.739810 systemd[1752]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Apr 20 15:12:41.377777 kubelet[2917]: I0420 15:12:41.373912 2917 reflector.go:571] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:42.551342 kubelet[2917]: I0420 15:12:42.550790 2917 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.522202 kubelet[2917]: E0420 15:12:42.818800 2917 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/kube-system/events\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a81955c4b424a6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:77735e41b4153281131387b55637c08c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.13:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:11:49.16071543 +0000 UTC m=+108.970333867,LastTimestamp:2026-04-20 15:11:49.16071543 +0000 UTC m=+108.970333867,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:12:44.647834 kubelet[2917]: I0420 15:12:42.857655 2917 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable 
to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.647834 kubelet[2917]: I0420 15:12:42.912948 2917 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.647834 kubelet[2917]: I0420 15:12:42.921952 2917 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.647834 kubelet[2917]: I0420 15:12:42.953855 2917 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.647834 kubelet[2917]: I0420 15:12:42.962763 2917 reflector.go:571] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.949128 kubelet[2917]: I0420 15:12:42.736767 2917 reflector.go:571] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 15:12:44.949128 kubelet[2917]: I0420 15:12:44.713647 2917 setters.go:543] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-20T15:12:44Z","lastTransitionTime":"2026-04-20T15:12:44Z","reason":"KubeletNotReady","message":"container runtime is down"} Apr 20 15:12:49.811441 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 15:12:50.123687 containerd[1642]: time="2026-04-20T15:12:49.904649886Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 20 15:12:50.123687 containerd[1642]: time="2026-04-20T15:12:50.034674562Z" level=error msg="ttrpc: received message on inactive stream" stream=59 Apr 20 15:12:50.123687 containerd[1642]: time="2026-04-20T15:12:50.024801717Z" level=error msg="Failed to handle backOff event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133} for d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:12:50.123687 containerd[1642]: time="2026-04-20T15:12:50.034855822Z" level=info msg="TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:12:52.561858 containerd[1642]: time="2026-04-20T15:12:52.549498740Z" level=error msg="get state for 638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca" error="context deadline exceeded" Apr 20 15:12:52.710815 containerd[1642]: time="2026-04-20T15:12:52.630336901Z" level=warning msg="unknown status" status=0 Apr 20 15:12:53.739675 kubelet[2917]: E0420 15:12:53.725513 2917 controller.go:195] "Failed to update lease" err="Put 
\"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 15:12:56.474686 kubelet[2917]: E0420 15:12:56.343284 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m21.685s" Apr 20 15:12:56.815759 kubelet[2917]: E0420 15:12:56.788728 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=550\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:12:56.952486 kubelet[2917]: E0420 15:12:56.946740 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.13:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=505\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Apr 20 15:12:57.134813 containerd[1642]: time="2026-04-20T15:12:57.021149004Z" level=error msg="ttrpc: received message on inactive stream" stream=11 Apr 20 15:13:00.155644 containerd[1642]: time="2026-04-20T15:13:00.152879130Z" level=error msg="Failed to handle backOff event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251} for 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:13:00.686909 containerd[1642]: time="2026-04-20T15:13:00.379390977Z" level=info msg="TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" 
id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" Apr 20 15:13:00.723245 kubelet[2917]: E0420 15:13:00.148687 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=557\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:13:01.216283 kubelet[2917]: E0420 15:12:59.434281 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=594\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:13:01.323390 containerd[1642]: time="2026-04-20T15:13:01.012364088Z" level=error msg="ttrpc: received message on inactive stream" stream=65 Apr 20 15:13:01.358362 kubelet[2917]: E0420 15:13:01.351614 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.13:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=581\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 20 15:13:01.407211 containerd[1642]: time="2026-04-20T15:13:01.228890086Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 20 15:13:02.009344 containerd[1642]: time="2026-04-20T15:13:02.003656214Z" level=info msg="RunPodSandbox for name:\"coredns-66bc5c9577-fbp9t\" uid:\"d6d8c898-6367-4aff-b3fe-47cbcf746fbf\" namespace:\"kube-system\" returns sandbox id \"638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca\"" Apr 20 15:13:02.019922 kubelet[2917]: E0420 15:13:01.597895 2917 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ConfigMap: Get \"https://10.0.0.13:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=581\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" Apr 20 15:13:02.238901 kubelet[2917]: E0420 15:13:01.636346 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:12:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:12:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:12:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:12:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-20T15:12:44Z\\\",\\\"message\\\":\\\"container runtime is down\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.13:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 15:13:02.526550 kubelet[2917]: E0420 15:13:02.345350 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:13:02.663255 kubelet[2917]: E0420 15:13:02.452587 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.13:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=581\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 15:13:03.131325 kubelet[2917]: E0420 15:13:03.130253 2917 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.13:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=581\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 15:13:04.946580 kubelet[2917]: E0420 15:13:04.945870 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 15:13:10.066225 kubelet[2917]: E0420 15:13:10.027305 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:13:11.008870 containerd[1642]: time="2026-04-20T15:13:10.992827799Z" level=error msg="Failed to handle backOff event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133} for d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:13:11.643288 containerd[1642]: time="2026-04-20T15:13:11.360937230Z" level=info msg="TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:13:11.840795 containerd[1642]: time="2026-04-20T15:13:11.837758207Z" level=error msg="ttrpc: 
received message on inactive stream" stream=65 Apr 20 15:13:12.661898 containerd[1642]: time="2026-04-20T15:13:12.417804877Z" level=error msg="ttrpc: received message on inactive stream" stream=69 Apr 20 15:13:14.601932 kubelet[2917]: E0420 15:13:14.579922 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 15:13:15.557383 kubelet[2917]: E0420 15:13:15.535800 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 15:13:16.456187 kubelet[2917]: I0420 15:13:16.430935 2917 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 20 15:13:18.331113 containerd[1642]: time="2026-04-20T15:13:18.326966172Z" level=info msg="RunPodSandbox for name:\"coredns-66bc5c9577-79fj6\" uid:\"545c9a3c-7c66-478f-888e-d4f75a1ecb44\" namespace:\"kube-system\" returns sandbox id \"e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6\"" Apr 20 15:13:21.538219 containerd[1642]: time="2026-04-20T15:13:21.450121967Z" level=error msg="Failed to handle backOff event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251} for 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:13:21.835831 containerd[1642]: time="2026-04-20T15:13:21.815872884Z" level=error msg="ttrpc: received message on inactive stream" stream=73 Apr 20 15:13:22.029404 containerd[1642]: 
time="2026-04-20T15:13:21.951430428Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 20 15:13:22.068665 containerd[1642]: time="2026-04-20T15:13:22.014541582Z" level=info msg="TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" Apr 20 15:13:27.531853 kubelet[2917]: E0420 15:13:27.517674 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:13:30.448421 kubelet[2917]: E0420 15:13:29.851834 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 15:13:31.914868 containerd[1642]: time="2026-04-20T15:13:31.901703980Z" level=error msg="Failed to handle backOff event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133} for d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:13:32.858867 containerd[1642]: time="2026-04-20T15:13:32.822765144Z" level=info msg="TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:13:33.257950 containerd[1642]: time="2026-04-20T15:13:32.990924270Z" level=error msg="ttrpc: received message on inactive stream" 
stream=75 Apr 20 15:13:33.474687 containerd[1642]: time="2026-04-20T15:13:33.363823623Z" level=error msg="ttrpc: received message on inactive stream" stream=79 Apr 20 15:13:34.450875 kubelet[2917]: E0420 15:13:34.359954 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Apr 20 15:13:42.390238 containerd[1642]: time="2026-04-20T15:13:42.377621683Z" level=error msg="Failed to handle backOff event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251} for 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:13:42.752647 containerd[1642]: time="2026-04-20T15:13:42.604385296Z" level=info msg="TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" Apr 20 15:13:43.139387 containerd[1642]: time="2026-04-20T15:13:43.054835940Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 20 15:13:43.234272 containerd[1642]: time="2026-04-20T15:13:43.134951239Z" level=error msg="ttrpc: received message on inactive stream" stream=85 Apr 20 15:13:45.339552 kubelet[2917]: E0420 15:13:44.709602 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 
15:13:47.193360 kubelet[2917]: E0420 15:13:47.144664 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 20 15:13:49.829830 kubelet[2917]: E0420 15:13:49.301735 2917 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a81955c4b424a6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:77735e41b4153281131387b55637c08c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.13:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:11:49.16071543 +0000 UTC m=+108.970333867,LastTimestamp:2026-04-20 15:11:49.16071543 +0000 UTC m=+108.970333867,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:13:52.215774 containerd[1642]: time="2026-04-20T15:13:52.211886354Z" level=info msg="CreateContainer within sandbox \"638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca\" for container name:\"coredns\"" Apr 20 15:13:52.460652 containerd[1642]: time="2026-04-20T15:13:52.423728282Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6 Apr 20 15:13:52.915741 containerd[1642]: time="2026-04-20T15:13:52.804486909Z" level=error msg="Failed to handle backOff event 
container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133} for d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 15:13:53.854957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6-rootfs.mount: Deactivated successfully. Apr 20 15:13:54.822958 containerd[1642]: time="2026-04-20T15:13:54.652926422Z" level=error msg="ttrpc: received message on inactive stream" stream=95 Apr 20 15:13:57.504412 kubelet[2917]: E0420 15:13:57.502262 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m1.016s" Apr 20 15:13:58.102721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242869216.mount: Deactivated successfully. Apr 20 15:13:58.243756 containerd[1642]: time="2026-04-20T15:13:58.139736488Z" level=info msg="Container fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:13:58.271372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389374615.mount: Deactivated successfully. 
Apr 20 15:13:58.892483 containerd[1642]: time="2026-04-20T15:13:58.886332026Z" level=info msg="TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:14:00.631659 containerd[1642]: time="2026-04-20T15:14:00.621285219Z" level=info msg="CreateContainer within sandbox \"e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6\" for container name:\"coredns\"" Apr 20 15:14:01.851472 containerd[1642]: time="2026-04-20T15:14:01.846143519Z" level=info msg="CreateContainer within sandbox \"638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca\" for name:\"coredns\" returns container id \"fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127\"" Apr 20 15:14:02.360846 kubelet[2917]: E0420 15:14:02.353465 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:14:03.150676 kubelet[2917]: E0420 15:14:03.137498 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:14:08.848829 containerd[1642]: time="2026-04-20T15:14:08.831516216Z" level=info msg="StartContainer for \"fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127\"" Apr 20 15:14:09.802548 containerd[1642]: time="2026-04-20T15:14:09.686837336Z" level=error msg="Failed to handle backOff event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251} for 1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa" error="failed to handle container TaskExit event: 
failed to stop container: context deadline exceeded" Apr 20 15:14:10.586799 containerd[1642]: time="2026-04-20T15:14:10.226063489Z" level=error msg="ttrpc: received message on inactive stream" stream=93 Apr 20 15:14:10.823152 containerd[1642]: time="2026-04-20T15:14:10.763751424Z" level=error msg="ttrpc: received message on inactive stream" stream=97 Apr 20 15:14:10.947914 containerd[1642]: time="2026-04-20T15:14:10.674531075Z" level=info msg="TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}" Apr 20 15:14:11.489931 containerd[1642]: time="2026-04-20T15:14:11.207930526Z" level=info msg="StopContainer for \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" with timeout 30 (s)" Apr 20 15:14:13.319938 containerd[1642]: time="2026-04-20T15:14:13.248930168Z" level=info msg="Stop container \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" with signal terminated" Apr 20 15:14:13.604737 containerd[1642]: time="2026-04-20T15:14:13.351766034Z" level=info msg="container event discarded" container=eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2 type=CONTAINER_CREATED_EVENT Apr 20 15:14:13.604737 containerd[1642]: time="2026-04-20T15:14:13.562790726Z" level=info msg="container event discarded" container=eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2 type=CONTAINER_STARTED_EVENT Apr 20 15:14:13.739509 containerd[1642]: time="2026-04-20T15:14:13.725204464Z" level=info msg="container event discarded" container=a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804 type=CONTAINER_CREATED_EVENT Apr 20 15:14:13.757392 containerd[1642]: time="2026-04-20T15:14:13.740370258Z" level=info msg="container event discarded" container=a96d38bea9473b0df39f660af3112a3fd5d742e9f25d5fb628bbeaff53e7a804 type=CONTAINER_STARTED_EVENT 
Apr 20 15:14:13.996277 containerd[1642]: time="2026-04-20T15:14:13.825929901Z" level=info msg="container event discarded" container=4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9 type=CONTAINER_CREATED_EVENT Apr 20 15:14:14.148852 containerd[1642]: time="2026-04-20T15:14:14.123443414Z" level=info msg="container event discarded" container=4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9 type=CONTAINER_STARTED_EVENT Apr 20 15:14:15.398358 containerd[1642]: time="2026-04-20T15:14:15.397851366Z" level=info msg="connecting to shim fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127" address="unix:///run/containerd/s/bc89171def6499777c9969a2d1a4e9849d33c1bac108ddacdcd876088a63d836" protocol=ttrpc version=3 Apr 20 15:14:15.839761 containerd[1642]: time="2026-04-20T15:14:15.709854359Z" level=info msg="container event discarded" container=d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6 type=CONTAINER_CREATED_EVENT Apr 20 15:14:16.303942 containerd[1642]: time="2026-04-20T15:14:16.153833013Z" level=info msg="container event discarded" container=1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa type=CONTAINER_CREATED_EVENT Apr 20 15:14:16.351819 containerd[1642]: time="2026-04-20T15:14:16.350865742Z" level=info msg="container event discarded" container=cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719 type=CONTAINER_CREATED_EVENT Apr 20 15:14:18.202543 containerd[1642]: time="2026-04-20T15:14:18.196097550Z" level=info msg="container event discarded" container=cd11ef12301d2105c049831098a7dddba828253f92714db70c18080199493719 type=CONTAINER_STARTED_EVENT Apr 20 15:14:18.650833 containerd[1642]: time="2026-04-20T15:14:18.650445592Z" level=info msg="container event discarded" container=1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa type=CONTAINER_STARTED_EVENT Apr 20 15:14:18.726947 containerd[1642]: time="2026-04-20T15:14:18.709491822Z" level=info msg="container event 
discarded" container=d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6 type=CONTAINER_STARTED_EVENT Apr 20 15:14:20.206864 kubelet[2917]: E0420 15:14:20.205244 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:14:22.049258 containerd[1642]: time="2026-04-20T15:14:21.623290786Z" level=error msg="Failed to handle backOff event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133} for d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 15:14:22.297509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3549092357.mount: Deactivated successfully. Apr 20 15:14:22.402874 containerd[1642]: time="2026-04-20T15:14:21.635543555Z" level=error msg="ttrpc: received message on inactive stream" stream=101 Apr 20 15:14:22.402874 containerd[1642]: time="2026-04-20T15:14:22.391306938Z" level=error msg="ttrpc: received message on inactive stream" stream=105 Apr 20 15:14:25.580149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027236402.mount: Deactivated successfully. 
Apr 20 15:14:25.927325 containerd[1642]: time="2026-04-20T15:14:25.644473379Z" level=info msg="Container f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:14:39.132362 containerd[1642]: time="2026-04-20T15:14:39.076595179Z" level=info msg="CreateContainer within sandbox \"e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6\" for name:\"coredns\" returns container id \"f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0\"" Apr 20 15:14:40.589166 kubelet[2917]: E0420 15:14:40.587543 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="43.084s" Apr 20 15:14:40.611233 containerd[1642]: time="2026-04-20T15:14:40.606870066Z" level=info msg="StartContainer for \"f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0\"" Apr 20 15:14:41.180599 containerd[1642]: time="2026-04-20T15:14:41.162917768Z" level=info msg="connecting to shim f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0" address="unix:///run/containerd/s/fda5cd4badd32ba8a4cb2e58a96d43217c46133c54bca093b38c272eee0abcf4" protocol=ttrpc version=3 Apr 20 15:14:41.508638 containerd[1642]: time="2026-04-20T15:14:41.504131426Z" level=info msg="StopContainer for \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" with timeout 30 (s)" Apr 20 15:14:41.665308 containerd[1642]: time="2026-04-20T15:14:41.648643703Z" level=info msg="Stop container \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" with signal terminated" Apr 20 15:14:42.369388 kubelet[2917]: E0420 15:14:42.369128 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.248s" Apr 20 15:14:42.848849 containerd[1642]: time="2026-04-20T15:14:42.830634014Z" level=info msg="TaskExit event container_id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" 
id:\"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" pid:2756 exit_status:1 exited_at:{seconds:1776697911 nanos:459779251}" Apr 20 15:14:44.938922 kubelet[2917]: E0420 15:14:44.938168 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:14:45.670681 systemd[1]: Started cri-containerd-fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127.scope - libcontainer container fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127. Apr 20 15:14:46.726040 kubelet[2917]: E0420 15:14:46.723199 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:14:46.970602 systemd[1]: Started cri-containerd-f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0.scope - libcontainer container f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0. Apr 20 15:14:48.360627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa-rootfs.mount: Deactivated successfully. 
Apr 20 15:14:48.644345 containerd[1642]: time="2026-04-20T15:14:48.639951977Z" level=error msg="get state for fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127" error="context deadline exceeded" Apr 20 15:14:48.663441 containerd[1642]: time="2026-04-20T15:14:48.652526834Z" level=warning msg="unknown status" status=0 Apr 20 15:14:48.723164 containerd[1642]: time="2026-04-20T15:14:48.715679000Z" level=info msg="StopContainer for \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" returns successfully" Apr 20 15:14:48.858411 containerd[1642]: time="2026-04-20T15:14:48.855741695Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 15:14:48.949415 kubelet[2917]: E0420 15:14:48.916854 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:14:49.750892 containerd[1642]: time="2026-04-20T15:14:49.748354349Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for container name:\"kube-controller-manager\" attempt:1" Apr 20 15:14:51.007288 containerd[1642]: time="2026-04-20T15:14:51.006784218Z" level=info msg="StartContainer for \"fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127\" returns successfully" Apr 20 15:14:51.164734 containerd[1642]: time="2026-04-20T15:14:51.163890907Z" level=info msg="StartContainer for \"f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0\" returns successfully" Apr 20 15:14:51.418446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219063075.mount: Deactivated successfully. 
Apr 20 15:14:51.456709 containerd[1642]: time="2026-04-20T15:14:51.425471877Z" level=info msg="Container 1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:14:52.092531 containerd[1642]: time="2026-04-20T15:14:52.089537227Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for name:\"kube-controller-manager\" attempt:1 returns container id \"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\""
Apr 20 15:14:52.214199 containerd[1642]: time="2026-04-20T15:14:52.212269132Z" level=info msg="StartContainer for \"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\""
Apr 20 15:14:52.413563 containerd[1642]: time="2026-04-20T15:14:52.413177182Z" level=info msg="connecting to shim 1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781" address="unix:///run/containerd/s/e3520cba3d47f695f01bdf9b9ffdbd7a5c1e7f5540952fe7464054819c712fa9" protocol=ttrpc version=3
Apr 20 15:14:52.598957 kubelet[2917]: E0420 15:14:52.597214 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:14:54.909368 containerd[1642]: time="2026-04-20T15:14:54.908809019Z" level=info msg="TaskExit event container_id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" id:\"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" pid:2758 exit_status:1 exited_at:{seconds:1776697937 nanos:944630133}"
Apr 20 15:14:56.025359 kubelet[2917]: E0420 15:14:56.025082 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:14:56.209216 systemd[1]: Started cri-containerd-1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781.scope - libcontainer container 1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781.
Apr 20 15:14:56.661742 kubelet[2917]: E0420 15:14:56.660456 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:14:56.960601 kubelet[2917]: E0420 15:14:56.929043 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.27s"
Apr 20 15:14:57.278655 kubelet[2917]: E0420 15:14:57.238534 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:14:57.476458 kubelet[2917]: I0420 15:14:57.464817 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-79fj6" podStartSLOduration=284.46478767 podStartE2EDuration="4m44.46478767s" podCreationTimestamp="2026-04-20 15:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:14:56.168667727 +0000 UTC m=+295.978286080" watchObservedRunningTime="2026-04-20 15:14:57.46478767 +0000 UTC m=+297.274405999"
Apr 20 15:14:58.055620 kubelet[2917]: E0420 15:14:58.054265 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s"
Apr 20 15:14:59.017081 kubelet[2917]: E0420 15:14:59.015619 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:14:59.066168 containerd[1642]: time="2026-04-20T15:14:59.063647071Z" level=error msg="get state for 1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781" error="context deadline exceeded"
Apr 20 15:14:59.092928 containerd[1642]: time="2026-04-20T15:14:59.078820533Z" level=warning msg="unknown status" status=0
Apr 20 15:14:59.093081 kubelet[2917]: E0420 15:14:59.092592 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:14:59.388574 containerd[1642]: time="2026-04-20T15:14:59.385437618Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 15:14:59.637076 kubelet[2917]: I0420 15:14:59.626641 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fbp9t" podStartSLOduration=279.62646091 podStartE2EDuration="4m39.62646091s" podCreationTimestamp="2026-04-20 15:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:14:57.505944814 +0000 UTC m=+297.315563171" watchObservedRunningTime="2026-04-20 15:14:59.62646091 +0000 UTC m=+299.436079228"
Apr 20 15:14:59.738224 containerd[1642]: time="2026-04-20T15:14:59.721789686Z" level=info msg="StopContainer for \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" returns successfully"
Apr 20 15:14:59.739601 kubelet[2917]: E0420 15:14:59.726403 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:00.146154 containerd[1642]: time="2026-04-20T15:15:00.127873279Z" level=info msg="CreateContainer within sandbox \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\" for container name:\"kube-scheduler\" attempt:1"
Apr 20 15:15:00.279150 containerd[1642]: time="2026-04-20T15:15:00.275512875Z" level=info msg="StartContainer for \"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" returns successfully"
Apr 20 15:15:00.692166 containerd[1642]: time="2026-04-20T15:15:00.664630986Z" level=info msg="Container 979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:15:01.063866 containerd[1642]: time="2026-04-20T15:15:01.046649737Z" level=info msg="CreateContainer within sandbox \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\" for name:\"kube-scheduler\" attempt:1 returns container id \"979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d\""
Apr 20 15:15:01.085412 kubelet[2917]: E0420 15:15:01.066318 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:01.110438 kubelet[2917]: E0420 15:15:01.086555 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:01.110438 kubelet[2917]: E0420 15:15:01.097885 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:01.134702 containerd[1642]: time="2026-04-20T15:15:01.085656047Z" level=info msg="StartContainer for \"979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d\""
Apr 20 15:15:01.152092 containerd[1642]: time="2026-04-20T15:15:01.145953002Z" level=info msg="connecting to shim 979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d" address="unix:///run/containerd/s/cf24567a77901a2a777df8cdc50d09c11b84c57a463a127a8f70d3c430a1db55" protocol=ttrpc version=3
Apr 20 15:15:02.993205 systemd[1]: Started cri-containerd-979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d.scope - libcontainer container 979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d.
Apr 20 15:15:03.317484 kubelet[2917]: E0420 15:15:03.264948 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:04.424811 kubelet[2917]: E0420 15:15:04.424694 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:05.725592 containerd[1642]: time="2026-04-20T15:15:05.715848542Z" level=info msg="StartContainer for \"979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d\" returns successfully"
Apr 20 15:15:07.050547 kubelet[2917]: E0420 15:15:07.046578 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:08.455835 kubelet[2917]: E0420 15:15:08.452813 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:09.615953 kubelet[2917]: E0420 15:15:09.614657 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:11.176564 kubelet[2917]: E0420 15:15:11.164322 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:11.673445 kubelet[2917]: E0420 15:15:11.672828 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:21.649557 kubelet[2917]: E0420 15:15:21.637872 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:21.921546 kubelet[2917]: E0420 15:15:21.918930 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:28.681711 kubelet[2917]: E0420 15:15:28.681246 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:29.703158 kubelet[2917]: E0420 15:15:29.702203 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.056s"
Apr 20 15:15:30.228596 containerd[1642]: time="2026-04-20T15:15:30.165359936Z" level=info msg="container event discarded" container=1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127 type=CONTAINER_CREATED_EVENT
Apr 20 15:15:30.228596 containerd[1642]: time="2026-04-20T15:15:30.225125158Z" level=info msg="container event discarded" container=1b7216a68340bfbcfd5000bb4ab4b4a82c4367aedc9ce390d0ad7722e67be127 type=CONTAINER_STARTED_EVENT
Apr 20 15:15:34.104582 containerd[1642]: time="2026-04-20T15:15:34.098574763Z" level=info msg="container event discarded" container=4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f type=CONTAINER_CREATED_EVENT
Apr 20 15:15:37.830954 containerd[1642]: time="2026-04-20T15:15:37.829344663Z" level=info msg="container event discarded" container=c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015 type=CONTAINER_CREATED_EVENT
Apr 20 15:15:37.857421 containerd[1642]: time="2026-04-20T15:15:37.831169594Z" level=info msg="container event discarded" container=c3c27a5048699b844683aabb34ef88b4257fbcf5686d3b3679f1523c96188015 type=CONTAINER_STARTED_EVENT
Apr 20 15:15:39.318661 containerd[1642]: time="2026-04-20T15:15:39.317557763Z" level=info msg="container event discarded" container=4fb28369ce685d2157806f1b37e02110c8e78d579ea01012b1c9d7b637645f0f type=CONTAINER_STARTED_EVENT
Apr 20 15:15:42.062815 containerd[1642]: time="2026-04-20T15:15:42.057650174Z" level=info msg="container event discarded" container=565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a type=CONTAINER_CREATED_EVENT
Apr 20 15:15:43.236667 kubelet[2917]: E0420 15:15:43.216676 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:15:44.543758 containerd[1642]: time="2026-04-20T15:15:44.501772746Z" level=info msg="container event discarded" container=565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a type=CONTAINER_STARTED_EVENT
Apr 20 15:15:45.194772 containerd[1642]: time="2026-04-20T15:15:45.190641728Z" level=info msg="container event discarded" container=565ae5b9e62d2eee028e0232febaa9678f23e9f566f35e83cc86f6ed0979909a type=CONTAINER_STOPPED_EVENT
Apr 20 15:15:45.300882 kubelet[2917]: E0420 15:15:45.297815 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.559s"
Apr 20 15:15:46.355669 kubelet[2917]: E0420 15:15:46.354717 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.056s"
Apr 20 15:15:49.278706 kubelet[2917]: E0420 15:15:49.263470 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.603s"
Apr 20 15:15:50.954762 kubelet[2917]: E0420 15:15:50.952638 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.673s"
Apr 20 15:15:54.245491 kubelet[2917]: E0420 15:15:54.220871 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.572s"
Apr 20 15:15:54.915702 kubelet[2917]: E0420 15:15:54.749607 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:15:58.734666 kubelet[2917]: E0420 15:15:58.728849 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.021s"
Apr 20 15:15:59.109424 systemd[1]: cri-containerd-1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781.scope: Deactivated successfully.
Apr 20 15:15:59.114031 systemd[1]: cri-containerd-1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781.scope: Consumed 18.527s CPU time, 28.3M memory peak.
Apr 20 15:15:59.440152 containerd[1642]: time="2026-04-20T15:15:59.385530282Z" level=info msg="received container exit event container_id:\"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" id:\"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" pid:4147 exit_status:1 exited_at:{seconds:1776698159 nanos:210721875}"
Apr 20 15:16:03.299547 containerd[1642]: time="2026-04-20T15:16:03.281632996Z" level=info msg="container event discarded" container=dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381 type=CONTAINER_CREATED_EVENT
Apr 20 15:16:04.845541 containerd[1642]: time="2026-04-20T15:16:04.816247965Z" level=info msg="container event discarded" container=dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381 type=CONTAINER_STARTED_EVENT
Apr 20 15:16:05.201086 kubelet[2917]: E0420 15:16:05.173421 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:16:05.396601 containerd[1642]: time="2026-04-20T15:16:05.334906044Z" level=info msg="container event discarded" container=dd687350be8d706ad44a90111169a30120d52d806af515b654d23db051f81381 type=CONTAINER_STOPPED_EVENT
Apr 20 15:16:05.573804 containerd[1642]: time="2026-04-20T15:16:05.564663874Z" level=info msg="container event discarded" container=0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187 type=CONTAINER_CREATED_EVENT
Apr 20 15:16:06.188852 kubelet[2917]: E0420 15:16:06.174855 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.523s"
Apr 20 15:16:07.320416 containerd[1642]: time="2026-04-20T15:16:07.315797858Z" level=info msg="container event discarded" container=0f8c375884a18d5d954a9dc4832c39304bd906164aa8efd2fb77fbec630ca187 type=CONTAINER_STARTED_EVENT
Apr 20 15:16:08.412089 kubelet[2917]: E0420 15:16:08.406785 2917 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 15:16:09.305862 systemd[1]: cri-containerd-979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d.scope: Deactivated successfully.
Apr 20 15:16:09.414872 systemd[1]: cri-containerd-979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d.scope: Consumed 16.232s CPU time, 18.9M memory peak.
Apr 20 15:16:09.620572 containerd[1642]: time="2026-04-20T15:16:09.592455213Z" level=error msg="failed to delete task" error="context deadline exceeded" id=1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781
Apr 20 15:16:09.745838 containerd[1642]: time="2026-04-20T15:16:09.745069902Z" level=error msg="failed to handle container TaskExit event container_id:\"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" id:\"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" pid:4147 exit_status:1 exited_at:{seconds:1776698159 nanos:210721875}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 15:16:09.763688 containerd[1642]: time="2026-04-20T15:16:09.762792698Z" level=info msg="received container exit event container_id:\"979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d\" id:\"979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d\" pid:4208 exit_status:1 exited_at:{seconds:1776698169 nanos:596351319}"
Apr 20 15:16:09.945766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781-rootfs.mount: Deactivated successfully.
Apr 20 15:16:10.113911 containerd[1642]: time="2026-04-20T15:16:09.998097405Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 20 15:16:10.450844 kubelet[2917]: E0420 15:16:10.446646 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.244s"
Apr 20 15:16:10.817406 kubelet[2917]: E0420 15:16:10.814229 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:10.849727 containerd[1642]: time="2026-04-20T15:16:10.826669254Z" level=info msg="TaskExit event container_id:\"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" id:\"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" pid:4147 exit_status:1 exited_at:{seconds:1776698159 nanos:210721875}"
Apr 20 15:16:11.796664 kubelet[2917]: E0420 15:16:11.796384 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.335s"
Apr 20 15:16:12.206673 kubelet[2917]: E0420 15:16:12.161936 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:12.733931 kubelet[2917]: E0420 15:16:12.730428 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:13.620377 kubelet[2917]: E0420 15:16:13.614567 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.593s"
Apr 20 15:16:17.547408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d-rootfs.mount: Deactivated successfully.
Apr 20 15:16:20.033589 kubelet[2917]: E0420 15:16:20.008938 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.39s"
Apr 20 15:16:20.642672 kubelet[2917]: E0420 15:16:20.640672 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:21.064837 kubelet[2917]: E0420 15:16:20.982877 2917 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice/cri-containerd-979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d.scope\": RecentStats: unable to find data in memory cache]"
Apr 20 15:16:21.209655 kubelet[2917]: E0420 15:16:21.206723 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.051s"
Apr 20 15:16:21.858316 kubelet[2917]: I0420 15:16:21.852839 2917 scope.go:117] "RemoveContainer" containerID="d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6"
Apr 20 15:16:21.947918 kubelet[2917]: I0420 15:16:21.892924 2917 scope.go:117] "RemoveContainer" containerID="979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d"
Apr 20 15:16:22.061884 kubelet[2917]: E0420 15:16:22.060442 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:22.092765 kubelet[2917]: I0420 15:16:22.085556 2917 scope.go:117] "RemoveContainer" containerID="1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781"
Apr 20 15:16:22.092765 kubelet[2917]: E0420 15:16:22.085709 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:22.336584 containerd[1642]: time="2026-04-20T15:16:22.335666718Z" level=info msg="RemoveContainer for \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\""
Apr 20 15:16:22.996903 containerd[1642]: time="2026-04-20T15:16:22.987682557Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for container name:\"kube-controller-manager\" attempt:2"
Apr 20 15:16:23.518907 containerd[1642]: time="2026-04-20T15:16:23.513713878Z" level=info msg="CreateContainer within sandbox \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\" for container name:\"kube-scheduler\" attempt:2"
Apr 20 15:16:24.713927 containerd[1642]: time="2026-04-20T15:16:24.710703935Z" level=info msg="RemoveContainer for \"d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6\" returns successfully"
Apr 20 15:16:25.139573 kubelet[2917]: I0420 15:16:25.139439 2917 scope.go:117] "RemoveContainer" containerID="1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa"
Apr 20 15:16:27.226716 containerd[1642]: time="2026-04-20T15:16:27.220947339Z" level=info msg="RemoveContainer for \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\""
Apr 20 15:16:27.822933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650252167.mount: Deactivated successfully.
Apr 20 15:16:28.011584 kubelet[2917]: E0420 15:16:27.993575 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.352s"
Apr 20 15:16:29.238263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127426264.mount: Deactivated successfully.
Apr 20 15:16:29.587668 containerd[1642]: time="2026-04-20T15:16:29.239224497Z" level=info msg="Container 6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:16:33.179795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount554695566.mount: Deactivated successfully.
Apr 20 15:16:33.386879 containerd[1642]: time="2026-04-20T15:16:33.200930127Z" level=info msg="Container a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:16:36.525609 kubelet[2917]: E0420 15:16:36.522571 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.529s"
Apr 20 15:16:37.815765 containerd[1642]: time="2026-04-20T15:16:37.814528034Z" level=info msg="CreateContainer within sandbox \"4f530dc06aa72949c49b41d18f2e8627314ffd67d443ed094bf5e3888c800fc9\" for name:\"kube-scheduler\" attempt:2 returns container id \"a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087\""
Apr 20 15:16:38.146417 containerd[1642]: time="2026-04-20T15:16:38.139561233Z" level=info msg="RemoveContainer for \"1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa\" returns successfully"
Apr 20 15:16:38.565605 containerd[1642]: time="2026-04-20T15:16:38.513731738Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for name:\"kube-controller-manager\" attempt:2 returns container id \"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\""
Apr 20 15:16:39.815668 containerd[1642]: time="2026-04-20T15:16:39.814599712Z" level=info msg="StartContainer for \"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\""
Apr 20 15:16:39.903870 containerd[1642]: time="2026-04-20T15:16:39.814594446Z" level=info msg="StartContainer for \"a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087\""
Apr 20 15:16:40.851282 containerd[1642]: time="2026-04-20T15:16:40.850618418Z" level=info msg="connecting to shim 6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38" address="unix:///run/containerd/s/e3520cba3d47f695f01bdf9b9ffdbd7a5c1e7f5540952fe7464054819c712fa9" protocol=ttrpc version=3
Apr 20 15:16:41.062791 containerd[1642]: time="2026-04-20T15:16:40.905653106Z" level=info msg="connecting to shim a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087" address="unix:///run/containerd/s/cf24567a77901a2a777df8cdc50d09c11b84c57a463a127a8f70d3c430a1db55" protocol=ttrpc version=3
Apr 20 15:16:47.738788 kubelet[2917]: E0420 15:16:47.738440 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.216s"
Apr 20 15:16:47.907206 kubelet[2917]: E0420 15:16:47.906734 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:48.784507 kubelet[2917]: E0420 15:16:48.774933 2917 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice/cri-containerd-979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d.scope\": RecentStats: unable to find data in memory cache]"
Apr 20 15:16:49.044389 systemd[1]: Started cri-containerd-6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38.scope - libcontainer container 6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38.
Apr 20 15:16:49.914629 kubelet[2917]: E0420 15:16:49.911696 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.269s"
Apr 20 15:16:50.827654 systemd[1]: Started cri-containerd-a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087.scope - libcontainer container a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087.
Apr 20 15:16:51.942662 containerd[1642]: time="2026-04-20T15:16:51.932890306Z" level=error msg="get state for 6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38" error="context deadline exceeded"
Apr 20 15:16:52.176454 containerd[1642]: time="2026-04-20T15:16:51.951937677Z" level=warning msg="unknown status" status=0
Apr 20 15:16:54.623040 containerd[1642]: time="2026-04-20T15:16:54.622452675Z" level=error msg="get state for 6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38" error="context deadline exceeded"
Apr 20 15:16:54.623040 containerd[1642]: time="2026-04-20T15:16:54.622789929Z" level=warning msg="unknown status" status=0
Apr 20 15:16:55.021602 kubelet[2917]: E0420 15:16:55.003823 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:16:56.671639 kubelet[2917]: E0420 15:16:56.669643 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.971s"
Apr 20 15:16:57.202867 containerd[1642]: time="2026-04-20T15:16:57.202071838Z" level=error msg="get state for 6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38" error="context deadline exceeded"
Apr 20 15:16:57.202867 containerd[1642]: time="2026-04-20T15:16:57.202506848Z" level=warning msg="unknown status" status=0
Apr 20 15:16:57.724312 containerd[1642]: time="2026-04-20T15:16:57.723282558Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 15:16:57.724312 containerd[1642]: time="2026-04-20T15:16:57.723583918Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 20 15:16:57.724312 containerd[1642]: time="2026-04-20T15:16:57.723596827Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 20 15:16:57.914697 kubelet[2917]: E0420 15:16:57.913794 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s"
Apr 20 15:17:00.149867 kubelet[2917]: E0420 15:17:00.142744 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.495s"
Apr 20 15:17:00.491491 containerd[1642]: time="2026-04-20T15:17:00.399683221Z" level=info msg="StartContainer for \"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\" returns successfully"
Apr 20 15:17:04.089158 containerd[1642]: time="2026-04-20T15:17:04.065881817Z" level=info msg="StartContainer for \"a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087\" returns successfully"
Apr 20 15:17:08.401810 kubelet[2917]: E0420 15:17:08.392310 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.654s"
Apr 20 15:17:10.622546 kubelet[2917]: E0420 15:17:10.620204 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.227s"
Apr 20 15:17:11.238603 kubelet[2917]: E0420 15:17:11.221862 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:17:12.627230 kubelet[2917]: E0420 15:17:12.624633 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:12.874654 kubelet[2917]: E0420 15:17:12.870626 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:13.733750 kubelet[2917]: E0420 15:17:13.704765 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.08s"
Apr 20 15:17:14.751881 kubelet[2917]: E0420 15:17:14.735760 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:15.819540 kubelet[2917]: E0420 15:17:15.816560 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.011s"
Apr 20 15:17:16.927729 kubelet[2917]: E0420 15:17:16.927171 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.032s"
Apr 20 15:17:17.511880 kubelet[2917]: E0420 15:17:17.504888 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:17.550386 kubelet[2917]: E0420 15:17:17.550075 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:18.340684 kubelet[2917]: E0420 15:17:18.338766 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.366s"
Apr 20 15:17:18.841612 kubelet[2917]: E0420 15:17:18.838760 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:20.753302 kubelet[2917]: E0420 15:17:20.742867 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:21.056825 kubelet[2917]: E0420 15:17:21.051784 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.3s"
Apr 20 15:17:21.760168 kubelet[2917]: E0420 15:17:21.758459 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:17:31.215769 kubelet[2917]: E0420 15:17:31.199855 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.128s"
Apr 20 15:17:32.248303 kubelet[2917]: E0420 15:17:32.243660 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:17:32.529328 kubelet[2917]: E0420 15:17:32.418420 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.194s"
Apr 20 15:17:33.158511 kubelet[2917]: E0420 15:17:33.151787 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:33.216677 kubelet[2917]: E0420 15:17:33.197471 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:33.810544 kubelet[2917]: E0420 15:17:33.810435 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.167s"
Apr 20 15:17:33.897727 kubelet[2917]: E0420 15:17:33.815529 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:35.727756 kubelet[2917]: E0420 15:17:35.724396 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:17:36.664551 kubelet[2917]: E0420 15:17:36.649880 2917 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 15:17:40.517926 kubelet[2917]: E0420 15:17:40.499927 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.811s"
Apr 20 15:17:41.953399 kubelet[2917]: E0420 15:17:41.952792 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.299s"
Apr 20 15:17:45.268790 kubelet[2917]: E0420 15:17:45.246196 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.587s"
Apr 20 15:17:49.880540 kubelet[2917]: E0420 15:17:49.878772 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.614s"
Apr 20 15:17:52.426168 kubelet[2917]: E0420 15:17:52.424869 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.531s"
Apr 20 15:17:55.747801 kubelet[2917]: E0420 15:17:55.666759 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.223s"
Apr 20 15:17:59.993795 kubelet[2917]: E0420 15:17:59.960949 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.951s"
Apr 20 15:18:01.728842 containerd[1642]: time="2026-04-20T15:18:01.722464583Z" level=info msg="container event discarded" container=638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca type=CONTAINER_CREATED_EVENT
Apr 20 15:18:02.065924 containerd[1642]: time="2026-04-20T15:18:02.019872182Z" level=info msg="container event discarded" container=638522c2450123b2103aa2125043a6e9c38225e442872526bb62184f0dd321ca type=CONTAINER_STARTED_EVENT
Apr 20 15:18:03.215829 update_engine[1620]: I20260420 15:18:03.161638 1620 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 20 15:18:03.626963 update_engine[1620]: I20260420 15:18:03.242740 1620 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 20 15:18:03.626963 update_engine[1620]: I20260420 15:18:03.514771 1620 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 20 15:18:03.762949 kubelet[2917]: E0420 15:18:03.708850 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:18:04.306913 update_engine[1620]: I20260420 15:18:04.290740 1620 omaha_request_params.cc:62] Current group set to alpha
Apr 20 15:18:04.476801 update_engine[1620]: I20260420 15:18:04.464957 1620 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 20 15:18:04.476801 update_engine[1620]: I20260420 15:18:04.473560 1620 update_attempter.cc:643] Scheduling an action processor start.
Apr 20 15:18:04.643469 update_engine[1620]: I20260420 15:18:04.481592 1620 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 15:18:04.643469 update_engine[1620]: I20260420 15:18:04.543296 1620 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 20 15:18:04.643469 update_engine[1620]: I20260420 15:18:04.596154 1620 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 15:18:04.643469 update_engine[1620]: I20260420 15:18:04.600311 1620 omaha_request_action.cc:272] Request:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]:
Apr 20 15:18:04.643469 update_engine[1620]: I20260420 15:18:04.609643 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 15:18:05.420471 update_engine[1620]: I20260420 15:18:05.337638 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 15:18:06.702731 update_engine[1620]: I20260420 15:18:06.549885 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 15:18:07.038828 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 20 15:18:07.171923 update_engine[1620]: E20260420 15:18:06.936232 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 15:18:07.171923 update_engine[1620]: I20260420 15:18:07.166589 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 20 15:18:13.520542 kubelet[2917]: E0420 15:18:13.498452 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.481s"
Apr 20 15:18:14.102588 kubelet[2917]: E0420 15:18:14.101814 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:14.122797 kubelet[2917]: E0420 15:18:14.103215 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:18:17.050545 update_engine[1620]: I20260420 15:18:17.013825 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 15:18:17.518476 update_engine[1620]: I20260420 15:18:17.139389 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 15:18:17.518476 update_engine[1620]: I20260420 15:18:17.413739 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 15:18:17.632383 update_engine[1620]: E20260420 15:18:17.556780 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 15:18:17.699682 update_engine[1620]: I20260420 15:18:17.608338 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 20 15:18:17.714829 kubelet[2917]: E0420 15:18:17.703385 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.078s"
Apr 20 15:18:18.297690 containerd[1642]: time="2026-04-20T15:18:18.286178313Z" level=info msg="container event discarded" container=e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6 type=CONTAINER_CREATED_EVENT
Apr 20 15:18:18.524533 kubelet[2917]: E0420 15:18:18.509263 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:18.654463 containerd[1642]: time="2026-04-20T15:18:18.620393708Z" level=info msg="container event discarded" container=e1636b2719891084cd503241c698d2ca4db00064473a624b76b04d47c726ebb6 type=CONTAINER_STARTED_EVENT
Apr 20 15:18:19.562594 kubelet[2917]: E0420 15:18:19.557743 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:24.236841 kubelet[2917]: E0420 15:18:24.236184 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.52s"
Apr 20 15:18:25.029948 kubelet[2917]: E0420 15:18:24.786480 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:18:25.907907 kubelet[2917]: E0420 15:18:25.906900 2917 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a65d021-f48b-4090-b751-a1de19bd3a32\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"100m\\\"},\\\"containerID\\\":\\\"containerd://a74ef3bf27347deaf68e4e64e65794e824acd52c28673e7900383ea8a4310087\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.34.7\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-20T15:16:09Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-20T15:15:05Z\\\"}},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-20T15:17:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}]}}\" for pod \"kube-system\"/\"kube-scheduler-localhost\": Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-scheduler-localhost"
Apr 20 15:18:28.012503 update_engine[1620]: I20260420 15:18:28.008488 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 15:18:28.020633 update_engine[1620]: I20260420 15:18:28.016103 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 15:18:28.025659 update_engine[1620]: I20260420 15:18:28.020524 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 15:18:28.076562 update_engine[1620]: E20260420 15:18:28.067932 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 15:18:28.159822 update_engine[1620]: I20260420 15:18:28.112712 1620 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 20 15:18:28.915696 kubelet[2917]: E0420 15:18:28.909783 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.636s"
Apr 20 15:18:29.400342 kubelet[2917]: E0420 15:18:29.359528 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:29.408401 kubelet[2917]: E0420 15:18:29.405597 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:30.301845 kubelet[2917]: E0420 15:18:30.294408 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.382s"
Apr 20 15:18:36.113407 kubelet[2917]: E0420 15:18:36.100744 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.359s"
Apr 20 15:18:36.475777 kubelet[2917]: E0420 15:18:36.034535 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:18:36.652749 kubelet[2917]: E0420 15:18:36.619790 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:18:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:18:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:18:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T15:18:24Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.13:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded"
Apr 20 15:18:38.056150 update_engine[1620]: I20260420 15:18:38.007835 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 15:18:38.410768 update_engine[1620]: I20260420 15:18:38.153371 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 15:18:38.410768 update_engine[1620]: I20260420 15:18:38.405579 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 15:18:38.449629 update_engine[1620]: E20260420 15:18:38.426564 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 15:18:38.519193 update_engine[1620]: I20260420 15:18:38.442894 1620 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 15:18:38.519193 update_engine[1620]: I20260420 15:18:38.517701 1620 omaha_request_action.cc:617] Omaha request response:
Apr 20 15:18:38.731311 update_engine[1620]: E20260420 15:18:38.555897 1620 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 20 15:18:38.731311 update_engine[1620]: I20260420 15:18:38.672787 1620 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 20 15:18:38.922356 update_engine[1620]: I20260420 15:18:38.727408 1620 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 15:18:38.922356 update_engine[1620]: I20260420 15:18:38.821084 1620 update_attempter.cc:306] Processing Done.
Apr 20 15:18:38.922356 update_engine[1620]: E20260420 15:18:38.849885 1620 update_attempter.cc:619] Update failed.
Apr 20 15:18:38.922356 update_engine[1620]: I20260420 15:18:38.910259 1620 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 20 15:18:38.922356 update_engine[1620]: I20260420 15:18:38.912941 1620 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:38.922628 1620 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:39.035853 1620 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:39.140843 1620 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:39.185195 1620 omaha_request_action.cc:272] Request:
Apr 20 15:18:39.379778 update_engine[1620]:
Apr 20 15:18:39.379778 update_engine[1620]:
Apr 20 15:18:39.379778 update_engine[1620]:
Apr 20 15:18:39.379778 update_engine[1620]:
Apr 20 15:18:39.379778 update_engine[1620]:
Apr 20 15:18:39.379778 update_engine[1620]:
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:39.194361 1620 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:39.205928 1620 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 15:18:39.379778 update_engine[1620]: I20260420 15:18:39.314898 1620 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 15:18:40.007396 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 20 15:18:40.007396 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 20 15:18:40.125145 update_engine[1620]: E20260420 15:18:39.459362 1620 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.556952 1620 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.604424 1620 omaha_request_action.cc:617] Omaha request response:
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.624402 1620 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.701922 1620 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.799239 1620 update_attempter.cc:306] Processing Done.
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.853178 1620 update_attempter.cc:310] Error event sent.
Apr 20 15:18:40.125145 update_engine[1620]: I20260420 15:18:39.899912 1620 update_check_scheduler.cc:74] Next update check in 46m33s
Apr 20 15:18:44.501807 kubelet[2917]: E0420 15:18:44.497638 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.338s"
Apr 20 15:18:46.299428 kubelet[2917]: E0420 15:18:46.294882 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:18:46.384868 kubelet[2917]: I0420 15:18:46.301807 2917 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 20 15:18:46.733550 kubelet[2917]: E0420 15:18:46.724844 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:18:47.588881 kubelet[2917]: E0420 15:18:47.554913 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.942s"
Apr 20 15:18:50.760573 kubelet[2917]: E0420 15:18:50.758733 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:51.318169 kubelet[2917]: E0420 15:18:51.317446 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:18:55.631179 kubelet[2917]: E0420 15:18:55.615502 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.015s"
Apr 20 15:18:56.334752 kubelet[2917]: E0420 15:18:56.326660 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms"
Apr 20 15:18:57.004507 kubelet[2917]: E0420 15:18:56.986373 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:18:57.627824 kubelet[2917]: E0420 15:18:57.626190 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.982s"
Apr 20 15:18:59.596482 kubelet[2917]: E0420 15:18:59.592874 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.928s"
Apr 20 15:19:00.522806 containerd[1642]: time="2026-04-20T15:19:00.491448067Z" level=info msg="container event discarded" container=fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127 type=CONTAINER_CREATED_EVENT
Apr 20 15:19:01.509509 kubelet[2917]: E0420 15:19:01.443908 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.849s"
Apr 20 15:19:03.051223 kubelet[2917]: E0420 15:19:03.042921 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.514s"
Apr 20 15:19:05.539709 kubelet[2917]: E0420 15:19:05.536297 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.479s"
Apr 20 15:19:06.602893 kubelet[2917]: E0420 15:19:06.589696 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms"
Apr 20 15:19:07.282787 kubelet[2917]: E0420 15:19:07.054466 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.501s"
Apr 20 15:19:07.340336 kubelet[2917]: E0420 15:19:07.339319 2917 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.13:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:19:08.613329 kubelet[2917]: E0420 15:19:08.564859 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.27s"
Apr 20 15:19:08.930283 kubelet[2917]: E0420 15:19:08.925474 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:19:10.127531 kubelet[2917]: E0420 15:19:10.122446 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.434s"
Apr 20 15:19:19.215232 kubelet[2917]: E0420 15:19:19.211616 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.453s"
Apr 20 15:19:20.360852 kubelet[2917]: E0420 15:19:20.360635 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.149s"
Apr 20 15:19:20.914411 kubelet[2917]: E0420 15:19:20.909935 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:19:23.665372 kubelet[2917]: E0420 15:19:23.661138 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.017s"
Apr 20 15:19:26.745225 kubelet[2917]: E0420 15:19:26.740806 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.041s"
Apr 20 15:19:27.985902 kubelet[2917]: E0420 15:19:27.983488 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.242s"
Apr 20 15:19:34.093431 kubelet[2917]: E0420 15:19:34.089395 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.262s"
Apr 20 15:19:34.730461 kubelet[2917]: E0420 15:19:34.726877 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:19:35.520958 containerd[1642]: time="2026-04-20T15:19:35.518237484Z" level=info msg="container event discarded" container=f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0 type=CONTAINER_CREATED_EVENT
Apr 20 15:19:37.386578 kubelet[2917]: E0420 15:19:37.379783 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:19:37.936703 kubelet[2917]: E0420 15:19:37.919447 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:19:40.359032 kubelet[2917]: E0420 15:19:40.357925 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.7s"
Apr 20 15:19:41.802089 kubelet[2917]: E0420 15:19:41.796159 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.152s"
Apr 20 15:19:44.379743 kubelet[2917]: E0420 15:19:44.375459 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.572s"
Apr 20 15:19:47.746300 systemd[1]: cri-containerd-6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38.scope: Deactivated successfully.
Apr 20 15:19:47.870950 systemd[1]: cri-containerd-6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38.scope: Consumed 36.748s CPU time, 24.4M memory peak, 4K read from disk.
Apr 20 15:19:48.343334 kubelet[2917]: E0420 15:19:48.331760 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.672s"
Apr 20 15:19:48.663387 containerd[1642]: time="2026-04-20T15:19:48.604824180Z" level=info msg="received container exit event container_id:\"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\" id:\"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\" pid:4525 exit_status:1 exited_at:{seconds:1776698388 nanos:382479130}"
Apr 20 15:19:48.971263 containerd[1642]: time="2026-04-20T15:19:48.710429892Z" level=info msg="container event discarded" container=1201ba8c08049438bb6bfd2aaf3ba5641c38e34c54e36ee4ab3b35aa0d37edaa type=CONTAINER_STOPPED_EVENT
Apr 20 15:19:50.578799 containerd[1642]: time="2026-04-20T15:19:50.562594674Z" level=info msg="container event discarded" container=f91a85e6df6e19ac178e83aed0a22a392885d1618d35895f49911b27079a2ac0 type=CONTAINER_STARTED_EVENT
Apr 20 15:19:50.897833 containerd[1642]: time="2026-04-20T15:19:50.885322912Z" level=info msg="container event discarded" container=fed840575fcd557f6235d523466c7b9fb4e29925e16ffa458edfd34063de0127 type=CONTAINER_STARTED_EVENT
Apr 20 15:19:51.837437 kubelet[2917]: E0420 15:19:51.835468 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.187s"
Apr 20 15:19:52.012800 containerd[1642]: time="2026-04-20T15:19:51.877820948Z" level=info msg="container event discarded" container=1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781 type=CONTAINER_CREATED_EVENT
Apr 20 15:19:52.308701 kubelet[2917]: E0420 15:19:52.280733 2917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 15:19:53.743500 kubelet[2917]: E0420 15:19:53.738060 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.873s"
Apr 20 15:19:57.126388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38-rootfs.mount: Deactivated successfully.
Apr 20 15:19:57.920459 kubelet[2917]: E0420 15:19:57.915467 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.165s"
Apr 20 15:19:58.414432 kubelet[2917]: E0420 15:19:58.329783 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:19:58.934928 kubelet[2917]: E0420 15:19:58.933213 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.015s"
Apr 20 15:19:59.740624 containerd[1642]: time="2026-04-20T15:19:59.717413270Z" level=info msg="container event discarded" container=d17439ea92908ce00a6e08af24eeaaaa4122b85561e2cedb708e273f0cd453f6 type=CONTAINER_STOPPED_EVENT
Apr 20 15:20:00.120269 containerd[1642]: time="2026-04-20T15:20:00.099856018Z" level=info msg="container event discarded" container=1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781 type=CONTAINER_STARTED_EVENT
Apr 20 15:20:00.363453 kubelet[2917]: E0420 15:20:00.359559 2917 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 15:20:00.431768 kubelet[2917]: E0420 15:20:00.373771 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.335s"
Apr 20 15:20:00.431768 kubelet[2917]: I0420 15:20:00.394828 2917 scope.go:117] "RemoveContainer" containerID="1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781"
Apr 20 15:20:00.603872 kubelet[2917]: I0420 15:20:00.602386 2917 scope.go:117] "RemoveContainer" containerID="6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38"
Apr 20 15:20:00.715057 kubelet[2917]: E0420 15:20:00.700622 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:20:01.104547 containerd[1642]: time="2026-04-20T15:20:01.066313179Z" level=info msg="container event discarded" container=979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d type=CONTAINER_CREATED_EVENT
Apr 20 15:20:02.000874 containerd[1642]: time="2026-04-20T15:20:01.996654943Z" level=info msg="RemoveContainer for \"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\""
Apr 20 15:20:03.188965 containerd[1642]: time="2026-04-20T15:20:03.184944321Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for container name:\"kube-controller-manager\" attempt:3"
Apr 20 15:20:04.555659 kubelet[2917]: E0420 15:20:04.548625 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.897s"
Apr 20 15:20:05.167384 containerd[1642]: time="2026-04-20T15:20:05.161289808Z" level=info msg="RemoveContainer for \"1e8025ffd2089908a1a85626ad997c765cb77a5dd815d186708c4cb41c207781\" returns successfully"
Apr 20 15:20:05.765665 containerd[1642]: time="2026-04-20T15:20:05.730430625Z" level=info msg="container event discarded" container=979f10babaefc83368e3896a99949c733c5b8a91dab6023472e01b976388b62d type=CONTAINER_STARTED_EVENT
Apr 20 15:20:09.006581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854394038.mount: Deactivated successfully.
Apr 20 15:20:09.030450 containerd[1642]: time="2026-04-20T15:20:09.016448855Z" level=info msg="Container c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:20:11.021459 containerd[1642]: time="2026-04-20T15:20:11.004618302Z" level=info msg="CreateContainer within sandbox \"eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2\" for name:\"kube-controller-manager\" attempt:3 returns container id \"c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4\""
Apr 20 15:20:11.381174 containerd[1642]: time="2026-04-20T15:20:11.343635842Z" level=info msg="StartContainer for \"c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4\""
Apr 20 15:20:12.523348 containerd[1642]: time="2026-04-20T15:20:12.520902579Z" level=info msg="connecting to shim c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4" address="unix:///run/containerd/s/e3520cba3d47f695f01bdf9b9ffdbd7a5c1e7f5540952fe7464054819c712fa9" protocol=ttrpc version=3
Apr 20 15:20:14.150455 kubelet[2917]: E0420 15:20:14.148782 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.062s"
Apr 20 15:20:20.120724 kubelet[2917]: I0420 15:20:20.109637 2917 scope.go:117] "RemoveContainer" containerID="6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38"
Apr 20 15:20:20.367424 kubelet[2917]: E0420 15:20:20.324473 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.479s"
Apr 20 15:20:20.927648 kubelet[2917]: E0420 15:20:20.823730 2917 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:20:21.885231 containerd[1642]: time="2026-04-20T15:20:21.882360225Z" level=info msg="RemoveContainer for \"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\""
Apr 20 15:20:21.923813 kubelet[2917]: E0420 15:20:21.922125 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.583s"
Apr 20 15:20:23.015520 systemd[1]: Started cri-containerd-c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4.scope - libcontainer container c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4.
Apr 20 15:20:23.844382 kubelet[2917]: E0420 15:20:23.840638 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.069s"
Apr 20 15:20:23.947757 containerd[1642]: time="2026-04-20T15:20:23.944662288Z" level=error msg="get state for eed665b6aafe01026e082818c6c537fdddeaac2672bd5a28fc44d252ce07eaa2" error="context deadline exceeded"
Apr 20 15:20:24.124404 containerd[1642]: time="2026-04-20T15:20:24.008668960Z" level=warning msg="unknown status" status=0
Apr 20 15:20:25.215609 containerd[1642]: time="2026-04-20T15:20:25.207582385Z" level=info msg="RemoveContainer for \"6c58d284647fa33060f7de1f0a6cd49700691fa2793bb80cba38dea106a1cd38\" returns successfully"
Apr 20 15:20:25.831395 containerd[1642]: time="2026-04-20T15:20:25.826590374Z" level=error msg="get state for c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4" error="context deadline exceeded"
Apr 20 15:20:25.897193 containerd[1642]: time="2026-04-20T15:20:25.840370199Z" level=warning msg="unknown status" status=0
Apr 20 15:20:28.465295 kubelet[2917]: E0420 15:20:28.461891 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.787s"
Apr 20 15:20:28.877628 containerd[1642]: time="2026-04-20T15:20:28.746749269Z" level=error msg="ttrpc: received message on inactive stream" stream=73
Apr 20 15:20:28.947555 containerd[1642]: time="2026-04-20T15:20:28.920590985Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 20 15:20:29.046693 containerd[1642]: time="2026-04-20T15:20:29.003781143Z" level=error msg="get state for c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4" error="context deadline exceeded"
Apr 20 15:20:29.046693 containerd[1642]: time="2026-04-20T15:20:29.004344554Z" level=warning msg="unknown status" status=0
Apr 20 15:20:29.046693 containerd[1642]: time="2026-04-20T15:20:29.039127025Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 20 15:20:31.054567 kubelet[2917]: E0420 15:20:31.049517 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.539s"
Apr 20 15:20:36.017334 kubelet[2917]: E0420 15:20:36.016504 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.895s"
Apr 20 15:20:38.599757 kubelet[2917]: E0420 15:20:38.593776 2917 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.449s"
Apr 20 15:20:39.429477 containerd[1642]: time="2026-04-20T15:20:39.428357046Z" level=info msg="StartContainer for \"c3940ebee6105d82bd109d6fa9202160aca80f351fcc975c66223d393b17c1a4\" returns successfully"