Apr 17 01:36:42.257436 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20251122 p4) 15.2.1 20251122, GNU ld (Gentoo 2.45.1 p1) 2.45.1) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 21:56:01 -00 2026
Apr 17 01:36:42.260129 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51a9d6679b489a6054ad9328c7381b5a0b29a8dc6ebbd9b773cac2ef7c32e2e2
Apr 17 01:36:42.260257 kernel: BIOS-provided physical RAM map:
Apr 17 01:36:42.260318 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 01:36:42.260380 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 01:36:42.260441 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 01:36:42.260552 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 01:36:42.260716 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 01:36:42.263015 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 17 01:36:42.263028 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 17 01:36:42.263043 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 17 01:36:42.277329 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 17 01:36:42.278280 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 17 01:36:42.278298 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 17 01:36:42.289542 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 17 01:36:42.289692 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 01:36:42.289701 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 17 01:36:42.289772 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 17 01:36:42.289781 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 17 01:36:42.289945 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 17 01:36:42.289953 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 17 01:36:42.289961 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 01:36:42.290031 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 17 01:36:42.290100 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 01:36:42.290109 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 17 01:36:42.290116 kernel: NX (Execute Disable) protection: active
Apr 17 01:36:42.290123 kernel: APIC: Static calls initialized
Apr 17 01:36:42.290192 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 17 01:36:42.290260 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 17 01:36:42.291713 kernel: extended physical RAM map:
Apr 17 01:36:42.291726 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 01:36:42.291733 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 01:36:42.291740 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 01:36:42.291749 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 01:36:42.291758 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 01:36:42.291765 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 17 01:36:42.291942 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 17 01:36:42.291951 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 17 01:36:42.291959 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 17 01:36:42.291968 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 17 01:36:42.292087 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 17 01:36:42.292148 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 17 01:36:42.292160 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 17 01:36:42.292227 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 17 01:36:42.292291 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 17 01:36:42.292300 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 17 01:36:42.292309 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 01:36:42.292317 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 17 01:36:42.292326 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 17 01:36:42.292336 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 17 01:36:42.292345 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 17 01:36:42.292410 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 17 01:36:42.292475 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 01:36:42.292485 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 17 01:36:42.295770 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 01:36:42.295990 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 17 01:36:42.296000 kernel: efi: EFI v2.7 by EDK II
Apr 17 01:36:42.296111 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 17 01:36:42.296173 kernel: random: crng init done
Apr 17 01:36:42.296183 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 17 01:36:42.296192 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 17 01:36:42.296201 kernel: secureboot: Secure boot disabled
Apr 17 01:36:42.296210 kernel: SMBIOS 2.8 present.
Apr 17 01:36:42.296281 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 17 01:36:42.296290 kernel: DMI: Memory slots populated: 1/1
Apr 17 01:36:42.296354 kernel: Hypervisor detected: KVM
Apr 17 01:36:42.301591 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 17 01:36:42.303508 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 01:36:42.303520 kernel: kvm-clock: using sched offset of 15970644017 cycles
Apr 17 01:36:42.303532 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 01:36:42.303544 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 01:36:42.303555 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 01:36:42.303566 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 01:36:42.303721 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 17 01:36:42.303733 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 01:36:42.303744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 01:36:42.303755 kernel: Using GB pages for direct mapping
Apr 17 01:36:42.303766 kernel: ACPI: Early table checksum verification disabled
Apr 17 01:36:42.304021 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 01:36:42.304093 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 01:36:42.304104 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 01:36:42.304171 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 01:36:42.304182 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 01:36:42.314392 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 01:36:42.314410 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 01:36:42.314585 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 01:36:42.314595 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 01:36:42.314605 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 01:36:42.315311 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 01:36:42.315323 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 01:36:42.315333 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 01:36:42.315343 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 01:36:42.315352 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 01:36:42.315363 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 01:36:42.315373 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 01:36:42.315449 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 01:36:42.315460 kernel: No NUMA configuration found
Apr 17 01:36:42.315471 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 17 01:36:42.315482 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 17 01:36:42.315492 kernel: Zone ranges:
Apr 17 01:36:42.315502 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 01:36:42.315512 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 17 01:36:42.315585 kernel: Normal empty
Apr 17 01:36:42.315596 kernel: Device empty
Apr 17 01:36:42.315718 kernel: Movable zone start for each node
Apr 17 01:36:42.315730 kernel: Early memory node ranges
Apr 17 01:36:42.315741 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 01:36:42.315751 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 01:36:42.315761 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 01:36:42.315774 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 17 01:36:42.315952 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 17 01:36:42.315964 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 17 01:36:42.315974 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 17 01:36:42.315986 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 17 01:36:42.315997 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 17 01:36:42.316007 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 01:36:42.316130 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 01:36:42.316287 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 01:36:42.316553 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 01:36:42.317231 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 17 01:36:42.317300 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 17 01:36:42.317310 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 17 01:36:42.317320 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 17 01:36:42.317378 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 17 01:36:42.317390 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 01:36:42.317400 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 01:36:42.317567 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 01:36:42.317578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 01:36:42.317588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 01:36:42.317598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 01:36:42.335290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 01:36:42.335447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 01:36:42.335509 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 01:36:42.335523 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 01:36:42.335586 kernel: TSC deadline timer available
Apr 17 01:36:42.335597 kernel: CPU topo: Max. logical packages: 1
Apr 17 01:36:42.335608 kernel: CPU topo: Max. logical dies: 1
Apr 17 01:36:42.349460 kernel: CPU topo: Max. dies per package: 1
Apr 17 01:36:42.349547 kernel: CPU topo: Max. threads per core: 1
Apr 17 01:36:42.349608 kernel: CPU topo: Num. cores per package: 4
Apr 17 01:36:42.357223 kernel: CPU topo: Num. threads per package: 4
Apr 17 01:36:42.357254 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 17 01:36:42.357267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 01:36:42.357277 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 01:36:42.357287 kernel: kvm-guest: setup PV sched yield
Apr 17 01:36:42.360745 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 17 01:36:42.360816 kernel: Booting paravirtualized kernel on KVM
Apr 17 01:36:42.360831 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 01:36:42.374816 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 01:36:42.374958 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 17 01:36:42.375026 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 17 01:36:42.375037 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 01:36:42.375122 kernel: kvm-guest: PV spinlocks enabled
Apr 17 01:36:42.375133 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 01:36:42.375147 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51a9d6679b489a6054ad9328c7381b5a0b29a8dc6ebbd9b773cac2ef7c32e2e2
Apr 17 01:36:42.375158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 01:36:42.375168 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 01:36:42.375177 kernel: Fallback order for Node 0: 0
Apr 17 01:36:42.375246 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 17 01:36:42.375257 kernel: Policy zone: DMA32
Apr 17 01:36:42.375267 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 01:36:42.375278 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 01:36:42.375289 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 17 01:36:42.375300 kernel: ftrace: allocated 158 pages with 5 groups
Apr 17 01:36:42.375311 kernel: Dynamic Preempt: voluntary
Apr 17 01:36:42.375322 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 01:36:42.375396 kernel: rcu: RCU event tracing is enabled.
Apr 17 01:36:42.375407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 01:36:42.375418 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 01:36:42.375429 kernel: Rude variant of Tasks RCU enabled.
Apr 17 01:36:42.375439 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 01:36:42.375449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 01:36:42.375459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 01:36:42.375526 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 01:36:42.375587 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 01:36:42.375598 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 01:36:42.375610 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 01:36:42.375684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 01:36:42.375697 kernel: Console: colour dummy device 80x25
Apr 17 01:36:42.375707 kernel: printk: legacy console [ttyS0] enabled
Apr 17 01:36:42.377201 kernel: ACPI: Core revision 20240827
Apr 17 01:36:42.377215 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 01:36:42.377226 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 01:36:42.377237 kernel: x2apic enabled
Apr 17 01:36:42.377248 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 01:36:42.377259 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 01:36:42.377270 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 01:36:42.379761 kernel: kvm-guest: setup PV IPIs
Apr 17 01:36:42.380017 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 01:36:42.380032 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 01:36:42.380044 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 01:36:42.380054 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 01:36:42.380065 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 01:36:42.380075 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 01:36:42.380199 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 01:36:42.380210 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 01:36:42.380221 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 01:36:42.380285 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 01:36:42.380297 kernel: RETBleed: Vulnerable
Apr 17 01:36:42.380307 kernel: Speculative Store Bypass: Vulnerable
Apr 17 01:36:42.380319 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 01:36:42.380387 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 01:36:42.380398 kernel: active return thunk: its_return_thunk
Apr 17 01:36:42.380410 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 01:36:42.380422 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 01:36:42.380434 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 01:36:42.380445 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 01:36:42.380456 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 01:36:42.380524 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 01:36:42.380535 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 01:36:42.380545 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 01:36:42.380555 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 01:36:42.380565 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 01:36:42.380576 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 01:36:42.380587 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 01:36:42.380712 kernel: Freeing SMP alternatives memory: 32K
Apr 17 01:36:42.380722 kernel: pid_max: default: 32768 minimum: 301
Apr 17 01:36:42.380731 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 17 01:36:42.380740 kernel: landlock: Up and running.
Apr 17 01:36:42.380750 kernel: SELinux: Initializing.
Apr 17 01:36:42.380759 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 01:36:42.380768 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 01:36:42.380940 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 01:36:42.380956 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 01:36:42.380966 kernel: signal: max sigframe size: 3632
Apr 17 01:36:42.380975 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 01:36:42.380987 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 01:36:42.380997 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 17 01:36:42.381007 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 01:36:42.381083 kernel: smp: Bringing up secondary CPUs ...
Apr 17 01:36:42.381095 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 01:36:42.381106 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 01:36:42.381117 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 01:36:42.384076 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 01:36:42.384190 kernel: Memory: 2399272K/2565800K available (14336K kernel code, 2458K rwdata, 31688K rodata, 15924K init, 2304K bss, 160636K reserved, 0K cma-reserved)
Apr 17 01:36:42.384202 kernel: devtmpfs: initialized
Apr 17 01:36:42.384343 kernel: x86/mm: Memory block size: 128MB
Apr 17 01:36:42.384355 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 17 01:36:42.384366 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 17 01:36:42.384377 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 17 01:36:42.384388 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 17 01:36:42.384399 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 17 01:36:42.384409 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 17 01:36:42.384481 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 01:36:42.384493 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 01:36:42.384504 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 01:36:42.384514 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 01:36:42.384524 kernel: audit: initializing netlink subsys (disabled)
Apr 17 01:36:42.384535 kernel: audit: type=2000 audit(1776389760.513:1): state=initialized audit_enabled=0 res=1
Apr 17 01:36:42.384546 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 01:36:42.384615 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 01:36:42.384684 kernel: cpuidle: using governor menu
Apr 17 01:36:42.384693 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 17 01:36:42.384704 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 01:36:42.384714 kernel: dca service started, version 1.12.1
Apr 17 01:36:42.384722 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 17 01:36:42.384732 kernel: PCI: Using configuration type 1 for base access
Apr 17 01:36:42.392434 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 01:36:42.392524 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 01:36:42.392536 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 01:36:42.392547 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 01:36:42.392559 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 01:36:42.392571 kernel: ACPI: Added _OSI(Module Device)
Apr 17 01:36:42.392584 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 01:36:42.392595 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 01:36:42.392719 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 01:36:42.403376 kernel: ACPI: Interpreter enabled
Apr 17 01:36:42.403422 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 01:36:42.403434 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 01:36:42.403445 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 01:36:42.403456 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 01:36:42.403466 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 01:36:42.403589 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 01:36:42.418215 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 01:36:42.423369 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 01:36:42.423781 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 01:36:42.423801 kernel: PCI host bridge to bus 0000:00
Apr 17 01:36:42.425397 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 01:36:42.425583 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 01:36:42.426411 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 01:36:42.430761 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 17 01:36:42.432332 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 17 01:36:42.432707 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 17 01:36:42.432985 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 01:36:42.433197 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 17 01:36:42.433327 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 17 01:36:42.433470 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 17 01:36:42.433569 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 17 01:36:42.433807 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 17 01:36:42.434015 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 01:36:42.440196 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 17 01:36:42.440345 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 17 01:36:42.440487 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 17 01:36:42.443524 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 17 01:36:42.444591 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 17 01:36:42.493812 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 17 01:36:42.497274 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 17 01:36:42.497446 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 17 01:36:42.497610 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 17 01:36:42.502787 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 17 01:36:42.503115 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 17 01:36:42.503287 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 17 01:36:42.503442 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 17 01:36:42.503610 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 17 01:36:42.505160 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 01:36:42.510739 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 17 01:36:42.529145 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 17 01:36:42.529498 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 17 01:36:42.546203 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 17 01:36:42.552114 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 17 01:36:42.552161 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 01:36:42.552289 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 01:36:42.552299 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 01:36:42.552311 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 01:36:42.552322 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 01:36:42.552332 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 01:36:42.552341 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 01:36:42.552351 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 01:36:42.554711 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 01:36:42.554758 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 01:36:42.554771 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 01:36:42.554782 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 01:36:42.554793 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 01:36:42.554803 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 01:36:42.554813 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 01:36:42.573715 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 01:36:42.573771 kernel: iommu: Default domain type: Translated
Apr 17 01:36:42.573784 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 01:36:42.573795 kernel: efivars: Registered efivars operations
Apr 17 01:36:42.573805 kernel: PCI: Using ACPI for IRQ routing
Apr 17 01:36:42.573816 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 01:36:42.573825 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 01:36:42.578610 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 17 01:36:42.582298 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 17 01:36:42.582316 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 17 01:36:42.582328 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 17 01:36:42.582341 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 17 01:36:42.582353 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 17 01:36:42.582365 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 17 01:36:42.588416 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 01:36:42.590481 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 01:36:42.591501 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 01:36:42.591538 kernel: vgaarb: loaded
Apr 17 01:36:42.591546 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 01:36:42.591553 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 01:36:42.591560 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 01:36:42.591720 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 01:36:42.591727 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 01:36:42.591734 kernel: pnp: PnP ACPI init
Apr 17 01:36:42.591988 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 17 01:36:42.592000 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 01:36:42.592008 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 01:36:42.598354 kernel: NET: Registered PF_INET protocol family
Apr 17 01:36:42.599754 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 01:36:42.599762 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 01:36:42.599769 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 01:36:42.599776 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 01:36:42.599783 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 01:36:42.599789 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 01:36:42.600437 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 01:36:42.600460 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 01:36:42.600467 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 01:36:42.600474 kernel: NET: Registered PF_XDP protocol family
Apr 17 01:36:42.601771 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 17 01:36:42.607555 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 17 01:36:42.625050 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 01:36:42.625441 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 01:36:42.625590 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 01:36:42.660818 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 17 01:36:42.665557 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 17 01:36:42.666043 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 17 01:36:42.666065 kernel: PCI: CLS 0 bytes, default 64
Apr 17 01:36:42.666203 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 01:36:42.666214 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 01:36:42.666225 kernel: Initialise system trusted keyrings
Apr 17 01:36:42.666301 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 01:36:42.666369 kernel: Key type asymmetric registered
Apr 17 01:36:42.666380 kernel: Asymmetric key parser 'x509' registered
Apr 17 01:36:42.666390 kernel: hrtimer: interrupt took 6608075 ns
Apr 17 01:36:42.666401 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 01:36:42.666412 kernel: io scheduler mq-deadline registered
Apr 17 01:36:42.666422 kernel: io scheduler kyber registered
Apr 17 01:36:42.666433 kernel: io scheduler bfq registered
Apr 17 01:36:42.666443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 01:36:42.666519 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 01:36:42.666530 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 01:36:42.666540 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 01:36:42.666550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 01:36:42.666560 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 01:36:42.666571 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 01:36:42.666581 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 01:36:42.666713 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 01:36:42.667025 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 01:36:42.667044 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 01:36:42.667186 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 01:36:42.668779 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T01:36:14 UTC (1776389774)
Apr 17 01:36:42.684068 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 01:36:42.684190 kernel: intel_pstate: CPU model not supported
Apr 17 01:36:42.684201 kernel: efifb: probing for efifb
Apr 17 01:36:42.684211 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 17 01:36:42.684222 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 17 01:36:42.684231 kernel: efifb: scrolling: redraw
Apr 17 01:36:42.684240 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 01:36:42.684251 kernel: Console: switching to colour frame buffer device 160x50
Apr 17 01:36:42.684330 kernel: fb0: EFI VGA frame buffer device
Apr 17 01:36:42.684342 kernel: pstore: Using crash dump compression: deflate
Apr 17 01:36:42.684352 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 01:36:42.684363 kernel: NET: Registered PF_INET6 protocol family
Apr 17 01:36:42.684372 kernel: Segment Routing with IPv6
Apr 17 01:36:42.684381 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 01:36:42.684391 kernel: NET: Registered PF_PACKET protocol family
Apr 17 01:36:42.684525 kernel: Key type dns_resolver registered
Apr 17 01:36:42.684537 kernel: IPI shorthand broadcast: enabled
Apr 17 01:36:42.684549 kernel: sched_clock: Marking stable (14376049538, 1647012881)->(17407498926, -1384436507)
Apr 17 01:36:42.684560 kernel: registered taskstats version 1
Apr 17 01:36:42.684571 kernel: Loading compiled-in X.509 certificates
Apr 17 01:36:42.684580 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing
key for 6.12.81-flatcar: 4d342f13be37d8ac792d0af3a46fe016bcb54fe1' Apr 17 01:36:42.684589 kernel: Demotion targets for Node 0: null Apr 17 01:36:42.684600 kernel: Key type .fscrypt registered Apr 17 01:36:42.687542 kernel: Key type fscrypt-provisioning registered Apr 17 01:36:42.687554 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 01:36:42.687566 kernel: ima: Allocated hash algorithm: sha1 Apr 17 01:36:42.687577 kernel: ima: No architecture policies found Apr 17 01:36:42.687588 kernel: clk: Disabling unused clocks Apr 17 01:36:42.687600 kernel: Freeing unused kernel image (initmem) memory: 15924K Apr 17 01:36:42.687610 kernel: Write protecting the kernel read-only data: 47104k Apr 17 01:36:42.693005 kernel: Freeing unused kernel image (rodata/data gap) memory: 1080K Apr 17 01:36:42.693022 kernel: Run /init as init process Apr 17 01:36:42.693033 kernel: with arguments: Apr 17 01:36:42.693044 kernel: /init Apr 17 01:36:42.693054 kernel: with environment: Apr 17 01:36:42.693064 kernel: HOME=/ Apr 17 01:36:42.693075 kernel: TERM=linux Apr 17 01:36:42.693159 kernel: SCSI subsystem initialized Apr 17 01:36:42.693171 kernel: libata version 3.00 loaded. 
Apr 17 01:36:42.693416 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 01:36:42.693433 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 01:36:42.693576 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 17 01:36:42.698793 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 17 01:36:42.705155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 01:36:42.705750 kernel: scsi host0: ahci Apr 17 01:36:42.706062 kernel: scsi host1: ahci Apr 17 01:36:42.706231 kernel: scsi host2: ahci Apr 17 01:36:42.706390 kernel: scsi host3: ahci Apr 17 01:36:42.706548 kernel: scsi host4: ahci Apr 17 01:36:42.708001 kernel: scsi host5: ahci Apr 17 01:36:42.708024 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 17 01:36:42.708036 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 17 01:36:42.708049 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 17 01:36:42.708061 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 17 01:36:42.708073 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 17 01:36:42.708159 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 17 01:36:42.708171 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 01:36:42.708183 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 01:36:42.708195 kernel: ata3.00: LPM support broken, forcing max_power Apr 17 01:36:42.708207 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 01:36:42.708219 kernel: ata3.00: applying bridge limits Apr 17 01:36:42.708230 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 01:36:42.708304 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 01:36:42.708315 kernel: ata3.00: LPM support broken, forcing max_power Apr 17 01:36:42.708325 kernel: 
ata3.00: configured for UDMA/100 Apr 17 01:36:42.708335 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 01:36:42.708408 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 01:36:42.715768 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 01:36:42.717059 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 01:36:42.717319 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 17 01:36:42.717511 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 01:36:42.717527 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 01:36:42.717539 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 01:36:42.717551 kernel: GPT:16515071 != 27000831 Apr 17 01:36:42.717562 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 01:36:42.723203 kernel: GPT:16515071 != 27000831 Apr 17 01:36:42.723220 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 01:36:42.723231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 01:36:42.736974 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 01:36:42.737012 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 17 01:36:42.737025 kernel: device-mapper: uevent: version 1.0.3 Apr 17 01:36:42.737040 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 17 01:36:42.737134 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 17 01:36:42.737146 kernel: raid6: avx512x4 gen() 10423 MB/s Apr 17 01:36:42.737157 kernel: raid6: avx512x2 gen() 20958 MB/s Apr 17 01:36:42.737169 kernel: raid6: avx512x1 gen() 15870 MB/s Apr 17 01:36:42.737182 kernel: raid6: avx2x4 gen() 13423 MB/s Apr 17 01:36:42.737194 kernel: raid6: avx2x2 gen() 9443 MB/s Apr 17 01:36:42.737207 kernel: raid6: avx2x1 gen() 13426 MB/s Apr 17 01:36:42.737285 kernel: raid6: using algorithm avx512x2 gen() 20958 MB/s Apr 17 01:36:42.737299 kernel: raid6: .... xor() 1713 MB/s, rmw enabled Apr 17 01:36:42.737311 kernel: raid6: using avx512x2 recovery algorithm Apr 17 01:36:42.737322 kernel: xor: automatically using best checksumming function avx Apr 17 01:36:42.737333 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 01:36:42.737345 kernel: BTRFS: device fsid 58910d97-8794-41b7-abad-32c24d641674 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (181) Apr 17 01:36:42.737357 kernel: BTRFS info (device dm-0): first mount of filesystem 58910d97-8794-41b7-abad-32c24d641674 Apr 17 01:36:42.739175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 01:36:42.739193 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 17 01:36:42.739205 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 17 01:36:42.739217 kernel: loop: module loaded Apr 17 01:36:42.739228 kernel: loop0: detected capacity change from 0 to 106856 Apr 17 01:36:42.739240 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 01:36:42.739255 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Apr 17 01:36:42.739347 systemd[1]: 
/etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 17 01:36:42.739359 systemd[1]: Successfully made /usr/ read-only. Apr 17 01:36:42.739371 systemd[1]: systemd 258.2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 17 01:36:42.739383 systemd[1]: Detected virtualization kvm. Apr 17 01:36:42.739395 systemd[1]: Detected architecture x86-64. Apr 17 01:36:42.741764 systemd[1]: Running in initrd. Apr 17 01:36:42.741785 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 17 01:36:42.741796 systemd[1]: No hostname configured, using default hostname. Apr 17 01:36:42.741807 systemd[1]: Hostname set to . Apr 17 01:36:42.741821 systemd[1]: Queued start job for default target initrd.target. Apr 17 01:36:42.745370 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 17 01:36:42.745465 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 01:36:42.750465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 01:36:42.753436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 01:36:42.753496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 01:36:42.753511 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 01:36:42.753524 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Apr 17 01:36:42.753537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 01:36:42.754310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 01:36:42.754322 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 17 01:36:42.754334 systemd[1]: Reached target paths.target - Path Units. Apr 17 01:36:42.754345 systemd[1]: Reached target slices.target - Slice Units. Apr 17 01:36:42.757267 systemd[1]: Reached target swap.target - Swaps. Apr 17 01:36:42.757283 systemd[1]: Reached target timers.target - Timer Units. Apr 17 01:36:42.757293 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 01:36:42.758323 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 01:36:42.758333 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 17 01:36:42.758343 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 01:36:42.758352 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 17 01:36:42.758361 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 01:36:42.758370 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 01:36:42.760422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 01:36:42.760451 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 01:36:42.760461 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 01:36:42.760471 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 01:36:42.760480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 01:36:42.760489 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 17 01:36:42.760498 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 17 01:36:42.760581 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 01:36:42.760591 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 01:36:42.760600 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 01:36:42.760609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 01:36:42.761389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 01:36:42.761399 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 01:36:42.761408 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1479548633 wd_nsec: 1479548240 Apr 17 01:36:42.761418 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 01:36:42.761427 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 01:36:42.761437 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 01:36:42.762355 systemd-journald[318]: Collecting audit messages is enabled. Apr 17 01:36:42.762386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 01:36:42.762396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 01:36:42.762476 kernel: Bridge firewalling registered Apr 17 01:36:42.762486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 01:36:42.762496 kernel: audit: type=1130 audit(1776389802.493:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 17 01:36:42.762505 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 01:36:42.762515 kernel: audit: type=1130 audit(1776389802.571:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.762524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 01:36:42.762532 kernel: audit: type=1130 audit(1776389802.646:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.762589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 01:36:42.762596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 01:36:42.762606 systemd-journald[318]: Journal started Apr 17 01:36:42.762622 systemd-journald[318]: Runtime Journal (/run/log/journal/30e64c11057a45f59b7ccdd1d817a8f9) is 6M, max 48M, 42M free. Apr 17 01:36:42.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.443571 systemd-modules-load[320]: Inserted module 'br_netfilter' Apr 17 01:36:42.807361 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 17 01:36:42.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.839171 kernel: audit: type=1130 audit(1776389802.819:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:42.920183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 01:36:42.987247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 01:36:42.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:43.050476 kernel: audit: type=1130 audit(1776389802.991:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:43.057115 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 01:36:43.112305 systemd-tmpfiles[349]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 17 01:36:43.208557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 01:36:43.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:43.263472 kernel: audit: type=1130 audit(1776389803.219:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 17 01:36:43.263601 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 01:36:43.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:43.328411 kernel: audit: type=1130 audit(1776389803.290:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:43.397000 audit: BPF prog-id=5 op=LOAD Apr 17 01:36:43.406968 kernel: audit: type=1334 audit(1776389803.397:9): prog-id=5 op=LOAD Apr 17 01:36:43.422554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 01:36:43.523778 dracut-cmdline[354]: dracut-109 Apr 17 01:36:43.594353 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51a9d6679b489a6054ad9328c7381b5a0b29a8dc6ebbd9b773cac2ef7c32e2e2 Apr 17 01:36:44.401339 systemd-resolved[364]: Positive Trust Anchors: Apr 17 01:36:44.401483 systemd-resolved[364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 01:36:44.401487 systemd-resolved[364]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 17 01:36:44.401517 systemd-resolved[364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 01:36:44.627724 systemd-resolved[364]: Defaulting to hostname 'linux'. Apr 17 01:36:44.661449 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 01:36:44.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:44.715782 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 01:36:44.738016 kernel: audit: type=1130 audit(1776389804.711:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:46.098420 kernel: Loading iSCSI transport class v2.0-870. Apr 17 01:36:46.337965 kernel: iscsi: registered transport (tcp) Apr 17 01:36:46.725802 kernel: iscsi: registered transport (qla4xxx) Apr 17 01:36:46.731194 kernel: QLogic iSCSI HBA Driver Apr 17 01:36:48.951078 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 17 01:36:49.751106 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. 
Apr 17 01:36:49.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:49.847345 kernel: audit: type=1130 audit(1776389809.791:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:49.926315 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 01:36:57.761526 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 01:36:57.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:57.961589 kernel: audit: type=1130 audit(1776389817.839:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:36:58.233008 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 01:36:58.348324 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 01:36:59.983373 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 01:37:00.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:00.055595 kernel: audit: type=1130 audit(1776389820.004:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 17 01:37:00.108000 audit: BPF prog-id=6 op=LOAD Apr 17 01:37:00.115000 audit: BPF prog-id=7 op=LOAD Apr 17 01:37:00.126805 kernel: audit: type=1334 audit(1776389820.108:14): prog-id=6 op=LOAD Apr 17 01:37:00.128675 kernel: audit: type=1334 audit(1776389820.115:15): prog-id=7 op=LOAD Apr 17 01:37:00.135405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 01:37:01.109266 systemd-udevd[587]: Using default interface naming scheme 'v258'. Apr 17 01:37:02.646422 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 01:37:02.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:02.756615 kernel: audit: type=1130 audit(1776389822.717:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:02.782213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 01:37:03.862095 dracut-pre-trigger[651]: rd.md=0: removing MD RAID activation Apr 17 01:37:04.489730 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 01:37:04.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:04.563541 kernel: audit: type=1130 audit(1776389824.529:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 17 01:37:04.588000 audit: BPF prog-id=8 op=LOAD Apr 17 01:37:04.603457 kernel: audit: type=1334 audit(1776389824.588:18): prog-id=8 op=LOAD Apr 17 01:37:04.639323 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 01:37:05.350534 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 01:37:05.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:05.454745 kernel: audit: type=1130 audit(1776389825.351:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:05.482344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 01:37:06.054047 systemd-networkd[723]: lo: Link UP Apr 17 01:37:06.055592 systemd-networkd[723]: lo: Gained carrier Apr 17 01:37:06.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:06.063417 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 01:37:06.105235 systemd[1]: Reached target network.target - Network. Apr 17 01:37:06.207473 kernel: audit: type=1130 audit(1776389826.094:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:23.290489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 01:37:23.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 17 01:37:23.384688 kernel: audit: type=1130 audit(1776389843.318:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:23.386694 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 01:37:27.274747 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 01:37:27.538088 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 17 01:37:27.784215 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 01:37:27.830712 kernel: AES CTR mode by8 optimization enabled Apr 17 01:37:27.882110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 01:37:28.049261 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 01:37:28.113585 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 01:37:28.160020 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 17 01:37:28.284714 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 17 01:37:28.284733 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 01:37:28.298482 systemd-networkd[723]: eth0: Link UP Apr 17 01:37:28.298831 systemd-networkd[723]: eth0: Gained carrier Apr 17 01:37:28.301351 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 17 01:37:28.356130 disk-uuid[865]: Primary Header is updated. Apr 17 01:37:28.356130 disk-uuid[865]: Secondary Entries is updated. Apr 17 01:37:28.356130 disk-uuid[865]: Secondary Header is updated. 
Apr 17 01:37:28.428716 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 01:37:28.475397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 01:37:28.492543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 01:37:28.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:28.553569 kernel: audit: type=1131 audit(1776389848.505:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:28.505712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 01:37:28.572570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 01:37:28.914834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 01:37:28.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:28.947805 kernel: audit: type=1130 audit(1776389848.925:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 17 01:37:29.429364 systemd-networkd[723]: eth0: Gained IPv6LL Apr 17 01:37:29.827392 disk-uuid[866]: Warning: The kernel is still using the old partition table. Apr 17 01:37:29.827392 disk-uuid[866]: The new table will be used at the next reboot or after you Apr 17 01:37:29.827392 disk-uuid[866]: run partprobe(8) or kpartx(8) Apr 17 01:37:29.827392 disk-uuid[866]: The operation has completed successfully. 
Apr 17 01:37:30.498047 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 01:37:30.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:30.552824 kernel: audit: type=1130 audit(1776389850.511:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:30.601679 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 01:37:30.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:30.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:30.603606 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 01:37:30.702670 kernel: audit: type=1130 audit(1776389850.637:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:30.702832 kernel: audit: type=1131 audit(1776389850.644:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:31.147488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 01:37:31.233665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 01:37:31.319740 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 01:37:32.010393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 01:37:32.062914 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 01:37:32.541805 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 01:37:32.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:32.594608 kernel: audit: type=1130 audit(1776389852.575:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:32.819630 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894)
Apr 17 01:37:32.859047 kernel: BTRFS info (device vda6): first mount of filesystem b823dcb4-84f1-433a-a45f-d2e8271b88a6
Apr 17 01:37:32.859149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 01:37:32.946092 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 01:37:32.946484 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 01:37:33.512767 kernel: BTRFS info (device vda6): last unmount of filesystem b823dcb4-84f1-433a-a45f-d2e8271b88a6
Apr 17 01:37:33.625543 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 01:37:33.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:33.740619 kernel: audit: type=1130 audit(1776389853.703:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:33.888794 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 01:37:42.447132 ignition[916]: Ignition 2.24.0
Apr 17 01:37:42.457792 ignition[916]: Stage: fetch-offline
Apr 17 01:37:42.506823 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:37:42.523451 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 01:37:42.538437 ignition[916]: parsed url from cmdline: ""
Apr 17 01:37:42.542315 ignition[916]: no config URL provided
Apr 17 01:37:42.562346 ignition[916]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 01:37:42.580560 ignition[916]: no config at "/usr/lib/ignition/user.ign"
Apr 17 01:37:42.597003 ignition[916]: op(1): [started] loading QEMU firmware config module
Apr 17 01:37:42.601646 ignition[916]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 01:37:43.502709 ignition[916]: op(1): [finished] loading QEMU firmware config module
Apr 17 01:37:44.256198 ignition[916]: parsing config with SHA512: 3590a1c1f5380cb957286a7814793065054b6e712c81e9f9b3916912855a0474eaf89f977f5e377e1fd1e250ded1ec36e98cf088df36107596897b6d614fb038
Apr 17 01:37:44.996302 unknown[916]: fetched base config from "system"
Apr 17 01:37:44.996339 unknown[916]: fetched user config from "qemu"
Apr 17 01:37:44.998043 ignition[916]: fetch-offline: fetch-offline passed
Apr 17 01:37:45.039367 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 01:37:45.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:45.004210 ignition[916]: Ignition finished successfully
Apr 17 01:37:45.153729 kernel: audit: type=1130 audit(1776389865.114:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:45.144292 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 01:37:45.163813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 01:37:47.711273 ignition[927]: Ignition 2.24.0
Apr 17 01:37:47.711529 ignition[927]: Stage: kargs
Apr 17 01:37:47.727765 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:37:47.749811 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 01:37:48.104339 ignition[927]: kargs: kargs passed
Apr 17 01:37:48.108565 ignition[927]: Ignition finished successfully
Apr 17 01:37:48.244723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 01:37:48.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:48.338571 kernel: audit: type=1130 audit(1776389868.307:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:48.479786 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 01:37:50.118492 ignition[935]: Ignition 2.24.0
Apr 17 01:37:50.118550 ignition[935]: Stage: disks
Apr 17 01:37:50.118710 ignition[935]: no configs at "/usr/lib/ignition/base.d"
Apr 17 01:37:50.118716 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 01:37:50.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:50.140650 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 01:37:50.129236 ignition[935]: disks: disks passed
Apr 17 01:37:50.175950 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 01:37:50.260979 kernel: audit: type=1130 audit(1776389870.174:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:50.129424 ignition[935]: Ignition finished successfully
Apr 17 01:37:50.223804 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 01:37:50.261297 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 01:37:50.302814 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 01:37:50.340666 systemd[1]: Reached target basic.target - Basic System.
Apr 17 01:37:50.477310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 01:37:52.593774 systemd-fsck[946]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Apr 17 01:37:52.728367 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 01:37:52.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:52.788973 kernel: audit: type=1130 audit(1776389872.756:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:37:52.831159 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 01:37:58.363650 kernel: EXT4-fs (vda9): mounted filesystem 69f74822-0811-451e-b15f-79e46fa71c56 r/w with ordered data mode. Quota mode: none.
Apr 17 01:37:58.500053 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 01:37:58.532544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 01:37:58.783812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 01:37:58.932220 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 01:37:58.988042 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 01:37:59.001392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 01:37:59.001473 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 01:37:59.134107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (957)
Apr 17 01:37:59.162175 kernel: BTRFS info (device vda6): first mount of filesystem b823dcb4-84f1-433a-a45f-d2e8271b88a6
Apr 17 01:37:59.184737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 01:37:59.321035 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 01:37:59.321528 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 01:37:59.483764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 01:37:59.522789 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 01:37:59.720820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 01:38:03.030026 kernel: loop1: detected capacity change from 0 to 43200
Apr 17 01:38:03.037496 kernel: loop1: p1 p2 p3
Apr 17 01:38:03.151928 kernel: erofs: (device loop1p1): mounted with root inode @ nid 40.
Apr 17 01:38:03.182498 kernel: loop2: detected capacity change from 0 to 43200
Apr 17 01:38:03.186005 kernel: loop2: p1 p2 p3
Apr 17 01:38:03.357596 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:03.357741 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:38:03.371827 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:38:03.372528 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:38:03.376991 (sd-merge)[1056]: device-mapper: reload ioctl on 27b38cf3183f09e87d11124eb4c7969c9e70b2207995033f5d6ac2ea5553ea9b-verity (253:1) failed: Invalid argument
Apr 17 01:38:03.412382 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:03.985478 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 17 01:38:03.989762 (sd-merge)[1056]: Using extensions '00-flatcar-default.raw'.
Apr 17 01:38:03.997235 (sd-merge)[1056]: Merged extensions into '/sysroot/etc'.
Apr 17 01:38:04.107043 initrd-setup-root[1063]: /etc 00-flatcar-default Fri 2026-04-17 01:36:43 UTC
Apr 17 01:38:04.157318 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 01:38:04.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:04.240741 kernel: audit: type=1130 audit(1776389884.195:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:04.306350 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 01:38:04.369809 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 01:38:04.638799 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 01:38:04.648765 kernel: BTRFS info (device vda6): last unmount of filesystem b823dcb4-84f1-433a-a45f-d2e8271b88a6
Apr 17 01:38:04.690739 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 01:38:04.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:04.712346 kernel: audit: type=1130 audit(1776389884.700:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:05.400792 ignition[1074]: INFO : Ignition 2.24.0
Apr 17 01:38:05.400792 ignition[1074]: INFO : Stage: mount
Apr 17 01:38:05.400792 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 01:38:05.400792 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 01:38:05.517967 ignition[1074]: INFO : mount: mount passed
Apr 17 01:38:05.517967 ignition[1074]: INFO : Ignition finished successfully
Apr 17 01:38:05.531748 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 01:38:05.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:05.577693 kernel: audit: type=1130 audit(1776389885.540:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:05.588492 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 01:38:05.905537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 01:38:06.106192 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1085)
Apr 17 01:38:06.129632 kernel: BTRFS info (device vda6): first mount of filesystem b823dcb4-84f1-433a-a45f-d2e8271b88a6
Apr 17 01:38:06.131279 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 01:38:06.152815 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 01:38:06.156824 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 01:38:06.309043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 01:38:06.741597 ignition[1102]: INFO : Ignition 2.24.0
Apr 17 01:38:06.741597 ignition[1102]: INFO : Stage: files
Apr 17 01:38:06.796625 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 01:38:06.796625 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 01:38:06.837681 ignition[1102]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 01:38:06.892751 ignition[1102]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 01:38:06.892751 ignition[1102]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 01:38:06.982666 ignition[1102]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 01:38:07.010402 ignition[1102]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 01:38:07.052286 unknown[1102]: wrote ssh authorized keys file for user: core
Apr 17 01:38:07.111192 ignition[1102]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 01:38:07.121747 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 01:38:07.156434 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 01:38:07.655979 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 01:38:08.332128 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 01:38:08.350023 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 01:38:08.359955 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 01:38:09.299269 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 01:38:13.774956 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 01:38:13.774956 ignition[1102]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 01:38:13.805636 ignition[1102]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 01:38:14.211663 ignition[1102]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 01:38:14.352775 ignition[1102]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 01:38:14.368815 ignition[1102]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 01:38:14.368815 ignition[1102]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 01:38:14.368815 ignition[1102]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 01:38:14.368815 ignition[1102]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 01:38:14.368815 ignition[1102]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 01:38:14.368815 ignition[1102]: INFO : files: files passed
Apr 17 01:38:14.368815 ignition[1102]: INFO : Ignition finished successfully
Apr 17 01:38:14.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:14.575669 kernel: audit: type=1130 audit(1776389894.395:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:14.370447 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 01:38:14.517748 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 01:38:14.557359 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 01:38:14.684518 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 01:38:14.684718 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 01:38:14.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:14.721179 kernel: audit: type=1130 audit(1776389894.696:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:14.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:14.730373 kernel: audit: type=1131 audit(1776389894.696:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:14.730386 initrd-setup-root-after-ignition[1133]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 01:38:14.795067 initrd-setup-root-after-ignition[1138]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 01:38:14.807496 initrd-setup-root-after-ignition[1135]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 01:38:14.807496 initrd-setup-root-after-ignition[1135]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 01:38:15.080142 kernel: loop3: detected capacity change from 0 to 43200
Apr 17 01:38:15.092476 kernel: loop3: p1 p2 p3
Apr 17 01:38:15.552804 kernel: erofs: (device loop3p1): mounted with root inode @ nid 40.
Apr 17 01:38:15.752040 kernel: loop4: detected capacity change from 0 to 43200
Apr 17 01:38:15.765911 kernel: loop4: p1 p2 p3
Apr 17 01:38:16.024268 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:16.026056 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:38:16.026069 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:38:16.038630 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:38:16.039569 (sd-merge)[1144]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument
Apr 17 01:38:16.107734 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:18.312264 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 17 01:38:18.313334 (sd-merge)[1144]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 17 01:38:18.395941 kernel: device-mapper: ioctl: remove_all left 2 open device(s)
Apr 17 01:38:18.490594 kernel: loop4: detected capacity change from 0 to 177280
Apr 17 01:38:18.495947 kernel: loop4: p1 p2 p3
Apr 17 01:38:18.690917 kernel: erofs: (device loop4p1): mounted with root inode @ nid 39.
Apr 17 01:38:18.760071 kernel: loop5: detected capacity change from 0 to 378016
Apr 17 01:38:18.785183 kernel: loop5: p1 p2 p3
Apr 17 01:38:18.953749 kernel: erofs: (device loop5p1): mounted with root inode @ nid 39.
Apr 17 01:38:19.034611 kernel: loop6: detected capacity change from 0 to 219192
Apr 17 01:38:19.650415 kernel: loop7: detected capacity change from 0 to 177280
Apr 17 01:38:19.657542 kernel: loop7: p1 p2 p3
Apr 17 01:38:20.260748 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:20.297447 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:38:20.297309 (sd-merge)[1160]: device-mapper: reload ioctl on 429684c7da5cd0bf44b4ea28a5b92c998f2a539b19aaace83571e8a6096438eb-verity (253:2) failed: Invalid argument
Apr 17 01:38:20.421794 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:38:20.428474 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:38:20.563794 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:21.263775 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 17 01:38:21.491757 kernel: loop1: detected capacity change from 0 to 378016
Apr 17 01:38:21.510157 kernel: loop1: p1 p2 p3
Apr 17 01:38:22.010049 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:22.010327 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:38:22.010346 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:38:22.030703 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:38:22.034764 (sd-merge)[1160]: device-mapper: reload ioctl on 71ff93066d900f03526b3b144b0ab25753655c6e9e6b92dee0da1c291babc60b-verity (253:3) failed: Invalid argument
Apr 17 01:38:22.146360 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:24.394804 kernel: erofs: (device dm-3): mounted with root inode @ nid 39.
Apr 17 01:38:24.409023 kernel: loop3: detected capacity change from 0 to 219192
Apr 17 01:38:24.591788 (sd-merge)[1160]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.34.4-x86-64.raw'.
Apr 17 01:38:24.612060 (sd-merge)[1160]: Merged extensions into '/sysroot/usr'.
Apr 17 01:38:24.639409 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 01:38:24.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:24.773087 kernel: audit: type=1130 audit(1776389904.652:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:24.745447 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 01:38:24.819006 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 01:38:25.996969 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 01:38:26.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:26.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:26.063044 kernel: audit: type=1130 audit(1776389906.017:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:25.998260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 01:38:26.131187 kernel: audit: type=1131 audit(1776389906.017:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:26.018537 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies.
Apr 17 01:38:26.026028 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 01:38:26.063274 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 01:38:26.132010 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 01:38:26.141173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 01:38:27.278823 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 01:38:27.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:27.364210 kernel: audit: type=1130 audit(1776389907.309:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:27.588468 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 01:38:28.365035 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 01:38:28.520805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 01:38:28.545048 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 01:38:28.593573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 01:38:28.596254 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 01:38:28.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:28.688174 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 01:38:28.700644 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 01:38:28.730701 kernel: audit: type=1131 audit(1776389908.684:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:28.730569 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 01:38:28.755244 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 01:38:28.839660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 01:38:28.881700 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 01:38:28.908568 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 01:38:28.927580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 01:38:28.954485 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 01:38:29.038740 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 01:38:29.077975 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 01:38:29.095087 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 01:38:29.095416 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 01:38:29.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:29.151760 kernel: audit: type=1131 audit(1776389909.133:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:29.153482 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 01:38:29.233072 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 01:38:29.285796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 01:38:29.307064 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 01:38:29.378809 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 01:38:29.390148 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 01:38:29.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:29.409464 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 01:38:29.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:29.409767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 01:38:29.523956 kernel: audit: type=1131 audit(1776389909.407:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:29.508443 systemd[1]: ignition-fetch-offline.service: Consumed 5.823s CPU time.
Apr 17 01:38:29.561157 kernel: audit: type=1131 audit(1776389909.507:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:29.508796 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 01:38:29.555213 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 01:38:29.560950 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 01:38:29.589796 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 01:38:29.654729 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 01:38:29.703000 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 01:38:29.707084 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 01:38:29.805733 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 01:38:29.848789 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 01:38:29.957267 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Apr 17 01:38:29.987801 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Apr 17 01:38:30.026955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 01:38:30.032123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 01:38:30.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.127599 systemd[1]: initrd-setup-root-after-ignition.service: Consumed 2.156s CPU time.
Apr 17 01:38:30.129447 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 01:38:30.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.187821 kernel: audit: type=1131 audit(1776389910.120:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.134808 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 01:38:30.210705 kernel: audit: type=1131 audit(1776389910.150:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.151552 systemd[1]: ignition-files.service: Consumed 6.376s CPU time.
Apr 17 01:38:30.160632 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 01:38:30.237397 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 01:38:30.260620 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 01:38:30.295565 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 01:38:30.323597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 01:38:30.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.416230 kernel: audit: type=1131 audit(1776389910.322:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.352687 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 01:38:30.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.456255 kernel: audit: type=1131 audit(1776389910.434:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.436164 systemd[1]: systemd-udev-trigger.service: Consumed 7.211s CPU time.
Apr 17 01:38:30.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.501564 kernel: audit: type=1131 audit(1776389910.460:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.452645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 01:38:30.454064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 01:38:30.461165 systemd[1]: dracut-pre-trigger.service: Consumed 1.248s CPU time.
Apr 17 01:38:30.627456 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 01:38:30.627743 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 01:38:30.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.683379 kernel: audit: type=1130 audit(1776389910.646:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.683415 kernel: audit: type=1131 audit(1776389910.646:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.757081 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 01:38:30.834653 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 01:38:30.849900 ignition[1189]: INFO : Ignition 2.24.0
Apr 17 01:38:30.849900 ignition[1189]: INFO : Stage: umount
Apr 17 01:38:30.849900 ignition[1189]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 01:38:30.849900 ignition[1189]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 01:38:30.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.835795 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 01:38:30.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.975489 ignition[1189]: INFO : umount: umount passed
Apr 17 01:38:30.975489 ignition[1189]: INFO : Ignition finished successfully
Apr 17 01:38:30.997540 kernel: audit: type=1131 audit(1776389910.849:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.896139 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 01:38:31.030738 kernel: audit: type=1131 audit(1776389910.921:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.896617 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 01:38:31.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:31.120081 kernel: audit: type=1131 audit(1776389911.045:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:31.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.975552 systemd[1]: Stopped target network.target - Network.
Apr 17 01:38:30.996545 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 01:38:31.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:30.998054 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 01:38:31.109207 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 01:38:31.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:31.109600 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 01:38:31.120434 systemd[1]: ignition-kargs.service: Consumed 1.920s CPU time.
Apr 17 01:38:31.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:31.120682 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 01:38:31.120756 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 01:38:31.160058 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 01:38:31.163160 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 01:38:31.177996 systemd[1]: ignition-setup-pre.service: Consumed 1.918s CPU time.
Apr 17 01:38:31.182141 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 01:38:31.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:31.183644 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 01:38:31.196412 systemd[1]: initrd-setup-root.service: Consumed 2.009s CPU time.
Apr 17 01:38:31.200315 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 01:38:31.257783 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 01:38:31.303710 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 01:38:31.313150 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 01:38:31.485057 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 01:38:31.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:31.487480 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 01:38:31.805000 audit: BPF prog-id=8 op=UNLOAD
Apr 17 01:38:31.891000 audit: BPF prog-id=5 op=UNLOAD
Apr 17 01:38:31.892272 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 01:38:31.914722 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 01:38:31.916404 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 01:38:32.130281 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 01:38:32.157110 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 01:38:32.157494 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 01:38:32.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.215576 systemd[1]: parse-ip-for-networkd.service: Consumed 3.222s CPU time.
Apr 17 01:38:32.227816 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 01:38:32.247068 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 01:38:32.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.328144 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 01:38:32.328296 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 01:38:32.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.364784 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 01:38:32.560663 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 01:38:32.560976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 01:38:32.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.606014 systemd[1]: systemd-udevd.service: Consumed 14.686s CPU time.
Apr 17 01:38:32.654622 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 01:38:32.655012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 01:38:32.705825 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 01:38:32.708788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 01:38:32.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.736213 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 01:38:32.736302 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 01:38:32.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.765671 systemd[1]: dracut-cmdline.service: Consumed 6.360s CPU time.
Apr 17 01:38:32.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.767171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 01:38:32.767370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 01:38:32.793606 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 01:38:32.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.796683 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 01:38:32.796917 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 17 01:38:32.797439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 01:38:32.797478 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 01:38:32.820310 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 01:38:32.823460 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 01:38:32.851735 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Consumed 1.515s CPU time.
Apr 17 01:38:32.854827 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 01:38:33.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:33.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:32.858002 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 01:38:32.862759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 01:38:32.862923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:38:32.938552 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 01:38:32.954123 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 01:38:33.157456 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 01:38:33.157775 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 01:38:33.193432 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 01:38:33.240443 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 01:38:33.506283 systemd[1]: Switching root.
Apr 17 01:38:34.008182 systemd-journald[318]: Received SIGTERM from PID 1 (systemd).
Apr 17 01:38:34.011199 systemd-journald[318]: Journal stopped
Apr 17 01:38:50.239131 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 01:38:50.239269 kernel: SELinux: policy capability open_perms=1
Apr 17 01:38:50.239286 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 01:38:50.239296 kernel: SELinux: policy capability always_check_network=0
Apr 17 01:38:50.239311 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 01:38:50.239320 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 01:38:50.239331 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 01:38:50.239340 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 01:38:50.240091 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 01:38:50.240171 systemd[1]: Successfully loaded SELinux policy in 159.500ms.
Apr 17 01:38:50.240191 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.668ms.
Apr 17 01:38:50.240205 systemd[1]: systemd 258.2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 01:38:50.240216 systemd[1]: Detected virtualization kvm.
Apr 17 01:38:50.240225 systemd[1]: Detected architecture x86-64.
Apr 17 01:38:50.240234 systemd[1]: Detected first boot.
Apr 17 01:38:50.240242 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 17 01:38:50.240251 kernel: kauditd_printk_skb: 25 callbacks suppressed
Apr 17 01:38:50.240261 kernel: audit: type=1334 audit(1776389915.853:82): prog-id=9 op=LOAD
Apr 17 01:38:50.240274 kernel: audit: type=1334 audit(1776389915.853:83): prog-id=9 op=UNLOAD
Apr 17 01:38:50.240283 zram_generator::config[1237]: No configuration found.
Apr 17 01:38:50.240294 kernel: Guest personality initialized and is inactive
Apr 17 01:38:50.240306 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 01:38:50.240315 kernel: Initialized host personality
Apr 17 01:38:50.240325 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 01:38:50.240336 systemd[1]: Applying preset policy.
Apr 17 01:38:50.241262 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'.
Apr 17 01:38:50.242337 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'.
Apr 17 01:38:50.242401 systemd[1]: Populated /etc with preset unit settings.
Apr 17 01:38:50.242413 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 17 01:38:50.242424 kernel: audit: type=1334 audit(1776389927.394:84): prog-id=10 op=LOAD
Apr 17 01:38:50.242435 kernel: audit: type=1334 audit(1776389927.408:85): prog-id=2 op=UNLOAD
Apr 17 01:38:50.242448 kernel: audit: type=1334 audit(1776389927.409:86): prog-id=11 op=LOAD
Apr 17 01:38:50.242458 kernel: audit: type=1334 audit(1776389927.409:87): prog-id=12 op=LOAD
Apr 17 01:38:50.242468 kernel: audit: type=1334 audit(1776389927.409:88): prog-id=3 op=UNLOAD
Apr 17 01:38:50.242479 kernel: audit: type=1334 audit(1776389927.409:89): prog-id=4 op=UNLOAD
Apr 17 01:38:50.242489 kernel: audit: type=1131 audit(1776389927.550:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.242499 kernel: audit: type=1334 audit(1776389927.586:91): prog-id=10 op=UNLOAD
Apr 17 01:38:50.242509 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 01:38:50.242521 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 01:38:50.242531 kernel: audit: type=1130 audit(1776389927.625:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.242541 kernel: audit: type=1131 audit(1776389927.626:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.242552 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 01:38:50.242565 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 01:38:50.242578 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 01:38:50.242588 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 01:38:50.242596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 01:38:50.242605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 01:38:50.242614 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 01:38:50.242624 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 01:38:50.242632 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 01:38:50.242644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 01:38:50.242653 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 01:38:50.242662 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 01:38:50.242670 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 01:38:50.242679 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 01:38:50.242688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 01:38:50.242697 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 01:38:50.242707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 01:38:50.242716 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 01:38:50.242724 systemd[1]: Reached target imports.target - Image Downloads.
Apr 17 01:38:50.242737 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 01:38:50.242752 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 01:38:50.242764 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 01:38:50.242780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 01:38:50.242798 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 01:38:50.242813 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 01:38:50.242824 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes.
Apr 17 01:38:50.242833 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Apr 17 01:38:50.242932 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 01:38:50.242941 systemd[1]: Reached target swap.target - Swaps.
Apr 17 01:38:50.242950 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 01:38:50.242962 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 17 01:38:50.242971 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 01:38:50.242979 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 01:38:50.242988 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management.
Apr 17 01:38:50.242997 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 17 01:38:50.243006 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Apr 17 01:38:50.243014 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket.
Apr 17 01:38:50.243025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 01:38:50.243034 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Apr 17 01:38:50.243042 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Apr 17 01:38:50.243050 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket.
Apr 17 01:38:50.243059 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket.
Apr 17 01:38:50.243068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 01:38:50.243077 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket.
Apr 17 01:38:50.243135 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 01:38:50.243145 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 01:38:50.243153 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 01:38:50.243162 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 01:38:50.243171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 01:38:50.243180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 01:38:50.243191 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 01:38:50.243201 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing.
Apr 17 01:38:50.243210 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 01:38:50.243219 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 01:38:50.243229 systemd[1]: Reached target machines.target - Virtual Machines and Containers.
Apr 17 01:38:50.243237 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 01:38:50.243246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 01:38:50.243255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 01:38:50.243263 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 01:38:50.243271 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod).
Apr 17 01:38:50.243280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 01:38:50.243290 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore).
Apr 17 01:38:50.243300 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 01:38:50.243310 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop).
Apr 17 01:38:50.243320 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 01:38:50.243329 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 01:38:50.243338 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 01:38:50.243379 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 01:38:50.243389 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 01:38:50.243399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 01:38:50.243407 kernel: ACPI: bus type drm_connector registered
Apr 17 01:38:50.243416 kernel: fuse: init (API version 7.41)
Apr 17 01:38:50.243428 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 01:38:50.243437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 01:38:50.243445 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 17 01:38:50.243454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 01:38:50.243463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 01:38:50.243556 systemd-journald[1313]: Collecting audit messages is enabled.
Apr 17 01:38:50.243580 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 01:38:50.243590 systemd-journald[1313]: Journal started
Apr 17 01:38:50.243611 systemd-journald[1313]: Runtime Journal (/run/log/journal/30e64c11057a45f59b7ccdd1d817a8f9) is 6M, max 48M, 42M free.
Apr 17 01:38:48.757000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Apr 17 01:38:49.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:49.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.019000 audit: BPF prog-id=12 op=UNLOAD
Apr 17 01:38:50.019000 audit: BPF prog-id=11 op=UNLOAD
Apr 17 01:38:50.027000 audit: BPF prog-id=13 op=LOAD
Apr 17 01:38:50.034000 audit: BPF prog-id=14 op=LOAD
Apr 17 01:38:50.034000 audit: BPF prog-id=15 op=LOAD
Apr 17 01:38:50.207000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 17 01:38:50.207000 audit[1313]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe7c125440 a2=4000 a3=0 items=0 ppid=1 pid=1313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 17 01:38:50.207000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 17 01:38:47.208785 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 01:38:47.525304 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 01:38:47.546954 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 01:38:47.552064 systemd[1]: systemd-journald.service: Consumed 8.242s CPU time.
Apr 17 01:38:50.350392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 01:38:50.350770 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 01:38:50.367793 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 01:38:50.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.404098 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 01:38:50.415783 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 01:38:50.427068 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 01:38:50.432485 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 01:38:50.510787 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 01:38:50.521086 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 01:38:50.531253 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 01:38:50.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.539276 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 01:38:50.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.548442 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 01:38:50.548736 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 01:38:50.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.563732 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 01:38:50.565646 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 01:38:50.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.575079 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 01:38:50.575276 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 01:38:50.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.585784 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 01:38:50.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.600695 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 17 01:38:50.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.642458 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 01:38:50.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.655686 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 01:38:50.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:50.926831 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 01:38:50.948637 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Apr 17 01:38:50.997937 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 01:38:51.017674 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 01:38:51.023184 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 01:38:51.023277 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 01:38:51.033594 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 01:38:51.043197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 01:38:51.100236 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/...
Apr 17 01:38:51.131759 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 01:38:51.201081 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 01:38:51.218113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 01:38:51.258038 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 01:38:51.271322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 01:38:51.300963 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 01:38:51.359021 systemd-journald[1313]: Time spent on flushing to /var/log/journal/30e64c11057a45f59b7ccdd1d817a8f9 is 202.456ms for 1304 entries.
Apr 17 01:38:51.359021 systemd-journald[1313]: System Journal (/var/log/journal/30e64c11057a45f59b7ccdd1d817a8f9) is 8M, max 163.5M, 155.5M free.
Apr 17 01:38:51.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:51.357313 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials...
Apr 17 01:38:51.712600 systemd-journald[1313]: Received client request to flush runtime journal.
Apr 17 01:38:51.432826 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 01:38:51.517470 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 01:38:51.585814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 01:38:51.736520 kernel: loop4: detected capacity change from 0 to 43200
Apr 17 01:38:51.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:51.614278 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 01:38:51.629092 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 01:38:51.716831 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 01:38:51.745606 kernel: loop4: p1 p2 p3
Apr 17 01:38:51.746007 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials.
Apr 17 01:38:51.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:51.796065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 01:38:51.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:51.816060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 01:38:51.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:51.910820 systemd-tmpfiles[1353]: ACLs are not supported, ignoring.
Apr 17 01:38:51.910907 systemd-tmpfiles[1353]: ACLs are not supported, ignoring.
Apr 17 01:38:51.931028 kernel: erofs: (device loop4p1): mounted with root inode @ nid 40.
Apr 17 01:38:52.058195 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 01:38:52.071789 kernel: loop4: detected capacity change from 0 to 43200
Apr 17 01:38:52.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.083236 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 01:38:52.085677 kernel: loop4: p1 p2 p3
Apr 17 01:38:52.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.097125 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 01:38:52.208170 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:52.208266 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:38:52.208292 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:38:52.214982 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:38:52.222173 (sd-merge)[1372]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 17 01:38:52.234024 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:38:52.445179 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 01:38:52.463298 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 01:38:52.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.474106 kernel: kauditd_printk_skb: 31 callbacks suppressed
Apr 17 01:38:52.475230 kernel: audit: type=1130 audit(1776389932.469:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.478000 audit: BPF prog-id=16 op=LOAD
Apr 17 01:38:52.484559 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Apr 17 01:38:52.486683 kernel: audit: type=1334 audit(1776389932.478:124): prog-id=16 op=LOAD
Apr 17 01:38:52.478000 audit: BPF prog-id=17 op=LOAD
Apr 17 01:38:52.478000 audit: BPF prog-id=18 op=LOAD
Apr 17 01:38:52.486806 kernel: audit: type=1334 audit(1776389932.478:125): prog-id=17 op=LOAD
Apr 17 01:38:52.486821 kernel: audit: type=1334 audit(1776389932.478:126): prog-id=18 op=LOAD
Apr 17 01:38:52.498000 audit: BPF prog-id=19 op=LOAD
Apr 17 01:38:52.502660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 01:38:52.504715 kernel: audit: type=1334 audit(1776389932.498:127): prog-id=19 op=LOAD
Apr 17 01:38:52.510000 audit: BPF prog-id=20 op=LOAD
Apr 17 01:38:52.519155 kernel: audit: type=1334 audit(1776389932.510:128): prog-id=20 op=LOAD
Apr 17 01:38:52.526823 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 01:38:52.644221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 01:38:52.668236 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun...
Apr 17 01:38:52.677000 audit: BPF prog-id=21 op=LOAD
Apr 17 01:38:52.677000 audit: BPF prog-id=22 op=LOAD
Apr 17 01:38:52.688610 kernel: audit: type=1334 audit(1776389932.677:129): prog-id=21 op=LOAD
Apr 17 01:38:52.688684 kernel: audit: type=1334 audit(1776389932.677:130): prog-id=22 op=LOAD
Apr 17 01:38:52.689155 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 01:38:52.677000 audit: BPF prog-id=23 op=LOAD
Apr 17 01:38:52.699615 kernel: audit: type=1334 audit(1776389932.677:131): prog-id=23 op=LOAD
Apr 17 01:38:52.725682 kernel: tun: Universal TUN/TAP device driver, 1.6
Apr 17 01:38:52.740229 systemd[1]: modprobe@tun.service: Deactivated successfully.
Apr 17 01:38:52.743220 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun.
Apr 17 01:38:52.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.764563 kernel: audit: type=1130 audit(1776389932.748:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:52.817000 audit: BPF prog-id=24 op=LOAD
Apr 17 01:38:52.819000 audit: BPF prog-id=25 op=LOAD
Apr 17 01:38:52.820000 audit: BPF prog-id=26 op=LOAD
Apr 17 01:38:52.828313 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Apr 17 01:38:52.860235 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Apr 17 01:38:52.860251 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Apr 17 01:38:52.879056 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 01:38:52.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:53.040027 systemd-nsresourced[1388]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Apr 17 01:38:53.044359 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 01:38:53.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:53.066959 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Apr 17 01:38:53.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:53.902724 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 01:38:53.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:53.922454 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 01:38:53.994284 systemd-oomd[1380]: No swap; memory pressure usage will be degraded
Apr 17 01:38:54.003624 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Apr 17 01:38:54.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:54.145778 systemd-resolved[1381]: Positive Trust Anchors:
Apr 17 01:38:54.147425 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 01:38:54.147439 systemd-resolved[1381]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 17 01:38:54.147468 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 01:38:54.235538 systemd-resolved[1381]: Defaulting to hostname 'linux'.
Apr 17 01:38:54.249248 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 01:38:54.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:38:54.265732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 01:39:02.689693 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 01:39:02.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:02.748692 kernel: kauditd_printk_skb: 10 callbacks suppressed
Apr 17 01:39:02.748917 kernel: audit: type=1130 audit(1776389942.715:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:02.758000 audit: BPF prog-id=7 op=UNLOAD
Apr 17 01:39:02.759000 audit: BPF prog-id=6 op=UNLOAD
Apr 17 01:39:02.782775 kernel: audit: type=1334 audit(1776389942.758:144): prog-id=7 op=UNLOAD
Apr 17 01:39:02.782925 kernel: audit: type=1334 audit(1776389942.759:145): prog-id=6 op=UNLOAD
Apr 17 01:39:02.782000 audit: BPF prog-id=27 op=LOAD
Apr 17 01:39:02.783000 audit: BPF prog-id=28 op=LOAD
Apr 17 01:39:02.789122 kernel: audit: type=1334 audit(1776389942.782:146): prog-id=27 op=LOAD
Apr 17 01:39:02.789181 kernel: audit: type=1334 audit(1776389942.783:147): prog-id=28 op=LOAD
Apr 17 01:39:02.855793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 01:39:03.596833 systemd-udevd[1409]: Using default interface naming scheme 'v258'.
Apr 17 01:39:14.657719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 01:39:14.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:14.732146 kernel: audit: type=1130 audit(1776389954.713:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:14.759000 audit: BPF prog-id=29 op=LOAD
Apr 17 01:39:14.776313 kernel: audit: type=1334 audit(1776389954.759:149): prog-id=29 op=LOAD
Apr 17 01:39:14.800089 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 01:39:15.814677 systemd-networkd[1411]: lo: Link UP
Apr 17 01:39:15.814725 systemd-networkd[1411]: lo: Gained carrier
Apr 17 01:39:15.823089 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 01:39:15.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:15.836273 systemd[1]: Reached target network.target - Network.
Apr 17 01:39:15.848101 kernel: audit: type=1130 audit(1776389955.831:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:15.928991 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 17 01:39:16.007632 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 01:39:16.216614 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 17 01:39:16.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:16.246368 kernel: audit: type=1130 audit(1776389956.229:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:16.559449 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 01:39:16.784491 systemd-networkd[1411]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 17 01:39:16.785534 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 01:39:16.798501 systemd-networkd[1411]: eth0: Link UP
Apr 17 01:39:16.798815 systemd-networkd[1411]: eth0: Gained carrier
Apr 17 01:39:16.799181 systemd-networkd[1411]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 17 01:39:16.891515 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 01:39:16.917135 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Apr 17 01:39:17.717740 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 01:39:17.718131 systemd-timesyncd[1382]: Initial clock synchronization to Fri 2026-04-17 01:39:17.715051 UTC.
Apr 17 01:39:17.719036 systemd-resolved[1381]: Clock change detected. Flushing caches.
Apr 17 01:39:18.038235 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 01:39:18.220030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 17 01:39:18.244219 kernel: ACPI: button: Power Button [PWRF]
Apr 17 01:39:18.354153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 01:39:18.374429 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 01:39:18.479126 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 17 01:39:18.484081 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 01:39:18.484947 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 01:39:18.655766 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 01:39:18.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:18.701525 kernel: audit: type=1130 audit(1776389958.669:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.007714 systemd-networkd[1411]: eth0: Gained IPv6LL
Apr 17 01:39:19.023400 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 01:39:19.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.072498 kernel: audit: type=1130 audit(1776389959.032:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.076115 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 01:39:19.421890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:39:19.604918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 01:39:19.609969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:39:19.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.686820 kernel: audit: type=1130 audit(1776389959.661:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.718875 kernel: audit: type=1131 audit(1776389959.661:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:19.723135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 01:39:20.088037 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 17 01:39:20.123516 (sd-merge)[1372]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 17 01:39:20.211174 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/.
Apr 17 01:39:20.243675 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 17 01:39:20.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:20.292188 kernel: audit: type=1130 audit(1776389960.243:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:20.323661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 01:39:20.364141 kernel: loop4: detected capacity change from 0 to 378016
Apr 17 01:39:20.370818 kernel: loop4: p1 p2 p3
Apr 17 01:39:20.376464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 01:39:20.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:20.409303 kernel: audit: type=1130 audit(1776389960.385:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:20.529549 kernel: erofs: (device loop4p1): mounted with root inode @ nid 39.
Apr 17 01:39:20.614820 kernel: loop4: detected capacity change from 0 to 219192
Apr 17 01:39:20.803118 kernel: loop4: detected capacity change from 0 to 177280
Apr 17 01:39:20.805516 kernel: loop4: p1 p2 p3
Apr 17 01:39:20.886724 kernel: erofs: (device loop4p1): mounted with root inode @ nid 39.
Apr 17 01:39:20.987854 kernel: loop4: detected capacity change from 0 to 378016
Apr 17 01:39:20.994666 kernel: loop4: p1 p2 p3
Apr 17 01:39:20.998725 kernel: loop4: p1 p2 p3
Apr 17 01:39:21.205321 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:39:21.223907 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:39:21.223927 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:39:21.223941 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:39:21.223952 (sd-merge)[1483]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 17 01:39:21.314194 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:39:21.680454 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 17 01:39:21.732749 kernel: loop5: detected capacity change from 0 to 219192
Apr 17 01:39:21.926736 kernel: loop6: detected capacity change from 0 to 177280
Apr 17 01:39:21.935991 kernel: loop6: p1 p2 p3
Apr 17 01:39:22.039850 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:39:22.042127 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 17 01:39:22.063052 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL)
Apr 17 01:39:22.063260 kernel: device-mapper: ioctl: error adding target to table
Apr 17 01:39:22.063026 (sd-merge)[1483]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument
Apr 17 01:39:22.073693 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 01:39:22.378046 kernel: erofs: (device dm-5): mounted with root inode @ nid 39.
Apr 17 01:39:22.394439 (sd-merge)[1483]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 17 01:39:22.429040 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 01:39:22.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:22.467863 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 17 01:39:22.468131 kernel: audit: type=1130 audit(1776389962.442:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:22.468177 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 17 01:39:22.479199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 01:39:22.787011 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 17 01:39:22.792212 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 17 01:39:22.814246 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 01:39:22.832884 systemd-tmpfiles[1500]: ACLs are not supported, ignoring.
Apr 17 01:39:22.832962 systemd-tmpfiles[1500]: ACLs are not supported, ignoring.
Apr 17 01:39:22.915026 systemd-tmpfiles[1500]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 01:39:22.915069 systemd-tmpfiles[1500]: Skipping /boot
Apr 17 01:39:22.950456 systemd-tmpfiles[1500]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 01:39:22.950511 systemd-tmpfiles[1500]: Skipping /boot
Apr 17 01:39:22.986510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 01:39:22.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.015991 kernel: audit: type=1130 audit(1776389962.999:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.122984 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 01:39:23.167064 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 01:39:23.171948 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 01:39:23.208767 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 01:39:23.224698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 01:39:23.385000 audit[1511]: AUDIT1127 pid=1511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.398320 kernel: audit: type=1127 audit(1776389963.385:160): pid=1511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.402262 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 01:39:23.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.472807 kernel: audit: type=1130 audit(1776389963.413:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.516234 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 01:39:23.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.582185 kernel: audit: type=1130 audit(1776389963.537:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 17 01:39:23.725000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 17 01:39:23.733535 augenrules[1532]: No rules
Apr 17 01:39:23.733844 kernel: audit: type=1305 audit(1776389963.725:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 17 01:39:23.725000 audit[1532]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe2a786f00 a2=420 a3=0 items=0 ppid=1506 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 17 01:39:23.737772 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 01:39:23.742727 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 01:39:23.725000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 17 01:39:23.768922 kernel: audit: type=1300 audit(1776389963.725:163): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe2a786f00 a2=420 a3=0 items=0 ppid=1506 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 17 01:39:23.768945 kernel: audit: type=1327 audit(1776389963.725:163): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 17 01:39:23.803140 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 01:39:23.812339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 01:39:29.008125 ldconfig[1508]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 01:39:29.033995 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 01:39:29.048415 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 01:39:29.176438 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 01:39:29.184718 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 01:39:29.197133 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 01:39:29.210472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 01:39:29.234252 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 17 01:39:29.241274 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 01:39:29.316290 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 01:39:29.324303 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 17 01:39:29.336466 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 17 01:39:29.338803 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 01:39:29.367857 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 01:39:29.368134 systemd[1]: Reached target paths.target - Path Units.
Apr 17 01:39:29.370265 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 01:39:29.393155 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 01:39:29.530296 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 01:39:29.549710 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 17 01:39:29.560777 systemd[1]: Starting sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK)...
Apr 17 01:39:29.582037 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 01:39:29.602483 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 17 01:39:29.613082 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket.
Apr 17 01:39:29.616038 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket.
Apr 17 01:39:29.707098 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 01:39:29.720039 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 17 01:39:29.782748 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 01:39:29.785164 systemd[1]: Reached target basic.target - Basic System.
Apr 17 01:39:29.798150 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 17 01:39:29.803028 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 01:39:29.803136 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 01:39:29.805334 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 01:39:29.817692 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 17 01:39:29.824789 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 01:39:29.843143 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 01:39:29.889112 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 01:39:29.897057 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 01:39:29.901368 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 01:39:29.919024 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 17 01:39:29.923687 jq[1550]: false
Apr 17 01:39:29.930011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:39:29.935987 extend-filesystems[1551]: Found /dev/vda6
Apr 17 01:39:29.936950 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 01:39:29.942285 extend-filesystems[1551]: Found /dev/vda9
Apr 17 01:39:29.951243 extend-filesystems[1551]: Checking size of /dev/vda9
Apr 17 01:39:29.966355 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 01:39:29.966515 oslogin_cache_refresh[1552]: Refreshing passwd entry cache
Apr 17 01:39:29.982901 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache
Apr 17 01:39:29.991743 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting
Apr 17 01:39:29.991743 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 01:39:29.991698 oslogin_cache_refresh[1552]: Failure getting users, quitting
Apr 17 01:39:29.992138 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache
Apr 17 01:39:29.991745 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 01:39:29.991837 oslogin_cache_refresh[1552]: Refreshing group entry cache
Apr 17 01:39:30.012232 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting
Apr 17 01:39:30.012232 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 01:39:30.008194 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 01:39:30.004288 oslogin_cache_refresh[1552]: Failure getting groups, quitting
Apr 17 01:39:30.004306 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 01:39:30.042325 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 01:39:30.104272 extend-filesystems[1551]: Resized partition /dev/vda9
Apr 17 01:39:30.084208 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 01:39:30.118030 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 17 01:39:30.118068 extend-filesystems[1570]: resize2fs 1.47.3 (8-Jul-2025)
Apr 17 01:39:30.109289 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 01:39:30.122311 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 01:39:30.130720 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 01:39:30.193065 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 01:39:30.310060 jq[1584]: true
Apr 17 01:39:30.310033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 01:39:30.323816 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 01:39:30.331187 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 17 01:39:30.331434 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 01:39:30.335423 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 17 01:39:30.336979 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 17 01:39:30.382923 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 01:39:30.383174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 01:39:30.387190 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 01:39:30.395261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 01:39:30.400926 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 01:39:30.515074 extend-filesystems[1570]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 01:39:30.515074 extend-filesystems[1570]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 01:39:30.515074 extend-filesystems[1570]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 17 01:39:30.574925 update_engine[1582]: I20260417 01:39:30.514089 1582 main.cc:92] Flatcar Update Engine starting
Apr 17 01:39:30.419837 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 01:39:30.575343 extend-filesystems[1551]: Resized filesystem in /dev/vda9
Apr 17 01:39:30.420071 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 01:39:30.522974 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 17 01:39:30.575913 jq[1606]: true
Apr 17 01:39:30.523186 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 17 01:39:30.735792 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 01:39:30.877573 tar[1605]: linux-amd64/LICENSE
Apr 17 01:39:30.877958 tar[1605]: linux-amd64/helm
Apr 17 01:39:30.886031 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 01:39:30.886090 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 01:39:30.887155 systemd-logind[1580]: New seat seat0.
Apr 17 01:39:30.895897 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 01:39:30.949824 dbus-daemon[1548]: [system] SELinux support is enabled
Apr 17 01:39:30.951232 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 01:39:31.017575 update_engine[1582]: I20260417 01:39:31.015726 1582 update_check_scheduler.cc:74] Next update check in 8m5s
Apr 17 01:39:31.031558 dbus-daemon[1548]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 01:39:31.041215 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 01:39:31.078005 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 01:39:31.078310 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 01:39:31.087572 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 01:39:31.089366 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 01:39:31.095003 bash[1658]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 01:39:31.104557 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 01:39:31.118107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 01:39:31.147287 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 01:39:31.368085 locksmithd[1659]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 01:39:31.990473 containerd[1607]: time="2026-04-17T01:39:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 17 01:39:32.014866 containerd[1607]: time="2026-04-17T01:39:32.014096924Z" level=info msg="starting containerd" revision=1c4457e00facac03ce1d75f7b6777a7a851e5c41 version=v2.2.0
Apr 17 01:39:32.015463 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 01:39:32.046713 containerd[1607]: time="2026-04-17T01:39:32.046322547Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.761µs"
Apr 17 01:39:32.046713 containerd[1607]: time="2026-04-17T01:39:32.046440901Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 17 01:39:32.046713 containerd[1607]: time="2026-04-17T01:39:32.046719037Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 17 01:39:32.046713 containerd[1607]: time="2026-04-17T01:39:32.046751149Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 17 01:39:32.104804 containerd[1607]: time="2026-04-17T01:39:32.103986669Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 17 01:39:32.104804 containerd[1607]: time="2026-04-17T01:39:32.104185879Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1
Apr 17 01:39:32.104804 containerd[1607]: time="2026-04-17T01:39:32.104206083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 01:39:32.104804 containerd[1607]: time="2026-04-17T01:39:32.104565533Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 01:39:32.104804 containerd[1607]: time="2026-04-17T01:39:32.104577798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 01:39:32.104956 containerd[1607]: time="2026-04-17T01:39:32.104895679Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 01:39:32.104956 containerd[1607]: time="2026-04-17T01:39:32.104913661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 01:39:32.104956 containerd[1607]: time="2026-04-17T01:39:32.104926693Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 01:39:32.104956 containerd[1607]: time="2026-04-17T01:39:32.104936205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 17 01:39:32.111758 containerd[1607]: time="2026-04-17T01:39:32.111041466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 17 01:39:32.117517 containerd[1607]: time="2026-04-17T01:39:32.116067683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 17 01:39:32.124972 containerd[1607]: time="2026-04-17T01:39:32.124698453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 01:39:32.124972 containerd[1607]: time="2026-04-17T01:39:32.124801001Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 01:39:32.124972 containerd[1607]: time="2026-04-17T01:39:32.124812291Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 17 01:39:32.129416 containerd[1607]: time="2026-04-17T01:39:32.126928066Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 17 01:39:32.129416 containerd[1607]: time="2026-04-17T01:39:32.129100200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 17 01:39:32.132132 containerd[1607]: time="2026-04-17T01:39:32.130730935Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144587481Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144724797Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144760011Z" level=info msg="built-in NRI default validator is disabled"
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144774300Z" level=info msg="runtime interface created"
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144779087Z" level=info msg="created NRI interface"
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144786743Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144883742Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144894240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144904228Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.144912411Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.145037169Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.145054703Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.145063075Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 17 01:39:32.147157 containerd[1607]: time="2026-04-17T01:39:32.145072331Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.145081325Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.145090928Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.145114787Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.145124231Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.145133449Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153344248Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153449125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153462468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153480361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153491686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153504047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153514428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153522271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153532416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1
Apr 17 01:39:32.160033 containerd[1607]: time="2026-04-17T01:39:32.153545218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 17 01:39:32.161559 containerd[1607]: time="2026-04-17T01:39:32.153558049Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 17 01:39:32.161559 containerd[1607]: time="2026-04-17T01:39:32.153568164Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 17 01:39:32.161559 containerd[1607]: time="2026-04-17T01:39:32.156295387Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 17 01:39:32.161559 containerd[1607]: time="2026-04-17T01:39:32.156382923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 17 01:39:32.161559 containerd[1607]: time="2026-04-17T01:39:32.156424876Z" level=info msg="Start snapshots syncer"
Apr 17 01:39:32.161559 containerd[1607]: time="2026-04-17T01:39:32.156474413Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 17 01:39:32.162462 containerd[1607]: time="2026-04-17T01:39:32.156951535Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 17 01:39:32.162462 containerd[1607]: time="2026-04-17T01:39:32.157012002Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 17 01:39:32.166572 containerd[1607]: 
time="2026-04-17T01:39:32.157198486Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.158320181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159644855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159748092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159756922Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159767893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159784552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159800774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159817602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.159826043Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.164508006Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 01:39:32.166572 containerd[1607]: 
time="2026-04-17T01:39:32.164558679Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 01:39:32.166572 containerd[1607]: time="2026-04-17T01:39:32.164566324Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 01:39:32.166841 containerd[1607]: time="2026-04-17T01:39:32.164575591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 01:39:32.166841 containerd[1607]: time="2026-04-17T01:39:32.164582179Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 17 01:39:32.166841 containerd[1607]: time="2026-04-17T01:39:32.165453074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 17 01:39:32.168832 containerd[1607]: time="2026-04-17T01:39:32.168309195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 01:39:32.168832 containerd[1607]: time="2026-04-17T01:39:32.168367032Z" level=info msg="Connect containerd service" Apr 17 01:39:32.168832 containerd[1607]: time="2026-04-17T01:39:32.168435732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 01:39:32.174787 containerd[1607]: time="2026-04-17T01:39:32.174703736Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 01:39:32.176860 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 01:39:32.192091 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 17 01:39:32.368883 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 01:39:32.376454 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 01:39:32.405298 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 01:39:32.710989 tar[1605]: linux-amd64/README.md Apr 17 01:39:32.777265 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 01:39:33.696934 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 01:39:33.780197 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 01:39:33.791438 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 01:39:33.800507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 01:39:34.373776 containerd[1607]: time="2026-04-17T01:39:34.373324713Z" level=info msg="Start subscribing containerd event" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.375799602Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377020926Z" level=info msg="Start recovering state" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377436305Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377666399Z" level=info msg="Start event monitor" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377698734Z" level=info msg="Start cni network conf syncer for default" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377710491Z" level=info msg="Start streaming server" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377725210Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377736253Z" level=info msg="runtime interface starting up..." 
Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377743946Z" level=info msg="starting plugins..." Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.377776999Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 01:39:34.414333 containerd[1607]: time="2026-04-17T01:39:34.380953844Z" level=info msg="containerd successfully booted in 2.395408s" Apr 17 01:39:34.380952 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 01:39:35.625637 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 01:39:35.912106 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). Apr 17 01:39:39.018012 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:39:39.320054 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:39:39.681165 systemd-logind[1580]: New session '1' of user 'core' with class 'user' and type 'tty'. Apr 17 01:39:39.690315 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 01:39:39.745684 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 01:39:39.808396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:39:39.809867 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 01:39:39.861346 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:39:40.014881 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 01:39:40.224653 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 17 01:39:40.882311 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:39:41.075793 systemd-logind[1580]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'. Apr 17 01:39:52.013366 kubelet[1722]: E0417 01:39:52.007774 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:39:52.095345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:39:52.098732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:39:52.158967 systemd[1]: kubelet.service: Consumed 11.223s CPU time, 259.9M memory peak. Apr 17 01:40:01.102515 systemd[1725]: Queued start job for default target default.target. Apr 17 01:40:01.935467 systemd[1725]: Created slice app.slice - User Application Slice. Apr 17 01:40:01.942479 systemd[1725]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 17 01:40:01.948454 systemd[1725]: Reached target machines.target - Virtual Machines and Containers. Apr 17 01:40:01.951996 systemd[1725]: Reached target paths.target - Paths. Apr 17 01:40:01.952073 systemd[1725]: Reached target timers.target - Timers. Apr 17 01:40:01.961267 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 01:40:02.017283 systemd[1725]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 17 01:40:02.097500 systemd[1725]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 17 01:40:02.218943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 17 01:40:02.375581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:40:02.381926 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 01:40:02.382965 systemd[1725]: Reached target sockets.target - Sockets. Apr 17 01:40:02.570831 systemd[1725]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Apr 17 01:40:02.570973 systemd[1725]: Reached target basic.target - Basic System. Apr 17 01:40:02.571032 systemd[1725]: Reached target default.target - Main User Target. Apr 17 01:40:02.571063 systemd[1725]: Startup finished in 21.093s. Apr 17 01:40:02.574881 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 01:40:02.748335 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 01:40:03.186044 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:57244.service - OpenSSH per-connection server daemon (10.0.0.1:57244). Apr 17 01:40:04.932828 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 57244 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:40:05.032925 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:40:05.413463 systemd-logind[1580]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 17 01:40:05.511726 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 01:40:05.899404 sshd[1756]: Connection closed by 10.0.0.1 port 57244 Apr 17 01:40:05.905920 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Apr 17 01:40:06.022387 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:57244.service: Deactivated successfully. Apr 17 01:40:06.183173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:40:06.237241 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 01:40:06.305549 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. 
Apr 17 01:40:06.340123 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:40:06.405178 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:57248.service - OpenSSH per-connection server daemon (10.0.0.1:57248). Apr 17 01:40:06.409885 systemd[1]: Startup finished in 29.776s (kernel) + 2min 4.278s (initrd) + 1min 31.061s (userspace) = 4min 5.116s. Apr 17 01:40:06.411661 systemd-logind[1580]: Removed session 3. Apr 17 01:40:06.908090 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 57248 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:40:06.909737 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:40:06.941872 kubelet[1762]: E0417 01:40:06.941757 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:40:06.942286 systemd-logind[1580]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 17 01:40:06.994934 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 01:40:07.017234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:40:07.017378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:40:07.017895 systemd[1]: kubelet.service: Consumed 2.333s CPU time, 110.7M memory peak. Apr 17 01:40:07.593852 sshd[1777]: Connection closed by 10.0.0.1 port 57248 Apr 17 01:40:07.603927 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Apr 17 01:40:07.739585 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:57248.service: Deactivated successfully. 
Apr 17 01:40:07.792366 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 01:40:07.807086 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Apr 17 01:40:07.885815 systemd-logind[1580]: Removed session 4. Apr 17 01:40:16.336147 update_engine[1582]: I20260417 01:40:16.303241 1582 update_attempter.cc:509] Updating boot flags... Apr 17 01:40:17.187907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 01:40:17.193762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:40:18.081988 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:45646.service - OpenSSH per-connection server daemon (10.0.0.1:45646). Apr 17 01:40:18.517814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:40:18.622501 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:40:18.799386 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 45646 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:40:18.807587 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:40:19.258543 systemd-logind[1580]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 17 01:40:19.284389 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 01:40:19.909312 sshd[1820]: Connection closed by 10.0.0.1 port 45646 Apr 17 01:40:19.910010 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Apr 17 01:40:20.008078 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:45646.service: Deactivated successfully. Apr 17 01:40:20.124644 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 01:40:20.128248 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. 
Apr 17 01:40:20.201097 kubelet[1811]: E0417 01:40:20.199523 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:40:20.232761 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:47776.service - OpenSSH per-connection server daemon (10.0.0.1:47776). Apr 17 01:40:20.237204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:40:20.239889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:40:20.240639 systemd[1]: kubelet.service: Consumed 1.714s CPU time, 110.2M memory peak. Apr 17 01:40:20.272984 systemd-logind[1580]: Removed session 5. Apr 17 01:40:21.622212 sshd[1827]: Accepted publickey for core from 10.0.0.1 port 47776 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:40:21.792076 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:40:21.919420 systemd-logind[1580]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 17 01:40:22.025331 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 01:40:22.619012 sshd[1833]: Connection closed by 10.0.0.1 port 47776 Apr 17 01:40:22.623397 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Apr 17 01:40:22.804567 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:47776.service: Deactivated successfully. Apr 17 01:40:22.817541 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 01:40:22.843399 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Apr 17 01:40:22.848422 systemd-logind[1580]: Removed session 6. 
Apr 17 01:40:22.880782 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:47792.service - OpenSSH per-connection server daemon (10.0.0.1:47792). Apr 17 01:40:24.858275 sshd[1839]: Accepted publickey for core from 10.0.0.1 port 47792 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:40:24.882997 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:40:25.137508 systemd-logind[1580]: New session '7' of user 'core' with class 'user' and type 'tty'. Apr 17 01:40:25.171372 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 01:40:25.741957 sshd[1843]: Connection closed by 10.0.0.1 port 47792 Apr 17 01:40:25.808506 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Apr 17 01:40:25.924677 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:47792.service: Deactivated successfully. Apr 17 01:40:26.069308 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 01:40:26.269509 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Apr 17 01:40:26.293667 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:47798.service - OpenSSH per-connection server daemon (10.0.0.1:47798). Apr 17 01:40:26.390413 systemd-logind[1580]: Removed session 7. Apr 17 01:40:30.532961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 17 01:40:30.971825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:40:31.388099 sshd[1849]: Accepted publickey for core from 10.0.0.1 port 47798 ssh2: RSA SHA256:i7kqhNuKKiTbKjuAgjCyuTyCHwdhXA5tMqOnuOgefVQ Apr 17 01:40:31.446283 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 01:40:31.953830 systemd-logind[1580]: New session '8' of user 'core' with class 'user' and type 'tty'. Apr 17 01:40:32.186543 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 01:40:33.078662 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 01:40:33.085469 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 01:40:35.201779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:40:35.394574 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:40:40.475082 kubelet[1872]: E0417 01:40:40.471442 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:40:40.586995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:40:40.587169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:40:40.592271 systemd[1]: kubelet.service: Consumed 5.486s CPU time, 110.4M memory peak. Apr 17 01:40:51.824152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 17 01:40:52.188934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:41:02.119494 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 01:41:02.240466 (dockerd)[1895]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 01:41:03.346562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 01:41:03.716016 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:41:14.533242 kubelet[1901]: E0417 01:41:14.532773 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:41:14.595003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:41:14.681335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:41:14.714222 systemd[1]: kubelet.service: Consumed 12.242s CPU time, 110.4M memory peak. Apr 17 01:41:24.259034 dockerd[1895]: time="2026-04-17T01:41:24.257374646Z" level=info msg="Starting up" Apr 17 01:41:24.485879 dockerd[1895]: time="2026-04-17T01:41:24.484152651Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 01:41:24.771160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 17 01:41:25.141901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:41:25.805175 dockerd[1895]: time="2026-04-17T01:41:25.767772949Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 01:41:29.289522 systemd[1]: var-lib-docker-metacopy\x2dcheck1464206898-merged.mount: Deactivated successfully. Apr 17 01:41:30.901292 dockerd[1895]: time="2026-04-17T01:41:30.895489439Z" level=info msg="Loading containers: start." Apr 17 01:41:31.100543 kernel: Initializing XFRM netlink socket Apr 17 01:41:31.508747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 01:41:31.801030 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:41:37.245781 kubelet[1948]: E0417 01:41:37.244704 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:41:37.289522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:41:37.290932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:41:37.323917 systemd[1]: kubelet.service: Consumed 7.295s CPU time, 111M memory peak. Apr 17 01:41:47.518826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 17 01:41:47.556378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:41:48.381960 systemd-networkd[1411]: docker0: Link UP Apr 17 01:41:48.701083 dockerd[1895]: time="2026-04-17T01:41:48.698923396Z" level=info msg="Loading containers: done." 
Apr 17 01:41:49.209014 dockerd[1895]: time="2026-04-17T01:41:49.207031168Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 01:41:49.217489 dockerd[1895]: time="2026-04-17T01:41:49.212173537Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 17 01:41:49.217489 dockerd[1895]: time="2026-04-17T01:41:49.216718510Z" level=info msg="Initializing buildkit" Apr 17 01:41:49.405179 dockerd[1895]: time="2026-04-17T01:41:49.404447402Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 17 01:41:49.405179 dockerd[1895]: time="2026-04-17T01:41:49.404644349Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 17 01:41:50.524525 dockerd[1895]: time="2026-04-17T01:41:50.522935727Z" level=info msg="Completed buildkit initialization" Apr 17 01:41:50.841460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:41:50.982582 dockerd[1895]: time="2026-04-17T01:41:50.982260705Z" level=info msg="Daemon has completed initialization" Apr 17 01:41:50.982862 dockerd[1895]: time="2026-04-17T01:41:50.982711858Z" level=info msg="API listen on /run/docker.sock" Apr 17 01:41:50.983161 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 17 01:41:51.030082 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:41:52.886044 kubelet[2143]: E0417 01:41:52.885419 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:41:52.940147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:41:52.940290 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:41:52.940926 systemd[1]: kubelet.service: Consumed 3.228s CPU time, 112.3M memory peak. Apr 17 01:42:03.315490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 17 01:42:03.480000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:42:09.961985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:42:10.109407 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:42:13.541253 kubelet[2174]: E0417 01:42:13.534731 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:42:13.613662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:42:13.615821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:42:13.680503 systemd[1]: kubelet.service: Consumed 5.341s CPU time, 110.3M memory peak. 
Apr 17 01:42:21.127485 containerd[1607]: time="2026-04-17T01:42:21.124310904Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\"" Apr 17 01:42:24.240423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 17 01:42:24.631102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:42:28.578139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283795745.mount: Deactivated successfully. Apr 17 01:42:29.271149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:42:29.527000 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 01:42:34.147400 kubelet[2208]: E0417 01:42:34.145307 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 01:42:34.186398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 01:42:34.191902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 01:42:34.194235 systemd[1]: kubelet.service: Consumed 5.849s CPU time, 110.6M memory peak. Apr 17 01:42:44.512572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 17 01:42:44.593040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:42:48.538584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 01:42:48.667984 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:42:55.472694 kubelet[2266]: E0417 01:42:55.471950 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:42:55.635312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:42:55.638961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:42:55.692312 systemd[1]: kubelet.service: Consumed 6.672s CPU time, 110.3M memory peak.
Apr 17 01:43:05.873017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 17 01:43:06.022503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:43:06.291939 containerd[1607]: time="2026-04-17T01:43:06.290392521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:06.294559 containerd[1607]: time="2026-04-17T01:43:06.292365128Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.4: active requests=0, bytes read=27064583"
Apr 17 01:43:06.403549 containerd[1607]: time="2026-04-17T01:43:06.401869527Z" level=info msg="ImageCreate event name:\"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:06.582075 containerd[1607]: time="2026-04-17T01:43:06.547441000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:06.602914 containerd[1607]: time="2026-04-17T01:43:06.596118242Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.4\" with image id \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f2b5a686d329b24ef4f4b057ddaf61e01874122d584e99c2a19d1e1714e4b7ae\", size \"27069180\" in 45.462336566s"
Apr 17 01:43:06.602914 containerd[1607]: time="2026-04-17T01:43:06.598874600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.4\" returns image reference \"sha256:580dc2bd813334b9ca30ac3a513b3577d055dd0bc8a7018a424b552afd7319f9\""
Apr 17 01:43:06.715560 containerd[1607]: time="2026-04-17T01:43:06.713301340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\""
Apr 17 01:43:13.594821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:43:13.761541 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:43:26.725019 containerd[1607]: time="2026-04-17T01:43:26.682045540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:26.725019 containerd[1607]: time="2026-04-17T01:43:26.720313616Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.4: active requests=0, bytes read=21155802"
Apr 17 01:43:26.812467 containerd[1607]: time="2026-04-17T01:43:26.811042917Z" level=info msg="ImageCreate event name:\"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:27.122046 containerd[1607]: time="2026-04-17T01:43:27.098868002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:27.193733 containerd[1607]: time="2026-04-17T01:43:27.193195216Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.4\" with image id \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b8f0ae8a1bddb70981f4999e63df7e59838b9b4ee27831831802317101164e1e\", size \"22820907\" in 20.478764363s"
Apr 17 01:43:27.193733 containerd[1607]: time="2026-04-17T01:43:27.193488617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.4\" returns image reference \"sha256:608737e269607b0d5c252a3296dc4fd80e7f2e90907f46ad5c8cf3e4f23c6d0d\""
Apr 17 01:43:27.247994 containerd[1607]: time="2026-04-17T01:43:27.214888775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\""
Apr 17 01:43:28.204155 kubelet[2289]: E0417 01:43:28.156189 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:43:28.284354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:43:28.299981 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:43:28.368119 systemd[1]: kubelet.service: Consumed 15.300s CPU time, 110.8M memory peak.
Apr 17 01:43:36.460544 containerd[1607]: time="2026-04-17T01:43:36.459520381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:36.473931 containerd[1607]: time="2026-04-17T01:43:36.467446452Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.4: active requests=0, bytes read=15724352"
Apr 17 01:43:36.487296 containerd[1607]: time="2026-04-17T01:43:36.477050456Z" level=info msg="ImageCreate event name:\"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:36.723973 containerd[1607]: time="2026-04-17T01:43:36.707170997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:43:36.801996 containerd[1607]: time="2026-04-17T01:43:36.800401544Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.4\" with image id \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5b0dcf6f7178b6bff5cbf59f2a695b13987181cb1610bfca63cad50b1df8f982\", size \"17384858\" in 9.585257664s"
Apr 17 01:43:36.801996 containerd[1607]: time="2026-04-17T01:43:36.801468351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.4\" returns image reference \"sha256:5ad88f27116a5809b6bdb7b410bc4c456e918bc25e96804201540fd30892e7aa\""
Apr 17 01:43:36.816136 containerd[1607]: time="2026-04-17T01:43:36.815505738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\""
Apr 17 01:43:38.890173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 17 01:43:38.942416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:43:42.173247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:43:42.470482 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:43:47.513312 kubelet[2312]: E0417 01:43:47.513061 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:43:47.542046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:43:47.545096 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:43:47.564729 systemd[1]: kubelet.service: Consumed 5.259s CPU time, 110.4M memory peak.
Apr 17 01:43:58.134102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 17 01:43:58.613694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:44:05.307016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:44:05.402061 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:44:11.715666 kubelet[2329]: E0417 01:44:11.709366 2329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:44:11.936469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:44:11.991389 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:44:12.017573 systemd[1]: kubelet.service: Consumed 7.938s CPU time, 110.4M memory peak.
Apr 17 01:44:22.179285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 17 01:44:23.025297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:44:27.355649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978700552.mount: Deactivated successfully.
Apr 17 01:44:29.116313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:44:29.400412 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:44:33.199300 kubelet[2358]: E0417 01:44:33.196580 2358 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:44:33.280413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:44:33.302346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:44:33.390272 systemd[1]: kubelet.service: Consumed 5.687s CPU time, 111.3M memory peak.
Apr 17 01:44:36.843105 containerd[1607]: time="2026-04-17T01:44:36.838012817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:44:36.929540 containerd[1607]: time="2026-04-17T01:44:36.846852376Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.4: active requests=1, bytes read=19411782"
Apr 17 01:44:36.929540 containerd[1607]: time="2026-04-17T01:44:36.912062383Z" level=info msg="ImageCreate event name:\"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:44:37.311351 containerd[1607]: time="2026-04-17T01:44:37.309429059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:44:37.380615 containerd[1607]: time="2026-04-17T01:44:37.378728617Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.4\" with image id \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:be6f624483c350da6022d54965ba5b01b35f067737959d7fb11d625f1d975045\", size \"25858928\" in 1m0.563149261s"
Apr 17 01:44:37.380615 containerd[1607]: time="2026-04-17T01:44:37.380520473Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.4\" returns image reference \"sha256:ccb613b010acadd9a69cf0ea80a60105c0d14106903c2572e2c6452f8615b3c7\""
Apr 17 01:44:37.505555 containerd[1607]: time="2026-04-17T01:44:37.505077626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 17 01:44:43.588169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 17 01:44:43.886256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:44:45.191556 systemd[1725]: Created slice background.slice - User Background Tasks Slice.
Apr 17 01:44:45.305762 systemd[1725]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Apr 17 01:44:46.160777 systemd[1725]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Apr 17 01:44:46.574045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005329263.mount: Deactivated successfully.
Apr 17 01:44:48.513538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:44:48.557742 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:44:51.368234 kubelet[2389]: E0417 01:44:51.366479 2389 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:44:51.374763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:44:51.377733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:44:51.383294 systemd[1]: kubelet.service: Consumed 4.668s CPU time, 110.7M memory peak.
Apr 17 01:45:02.606451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Apr 17 01:45:02.832642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:45:08.502759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:45:08.730755 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:45:14.116660 kubelet[2446]: E0417 01:45:14.116298 2446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:45:14.290754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:45:14.291440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:45:14.306583 systemd[1]: kubelet.service: Consumed 6.916s CPU time, 112.4M memory peak.
Apr 17 01:45:20.428273 containerd[1607]: time="2026-04-17T01:45:20.427395589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:45:20.435571 containerd[1607]: time="2026-04-17T01:45:20.431877485Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22377080"
Apr 17 01:45:20.458974 containerd[1607]: time="2026-04-17T01:45:20.458492225Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:45:20.470039 containerd[1607]: time="2026-04-17T01:45:20.469576496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:45:20.472658 containerd[1607]: time="2026-04-17T01:45:20.472568644Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 42.967320048s"
Apr 17 01:45:20.472977 containerd[1607]: time="2026-04-17T01:45:20.472694749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 17 01:45:20.486199 containerd[1607]: time="2026-04-17T01:45:20.484207457Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 01:45:24.133862 containerd[1607]: time="2026-04-17T01:45:24.129724607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 01:45:24.143769 containerd[1607]: time="2026-04-17T01:45:24.143469674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Apr 17 01:45:24.147368 containerd[1607]: time="2026-04-17T01:45:24.147134233Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 01:45:24.249988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223240053.mount: Deactivated successfully.
Apr 17 01:45:24.268337 containerd[1607]: time="2026-04-17T01:45:24.267652008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 01:45:24.308812 containerd[1607]: time="2026-04-17T01:45:24.299115702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 3.803343036s"
Apr 17 01:45:24.315666 containerd[1607]: time="2026-04-17T01:45:24.306577665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 01:45:24.317132 containerd[1607]: time="2026-04-17T01:45:24.317102044Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 17 01:45:24.646254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
Apr 17 01:45:24.781177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:45:30.069703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973435349.mount: Deactivated successfully.
Apr 17 01:45:30.189233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:45:30.400471 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:45:35.189221 kubelet[2471]: E0417 01:45:35.186043 2471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:45:35.412576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:45:35.431942 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:45:35.492333 systemd[1]: kubelet.service: Consumed 6.156s CPU time, 110.3M memory peak.
Apr 17 01:45:46.425558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
Apr 17 01:45:47.351201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:45:56.743210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:45:57.047928 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:46:07.709342 kubelet[2497]: E0417 01:46:07.706273 2497 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:46:07.918347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:46:07.961832 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:46:08.080540 systemd[1]: kubelet.service: Consumed 12.125s CPU time, 110.4M memory peak.
Apr 17 01:46:18.261031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
Apr 17 01:46:18.274103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:46:21.263282 containerd[1607]: time="2026-04-17T01:46:21.258110101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:46:21.271902 containerd[1607]: time="2026-04-17T01:46:21.270785102Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22849743"
Apr 17 01:46:21.315451 containerd[1607]: time="2026-04-17T01:46:21.312447268Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:46:21.538000 containerd[1607]: time="2026-04-17T01:46:21.498349830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 01:46:21.596202 containerd[1607]: time="2026-04-17T01:46:21.593871871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 57.275131547s"
Apr 17 01:46:21.596202 containerd[1607]: time="2026-04-17T01:46:21.595492612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 17 01:46:25.011475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:46:25.178585 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:46:33.722245 kubelet[2572]: E0417 01:46:33.712370 2572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:46:33.894264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:46:34.015400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:46:34.177410 systemd[1]: kubelet.service: Consumed 8.358s CPU time, 110.9M memory peak.
Apr 17 01:46:44.742444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
Apr 17 01:46:45.589988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:46:53.077927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:46:53.387372 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:46:57.530113 kubelet[2592]: E0417 01:46:57.529911 2592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:46:57.708340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:46:57.739407 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:46:57.853691 systemd[1]: kubelet.service: Consumed 6.245s CPU time, 108M memory peak.
Apr 17 01:47:07.773357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
Apr 17 01:47:07.793277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:47:19.605952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:47:20.007330 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:47:33.838484 kubelet[2621]: E0417 01:47:33.835481 2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:47:33.880106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:47:33.880206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:47:33.891045 systemd[1]: kubelet.service: Consumed 13.883s CPU time, 110.8M memory peak.
Apr 17 01:47:36.403232 update_engine[1582]: I20260417 01:47:36.398486 1582 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 17 01:47:36.409186 update_engine[1582]: I20260417 01:47:36.404284 1582 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 17 01:47:36.409186 update_engine[1582]: I20260417 01:47:36.406584 1582 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 17 01:47:36.520003 update_engine[1582]: I20260417 01:47:36.517395 1582 omaha_request_params.cc:62] Current group set to alpha
Apr 17 01:47:36.528508 update_engine[1582]: I20260417 01:47:36.525394 1582 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 17 01:47:36.528508 update_engine[1582]: I20260417 01:47:36.526852 1582 update_attempter.cc:643] Scheduling an action processor start.
Apr 17 01:47:36.529010 update_engine[1582]: I20260417 01:47:36.528539 1582 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 17 01:47:36.531464 update_engine[1582]: I20260417 01:47:36.529132 1582 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 17 01:47:36.531464 update_engine[1582]: I20260417 01:47:36.529372 1582 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 17 01:47:36.531464 update_engine[1582]: I20260417 01:47:36.529382 1582 omaha_request_action.cc:272] Request:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]:
Apr 17 01:47:36.531464 update_engine[1582]: I20260417 01:47:36.529388 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 01:47:36.573381 update_engine[1582]: I20260417 01:47:36.565515 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 01:47:36.573466 locksmithd[1659]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 17 01:47:36.591270 update_engine[1582]: I20260417 01:47:36.586011 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 01:47:36.608354 update_engine[1582]: E20260417 01:47:36.607388 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 17 01:47:36.608354 update_engine[1582]: I20260417 01:47:36.608217 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 17 01:47:44.333332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21.
Apr 17 01:47:44.575536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:47:47.323002 update_engine[1582]: I20260417 01:47:47.304313 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 01:47:47.323002 update_engine[1582]: I20260417 01:47:47.314021 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 01:47:47.381374 update_engine[1582]: I20260417 01:47:47.353426 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 01:47:47.381374 update_engine[1582]: E20260417 01:47:47.375193 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 17 01:47:47.381374 update_engine[1582]: I20260417 01:47:47.379289 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 17 01:47:52.622151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:47:53.041244 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 01:47:57.307943 update_engine[1582]: I20260417 01:47:57.303498 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 01:47:57.413298 update_engine[1582]: I20260417 01:47:57.308440 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 01:47:57.414379 update_engine[1582]: I20260417 01:47:57.414054 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 01:47:57.437406 update_engine[1582]: E20260417 01:47:57.432459 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 17 01:47:57.477628 update_engine[1582]: I20260417 01:47:57.441310 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 17 01:47:59.786244 kubelet[2637]: E0417 01:47:59.784720 2637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 01:47:59.833493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 01:47:59.896849 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 01:48:00.064041 systemd[1]: kubelet.service: Consumed 7.928s CPU time, 110.2M memory peak.
Apr 17 01:48:07.323670 update_engine[1582]: I20260417 01:48:07.319418 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 01:48:07.331625 update_engine[1582]: I20260417 01:48:07.324139 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 01:48:07.331625 update_engine[1582]: I20260417 01:48:07.328766 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 01:48:07.337729 update_engine[1582]: E20260417 01:48:07.336344 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 17 01:48:07.340121 update_engine[1582]: I20260417 01:48:07.338129 1582 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 17 01:48:07.340121 update_engine[1582]: I20260417 01:48:07.338184 1582 omaha_request_action.cc:617] Omaha request response:
Apr 17 01:48:07.340121 update_engine[1582]: E20260417 01:48:07.339986 1582 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.340210 1582 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.340215 1582 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.340219 1582 update_attempter.cc:306] Processing Done.
Apr 17 01:48:07.343220 update_engine[1582]: E20260417 01:48:07.340339 1582 update_attempter.cc:619] Update failed.
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.340344 1582 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.340348 1582 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.340353 1582 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 17 01:48:07.343220 update_engine[1582]: I20260417 01:48:07.342839 1582 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 17 01:48:07.345785 update_engine[1582]: I20260417 01:48:07.343261 1582 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 17 01:48:07.345785 update_engine[1582]: I20260417 01:48:07.343271 1582 omaha_request_action.cc:272] Request:
Apr 17 01:48:07.345785 update_engine[1582]:
Apr 17 01:48:07.345785 update_engine[1582]:
Apr 17 01:48:07.345785 update_engine[1582]:
Apr 17 01:48:07.345785 update_engine[1582]:
Apr 17 01:48:07.345785 update_engine[1582]:
Apr 17 01:48:07.345785 update_engine[1582]:
Apr 17 01:48:07.345785 update_engine[1582]: I20260417 01:48:07.343277 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 17 01:48:07.345785 update_engine[1582]: I20260417 01:48:07.343429 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 17 01:48:07.346006 locksmithd[1659]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 17 01:48:07.352726 update_engine[1582]: I20260417 01:48:07.346083 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 17 01:48:07.367194 update_engine[1582]: E20260417 01:48:07.364431 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 17 01:48:07.368053 update_engine[1582]: I20260417 01:48:07.367967 1582 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 17 01:48:07.368270 update_engine[1582]: I20260417 01:48:07.368038 1582 omaha_request_action.cc:617] Omaha request response:
Apr 17 01:48:07.368270 update_engine[1582]: I20260417 01:48:07.368069 1582 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 01:48:07.368270 update_engine[1582]: I20260417 01:48:07.368074 1582 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 01:48:07.368270 update_engine[1582]: I20260417 01:48:07.368079 1582 update_attempter.cc:306] Processing Done.
Apr 17 01:48:07.368270 update_engine[1582]: I20260417 01:48:07.368085 1582 update_attempter.cc:310] Error event sent.
Apr 17 01:48:07.368270 update_engine[1582]: I20260417 01:48:07.368098 1582 update_check_scheduler.cc:74] Next update check in 45m1s
Apr 17 01:48:07.370156 locksmithd[1659]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 17 01:48:10.085222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22.
Apr 17 01:48:10.107264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:48:13.314312 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 01:48:13.314422 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 01:48:13.315114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:48:13.315349 systemd[1]: kubelet.service: Consumed 1.725s CPU time, 68.9M memory peak.
Apr 17 01:48:14.319140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:48:16.780758 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-8.scope)...
Apr 17 01:48:16.780887 systemd[1]: Reloading...
Apr 17 01:48:25.926792 zram_generator::config[2713]: No configuration found.
Apr 17 01:48:30.773224 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored
Apr 17 01:48:53.483963 systemd[1]: Reloading finished in 36692 ms.
Apr 17 01:49:14.861749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:49:15.416539 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 01:49:17.035994 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:49:17.412493 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 01:49:17.492625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:49:17.519082 systemd[1]: kubelet.service: Consumed 4.400s CPU time, 103.5M memory peak.
Apr 17 01:49:18.602864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 01:49:36.349269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 01:49:36.588976 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 01:50:09.467573 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 01:50:09.752092 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 01:50:09.752092 kubelet[2777]: I0417 01:50:09.504481 2777 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 01:50:40.501274 kubelet[2777]: I0417 01:50:40.498437 2777 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 01:50:40.501274 kubelet[2777]: I0417 01:50:40.501059 2777 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 01:50:40.518400 kubelet[2777]: I0417 01:50:40.503926 2777 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 01:50:40.518400 kubelet[2777]: I0417 01:50:40.504537 2777 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 01:50:40.518400 kubelet[2777]: I0417 01:50:40.517983 2777 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 01:50:42.611294 kubelet[2777]: E0417 01:50:42.585773 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 01:50:43.077520 kubelet[2777]: I0417 01:50:43.075376 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 01:50:45.174108 kubelet[2777]: E0417 01:50:45.171300 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 01:50:45.698197 kubelet[2777]: I0417 01:50:45.621290 2777 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 17 01:50:49.936213 kubelet[2777]: E0417 01:50:49.935125 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 01:50:51.206059 kubelet[2777]: I0417 01:50:51.143485 2777 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 01:50:51.252125 kubelet[2777]: I0417 01:50:51.243034 2777 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 01:50:51.331300 kubelet[2777]: I0417 01:50:51.254559 2777 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 01:50:51.331300 kubelet[2777]: I0417 01:50:51.333115 2777 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 01:50:51.373423 kubelet[2777]: I0417 01:50:51.344587 2777 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 01:50:51.373423 kubelet[2777]: I0417 01:50:51.368919 2777 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 01:50:51.603330 kubelet[2777]: I0417 01:50:51.582751 2777 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 01:50:51.839297 kubelet[2777]: I0417 01:50:51.823365 2777 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 01:50:51.909520 kubelet[2777]: I0417 01:50:51.869988 2777 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 01:50:51.909520 kubelet[2777]: I0417 01:50:51.875168 2777 kubelet.go:387] "Adding apiserver pod source"
Apr 17 01:50:51.909520 kubelet[2777]: I0417 01:50:51.881213 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 01:50:52.272735 kubelet[2777]: E0417 01:50:52.272429 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 01:50:52.419035 kubelet[2777]: E0417 01:50:52.273197 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 01:50:52.506974 kubelet[2777]: I0417 01:50:52.502223 2777 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.0" apiVersion="v1"
Apr 17 01:50:52.710014 kubelet[2777]: I0417 01:50:52.697089 2777 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 01:50:52.728113 kubelet[2777]: I0417 01:50:52.720758 2777 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 01:50:52.743065 kubelet[2777]: W0417 01:50:52.742393 2777 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 01:50:53.712536 kubelet[2777]: E0417 01:50:53.709856 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 01:50:53.815962 kubelet[2777]: E0417 01:50:53.729561 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 01:50:54.013508 kubelet[2777]: I0417 01:50:53.999530 2777 server.go:1262] "Started kubelet"
Apr 17 01:50:54.435023 kubelet[2777]: I0417 01:50:54.419072 2777 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 01:50:54.435023 kubelet[2777]: I0417 01:50:54.418185 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 01:50:54.486860 kubelet[2777]: I0417 01:50:54.485770 2777 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 01:50:54.869848 kubelet[2777]: I0417 01:50:54.864872 2777 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 01:50:55.014220 kubelet[2777]: E0417 01:50:54.978322 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:50:55.142576 kubelet[2777]: I0417 01:50:55.135396 2777 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 01:50:55.272418 kubelet[2777]: E0417 01:50:55.268411 2777 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 01:50:55.439200 kubelet[2777]: I0417 01:50:55.435847 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 01:50:55.446057 kubelet[2777]: I0417 01:50:55.442013 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 01:50:55.543349 kubelet[2777]: I0417 01:50:55.541916 2777 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 01:50:55.557669 kubelet[2777]: E0417 01:50:55.557091 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:55.625114 kubelet[2777]: I0417 01:50:55.624493 2777 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 01:50:55.647491 kubelet[2777]: I0417 01:50:55.646288 2777 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 01:50:55.698326 kubelet[2777]: E0417 01:50:55.687055 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:55.878112 kubelet[2777]: E0417 01:50:55.776576 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms"
Apr 17 01:50:55.944492 kubelet[2777]: E0417 01:50:55.881386 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:55.944492 kubelet[2777]: E0417 01:50:55.923912 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 01:50:55.987532 kubelet[2777]: I0417 01:50:55.948119 2777 factory.go:223] Registration of the systemd container factory successfully
Apr 17 01:50:56.093451 kubelet[2777]: E0417 01:50:56.044046 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:56.123231 kubelet[2777]: I0417 01:50:56.094172 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 01:50:56.264128 kubelet[2777]: E0417 01:50:56.252985 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:56.313348 kubelet[2777]: E0417 01:50:56.307513 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 01:50:56.335495 kubelet[2777]: E0417 01:50:56.327444 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms"
Apr 17 01:50:56.395454 kubelet[2777]: E0417 01:50:56.394106 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:56.421116 kubelet[2777]: W0417 01:50:56.417566 2777 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: failed to write client preface: write unix @->/run/containerd/containerd.sock: use of closed network connection"
Apr 17 01:50:56.503425 kubelet[2777]: E0417 01:50:56.502621 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:56.625783 kubelet[2777]: E0417 01:50:56.615483 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:56.626312 kubelet[2777]: E0417 01:50:56.601754 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 01:50:56.746363 kubelet[2777]: E0417 01:50:56.745392 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:56.847270 kubelet[2777]: I0417 01:50:56.842540 2777 factory.go:223] Registration of the containerd container factory successfully
Apr 17 01:50:56.926474 kubelet[2777]: E0417 01:50:56.914992 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms"
Apr 17 01:50:57.039024 kubelet[2777]: E0417 01:50:56.919120 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:57.223883 kubelet[2777]: E0417 01:50:57.214793 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:57.311794 kubelet[2777]: E0417 01:50:57.311118 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 01:50:57.495798 kubelet[2777]: E0417 01:50:57.421363 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:57.667545 kubelet[2777]: E0417 01:50:57.649004 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:57.808438 kubelet[2777]: E0417 01:50:57.785967 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:57.909574 kubelet[2777]: E0417 01:50:57.908536 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s"
Apr 17 01:50:57.994082 kubelet[2777]: E0417 01:50:57.980990 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:58.142024 kubelet[2777]: E0417 01:50:58.109273 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:58.282992 kubelet[2777]: E0417 01:50:58.277970 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:58.488949 kubelet[2777]: E0417 01:50:58.488550 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:58.605708 kubelet[2777]: E0417 01:50:58.592557 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:58.832316 kubelet[2777]: E0417 01:50:58.760847 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:58.872233 kubelet[2777]: I0417 01:50:58.842584 2777 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 01:50:58.989887 kubelet[2777]: E0417 01:50:58.980898 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:59.115805 kubelet[2777]: I0417 01:50:59.103991 2777 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 01:50:59.224132 kubelet[2777]: I0417 01:50:59.127257 2777 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 01:50:59.315924 kubelet[2777]: E0417 01:50:59.107918 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:59.372888 kubelet[2777]: E0417 01:50:59.368890 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 01:50:59.372888 kubelet[2777]: I0417 01:50:59.309970 2777 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 01:50:59.448136 kubelet[2777]: E0417 01:50:59.445963 2777 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 01:50:59.448136 kubelet[2777]: E0417 01:50:59.493016 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:59.616817 kubelet[2777]: E0417 01:50:59.616343 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:59.672396 kubelet[2777]: E0417 01:50:59.618803 2777 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 01:50:59.796325 kubelet[2777]: E0417 01:50:59.789073 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="3.2s"
Apr 17 01:50:59.809368 kubelet[2777]: E0417 01:50:59.808205 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:59.845292 kubelet[2777]: E0417 01:50:59.841650 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 01:50:59.924268 kubelet[2777]: E0417 01:50:59.918131 2777 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 17 01:50:59.937245 kubelet[2777]: E0417 01:50:59.925031 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:50:59.985443 kubelet[2777]: E0417 01:50:59.936204 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 01:51:00.053320 kubelet[2777]: E0417 01:51:00.049748 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.120182 kubelet[2777]: E0417 01:51:00.092114 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 01:51:00.223429 kubelet[2777]: E0417 01:51:00.187424 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.333713 kubelet[2777]: E0417 01:51:00.319527 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.384342 kubelet[2777]: E0417 01:51:00.337424 2777 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 17 01:51:00.456549 kubelet[2777]: E0417 01:51:00.452036 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.594557 kubelet[2777]: E0417 01:51:00.581069 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.692107 kubelet[2777]: E0417 01:51:00.690917 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.804119 kubelet[2777]: E0417 01:51:00.802870 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:00.940940 kubelet[2777]: E0417 01:51:00.930566 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.065980 kubelet[2777]: E0417 01:51:01.065388 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.212359 kubelet[2777]: E0417 01:51:01.200909 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.347756 kubelet[2777]: E0417 01:51:01.231587 2777 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 17 01:51:01.375081 kubelet[2777]: E0417 01:51:01.374787 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.477816 kubelet[2777]: E0417 01:51:01.466729 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 01:51:01.507069 kubelet[2777]: E0417 01:51:01.491018 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.584663 kubelet[2777]: E0417 01:51:01.550528 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 01:51:01.620258 kubelet[2777]: E0417 01:51:01.613625 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.737995 kubelet[2777]: E0417 01:51:01.728448 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:01.892278 kubelet[2777]: E0417 01:51:01.880133 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.007540 kubelet[2777]: E0417 01:51:01.998439 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.110396 kubelet[2777]: E0417 01:51:02.107961 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.233913 kubelet[2777]: E0417 01:51:02.232639 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.429482 kubelet[2777]: E0417 01:51:02.349983 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.540094 kubelet[2777]: E0417 01:51:02.526960 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.804926 kubelet[2777]: E0417 01:51:02.792253 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.917815 kubelet[2777]: E0417 01:51:02.913491 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:02.969551 kubelet[2777]: E0417 01:51:02.968819 2777 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 17 01:51:03.024205 kubelet[2777]: E0417 01:51:03.018675 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.216429 kubelet[2777]: E0417 01:51:03.192570 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.249383 kubelet[2777]: E0417 01:51:03.208900 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="6.4s"
Apr 17 01:51:03.375899 kubelet[2777]: E0417 01:51:03.313478 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.424964 kubelet[2777]: E0417 01:51:03.418348 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.519839 kubelet[2777]: I0417 01:51:03.512729 2777 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 01:51:03.519839 kubelet[2777]: I0417 01:51:03.516366 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 01:51:03.547895 kubelet[2777]: I0417 01:51:03.519947 2777 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 01:51:03.547895 kubelet[2777]: E0417 01:51:03.534023 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.643327 kubelet[2777]: E0417 01:51:03.641878 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.680014 kubelet[2777]: I0417 01:51:03.678241 2777 policy_none.go:49] "None policy: Start"
Apr 17 01:51:03.681017 kubelet[2777]: I0417 01:51:03.680951 2777 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 01:51:03.681081 kubelet[2777]: I0417 01:51:03.681035 2777 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 01:51:03.736496 kubelet[2777]: I0417 01:51:03.734571 2777 policy_none.go:47] "Start"
Apr 17 01:51:03.790094 kubelet[2777]: E0417 01:51:03.745553 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:03.897266 kubelet[2777]: E0417 01:51:03.890890 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:04.017029 kubelet[2777]: E0417 01:51:04.007577 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:04.151455 kubelet[2777]: E0417 01:51:04.133955 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:04.333105 kubelet[2777]: E0417 01:51:04.312440 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:51:04.508330 kubelet[2777]: E0417 01:51:04.492092 2777
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 01:51:04.508330 kubelet[2777]: E0417 01:51:04.492067 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:04.616190 kubelet[2777]: E0417 01:51:04.438987 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:51:04.776432 kubelet[2777]: E0417 01:51:04.765323 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:04.952823 kubelet[2777]: E0417 01:51:04.949755 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:05.141942 kubelet[2777]: E0417 01:51:05.108681 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:51:05.226834 kubelet[2777]: E0417 01:51:05.114747 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:05.142463 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 01:51:05.439475 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 17 01:51:05.509419 kubelet[2777]: E0417 01:51:05.372225 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:05.542910 kubelet[2777]: E0417 01:51:05.541897 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:05.693200 kubelet[2777]: E0417 01:51:05.672538 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:05.796391 kubelet[2777]: E0417 01:51:05.788092 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:05.934633 kubelet[2777]: E0417 01:51:05.906098 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:06.116045 kubelet[2777]: E0417 01:51:06.108744 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:06.321537 kubelet[2777]: E0417 01:51:06.318856 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:06.344976 kubelet[2777]: E0417 01:51:06.232564 2777 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 01:51:06.489351 kubelet[2777]: E0417 01:51:06.475472 2777 kubelet_node_status.go:404] 
"Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:06.933413 kubelet[2777]: E0417 01:51:06.807016 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:06.950480 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 01:51:07.226438 kubelet[2777]: E0417 01:51:07.199712 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:07.453760 kubelet[2777]: E0417 01:51:07.424487 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:07.578999 kubelet[2777]: E0417 01:51:07.575521 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:07.710578 kubelet[2777]: E0417 01:51:07.700511 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:07.904406 kubelet[2777]: E0417 01:51:07.847092 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:07.937022 systemd-tmpfiles[2820]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 17 01:51:07.976314 systemd-tmpfiles[2820]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 17 01:51:08.109816 systemd-tmpfiles[2820]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 01:51:08.273383 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 01:51:08.293152 kubelet[2777]: E0417 01:51:08.023336 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:08.398325 kubelet[2777]: E0417 01:51:08.395630 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:08.397136 systemd-tmpfiles[2820]: ACLs are not supported, ignoring. Apr 17 01:51:08.415580 systemd-tmpfiles[2820]: ACLs are not supported, ignoring. Apr 17 01:51:08.597484 kubelet[2777]: E0417 01:51:08.523109 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:08.904570 systemd-tmpfiles[2820]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 01:51:08.925092 systemd-tmpfiles[2820]: Skipping /boot Apr 17 01:51:08.940434 kubelet[2777]: E0417 01:51:08.844042 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:09.073943 kubelet[2777]: E0417 01:51:09.065027 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:09.215370 kubelet[2777]: E0417 01:51:09.199915 2777 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 01:51:09.265565 kubelet[2777]: E0417 01:51:09.221302 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:09.306517 kubelet[2777]: I0417 01:51:09.306089 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 01:51:09.352507 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 17 01:51:09.426834 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. 
Apr 17 01:51:09.438840 kubelet[2777]: E0417 01:51:09.428172 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:09.438840 kubelet[2777]: E0417 01:51:09.437002 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:51:09.439051 systemd[1]: systemd-tmpfiles-clean.service: Consumed 1.878s CPU time, 4.5M memory peak. Apr 17 01:51:09.475799 kubelet[2777]: I0417 01:51:09.437532 2777 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 01:51:09.553581 kubelet[2777]: E0417 01:51:09.553291 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 01:51:09.884061 kubelet[2777]: I0417 01:51:09.880679 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 01:51:10.013806 kubelet[2777]: E0417 01:51:10.011123 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:51:10.512562 kubelet[2777]: E0417 01:51:10.511640 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 01:51:10.557504 kubelet[2777]: E0417 01:51:10.553994 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:51:11.449024 kubelet[2777]: I0417 01:51:11.439404 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:11.633326 kubelet[2777]: I0417 01:51:11.606021 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 17 01:51:11.633326 kubelet[2777]: E0417 01:51:11.616308 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:11.994135 kubelet[2777]: I0417 01:51:11.993514 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:12.099436 kubelet[2777]: E0417 01:51:12.098625 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:12.516972 kubelet[2777]: I0417 01:51:12.504045 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fac3d71e98654620e15e49cc21797c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4fac3d71e98654620e15e49cc21797c2\") " pod="kube-system/kube-apiserver-localhost" Apr 17 01:51:12.566970 kubelet[2777]: I0417 01:51:12.562982 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fac3d71e98654620e15e49cc21797c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fac3d71e98654620e15e49cc21797c2\") " pod="kube-system/kube-apiserver-localhost" Apr 17 01:51:12.932905 kubelet[2777]: I0417 01:51:12.910049 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fac3d71e98654620e15e49cc21797c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fac3d71e98654620e15e49cc21797c2\") " pod="kube-system/kube-apiserver-localhost" Apr 17 01:51:13.110471 kubelet[2777]: E0417 01:51:13.109068 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 01:51:13.229492 kubelet[2777]: E0417 01:51:13.109284 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 01:51:13.471780 kubelet[2777]: I0417 01:51:13.471345 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:13.472651 kubelet[2777]: I0417 01:51:13.471425 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:51:13.472651 kubelet[2777]: I0417 01:51:13.472481 2777 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:51:13.472651 kubelet[2777]: I0417 01:51:13.472509 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:51:13.472651 kubelet[2777]: I0417 01:51:13.472534 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:51:13.472651 kubelet[2777]: I0417 01:51:13.472553 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:51:13.507406 kubelet[2777]: E0417 01:51:13.506863 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:14.445478 kubelet[2777]: E0417 01:51:14.444409 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 01:51:14.521025 kubelet[2777]: I0417 01:51:14.519402 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:14.829101 kubelet[2777]: E0417 01:51:14.776045 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:51:14.841792 kubelet[2777]: E0417 01:51:14.831638 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:15.927436 systemd[1]: Created slice kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice - libcontainer container kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice. 
Apr 17 01:51:16.507800 kubelet[2777]: E0417 01:51:16.505221 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:51:16.588085 kubelet[2777]: E0417 01:51:16.516894 2777 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:51:17.079506 kubelet[2777]: I0417 01:51:17.078247 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:17.098978 kubelet[2777]: E0417 01:51:17.094131 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:51:17.372563 kubelet[2777]: E0417 01:51:17.346996 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:18.021354 kubelet[2777]: E0417 01:51:18.016429 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:51:18.045465 systemd[1]: Created slice kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice - libcontainer container kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice. 
Apr 17 01:51:18.506527 kubelet[2777]: E0417 01:51:18.423493 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:51:19.377442 containerd[1607]: time="2026-04-17T01:51:19.377156512Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"66a243c17a59d09458bf3b09d66260f5\" namespace:\"kube-system\"" Apr 17 01:51:19.389040 kubelet[2777]: E0417 01:51:19.386739 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:51:19.523318 kubelet[2777]: E0417 01:51:19.523174 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:51:19.523397 systemd[1]: Created slice kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice - libcontainer container kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice. 
Apr 17 01:51:20.519881 kubelet[2777]: E0417 01:51:20.513523 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:51:21.099674 kubelet[2777]: E0417 01:51:20.977042 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:51:22.852559 containerd[1607]: time="2026-04-17T01:51:22.848362488Z" level=info msg="connecting to shim 2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9" address="unix:///run/containerd/s/37fa565e1f3a26166e72f3404aaa6af399b8d96545306b6a5af7e8ce01f4b5c9" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:51:23.650880 containerd[1607]: time="2026-04-17T01:51:23.633516555Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"4fac3d71e98654620e15e49cc21797c2\" namespace:\"kube-system\"" Apr 17 01:51:24.805112 kubelet[2777]: E0417 01:51:24.781151 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:51:25.025547 kubelet[2777]: I0417 01:51:24.980885 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:26.519238 kubelet[2777]: E0417 01:51:26.514046 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:51:26.837578 kubelet[2777]: E0417 01:51:26.748582 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:27.011278 kubelet[2777]: E0417 01:51:26.915348 2777 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:51:27.178239 containerd[1607]: time="2026-04-17T01:51:27.142331174Z" level=info msg="connecting to shim 41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401" address="unix:///run/containerd/s/4d81adec68a9442019a82acf762d954549e6ff5b60ec1e3e42dedeb839b6bd86" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:51:27.313461 kubelet[2777]: E0417 01:51:27.107521 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:51:27.628992 containerd[1607]: time="2026-04-17T01:51:27.608109422Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"82faa9ca0765979bc0118d46e6420ed8\" namespace:\"kube-system\"" Apr 17 01:51:27.869512 kubelet[2777]: E0417 01:51:27.737130 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 01:51:27.997369 systemd[1]: 
Started cri-containerd-2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9.scope - libcontainer container 2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9. Apr 17 01:51:31.345114 containerd[1607]: time="2026-04-17T01:51:31.345032503Z" level=info msg="connecting to shim da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" namespace=k8s.io protocol=ttrpc version=3 Apr 17 01:51:31.491274 kubelet[2777]: E0417 01:51:31.459468 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:51:31.578651 systemd[1]: Started cri-containerd-41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401.scope - libcontainer container 41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401. Apr 17 01:51:32.924332 kubelet[2777]: E0417 01:51:32.923654 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 01:51:33.919509 kubelet[2777]: I0417 01:51:33.917709 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:33.989941 kubelet[2777]: E0417 01:51:33.917585 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:51:34.521331 kubelet[2777]: E0417 01:51:34.414048 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: 
connection refused" node="localhost" Apr 17 01:51:34.609407 containerd[1607]: time="2026-04-17T01:51:34.603500225Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"66a243c17a59d09458bf3b09d66260f5\" namespace:\"kube-system\" returns sandbox id \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\"" Apr 17 01:51:35.281245 kubelet[2777]: E0417 01:51:35.280762 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:51:35.435249 systemd[1]: Started cri-containerd-da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181.scope - libcontainer container da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181. Apr 17 01:51:35.462175 kubelet[2777]: E0417 01:51:35.441063 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 01:51:35.772445 containerd[1607]: time="2026-04-17T01:51:35.745269103Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"4fac3d71e98654620e15e49cc21797c2\" namespace:\"kube-system\" returns sandbox id \"41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401\"" Apr 17 01:51:35.814535 kubelet[2777]: E0417 01:51:35.814243 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:51:35.924111 containerd[1607]: time="2026-04-17T01:51:35.910556880Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for container name:\"kube-scheduler\"" Apr 17 01:51:37.177572 
containerd[1607]: time="2026-04-17T01:51:37.168484745Z" level=info msg="CreateContainer within sandbox \"41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401\" for container name:\"kube-apiserver\"" Apr 17 01:51:37.411189 containerd[1607]: time="2026-04-17T01:51:37.409706067Z" level=info msg="Container fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:51:37.423485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284928650.mount: Deactivated successfully. Apr 17 01:51:37.597047 kubelet[2777]: E0417 01:51:37.586349 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:51:37.892227 kubelet[2777]: E0417 01:51:37.881581 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:51:37.908238 containerd[1607]: time="2026-04-17T01:51:37.908007294Z" level=info msg="CreateContainer within sandbox 
\"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for name:\"kube-scheduler\" returns container id \"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\"" Apr 17 01:51:38.023123 containerd[1607]: time="2026-04-17T01:51:38.022531663Z" level=info msg="StartContainer for \"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\"" Apr 17 01:51:38.030992 containerd[1607]: time="2026-04-17T01:51:38.028994705Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"82faa9ca0765979bc0118d46e6420ed8\" namespace:\"kube-system\" returns sandbox id \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\"" Apr 17 01:51:38.038416 containerd[1607]: time="2026-04-17T01:51:38.033501072Z" level=info msg="Container af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:51:38.077342 containerd[1607]: time="2026-04-17T01:51:38.077146094Z" level=info msg="connecting to shim fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9" address="unix:///run/containerd/s/37fa565e1f3a26166e72f3404aaa6af399b8d96545306b6a5af7e8ce01f4b5c9" protocol=ttrpc version=3 Apr 17 01:51:38.081667 kubelet[2777]: E0417 01:51:38.080906 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:51:38.353116 containerd[1607]: time="2026-04-17T01:51:38.353063010Z" level=info msg="CreateContainer within sandbox \"41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401\" for name:\"kube-apiserver\" returns container id \"af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae\"" Apr 17 01:51:38.354395 containerd[1607]: time="2026-04-17T01:51:38.354362111Z" level=info msg="StartContainer for \"af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae\"" Apr 17 01:51:38.355858 containerd[1607]: 
time="2026-04-17T01:51:38.355836763Z" level=info msg="connecting to shim af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae" address="unix:///run/containerd/s/4d81adec68a9442019a82acf762d954549e6ff5b60ec1e3e42dedeb839b6bd86" protocol=ttrpc version=3 Apr 17 01:51:38.424662 containerd[1607]: time="2026-04-17T01:51:38.379284088Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for container name:\"kube-controller-manager\"" Apr 17 01:51:39.842364 systemd[1]: Started cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope - libcontainer container fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9. Apr 17 01:51:40.670165 containerd[1607]: time="2026-04-17T01:51:40.669851554Z" level=info msg="Container 4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:51:41.147808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268765086.mount: Deactivated successfully. 
Apr 17 01:51:42.361986 containerd[1607]: time="2026-04-17T01:51:42.361246445Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for name:\"kube-controller-manager\" returns container id \"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\"" Apr 17 01:51:42.817040 containerd[1607]: time="2026-04-17T01:51:42.801365551Z" level=error msg="get state for fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9" error="context deadline exceeded" Apr 17 01:51:42.906872 containerd[1607]: time="2026-04-17T01:51:42.815158600Z" level=warning msg="unknown status" status=0 Apr 17 01:51:42.935186 kubelet[2777]: E0417 01:51:42.928388 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:51:43.627072 containerd[1607]: time="2026-04-17T01:51:43.625038003Z" level=info msg="StartContainer for \"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\"" Apr 17 01:51:43.649193 systemd[1]: Started cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope - libcontainer container af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae. 
Apr 17 01:51:43.940071 containerd[1607]: time="2026-04-17T01:51:43.904695435Z" level=info msg="connecting to shim 4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" protocol=ttrpc version=3 Apr 17 01:51:44.081002 kubelet[2777]: E0417 01:51:44.078920 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:51:45.340200 kubelet[2777]: I0417 01:51:45.338293 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:51:45.352986 kubelet[2777]: E0417 01:51:45.352761 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:51:47.092061 containerd[1607]: time="2026-04-17T01:51:47.054218910Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 17 01:51:47.095114 containerd[1607]: time="2026-04-17T01:51:47.093259477Z" level=error msg="get state for fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9" error="context deadline exceeded" Apr 17 01:51:47.095114 containerd[1607]: time="2026-04-17T01:51:47.093312608Z" level=warning msg="unknown status" status=0 Apr 17 01:51:47.095114 containerd[1607]: time="2026-04-17T01:51:47.095062838Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 17 01:51:47.512014 systemd[1]: Started cri-containerd-4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be.scope - libcontainer container 4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be. 
Apr 17 01:51:49.255000 kubelet[2777]: E0417 01:51:49.095023 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:51:50.923051 kubelet[2777]: E0417 01:51:50.921993 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:51:51.587086 kubelet[2777]: E0417 01:51:51.586710 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:51:51.990124 containerd[1607]: time="2026-04-17T01:51:51.931053522Z" level=info msg="StartContainer for \"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\" returns successfully" Apr 17 01:51:54.378935 kubelet[2777]: E0417 01:51:53.592300 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: 
node \"localhost\" not found" Apr 17 01:51:55.370066 containerd[1607]: time="2026-04-17T01:51:55.367817247Z" level=info msg="StartContainer for \"af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae\" returns successfully" Apr 17 01:52:00.466365 containerd[1607]: time="2026-04-17T01:52:00.460793192Z" level=info msg="StartContainer for \"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\" returns successfully" Apr 17 01:52:01.193784 kubelet[2777]: I0417 01:52:00.930385 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:52:01.727335 kubelet[2777]: E0417 01:52:01.707375 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:52:03.287809 kubelet[2777]: E0417 01:52:03.287581 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Apr 17 01:52:03.288935 kubelet[2777]: E0417 01:52:03.286896 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:52:04.382787 kubelet[2777]: E0417 01:52:04.380869 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:52:07.021984 kubelet[2777]: E0417 01:52:07.010878 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 01:52:09.230707 kubelet[2777]: E0417 01:52:08.675300 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:11.178222 kubelet[2777]: E0417 01:52:11.016544 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:14.022650 kubelet[2777]: E0417 01:52:12.605143 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="7s" Apr 17 01:52:14.419452 kubelet[2777]: E0417 01:52:14.417013 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 01:52:14.716894 kubelet[2777]: E0417 01:52:14.711013 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed 
to get node info: node \"localhost\" not found" Apr 17 01:52:16.071766 kubelet[2777]: E0417 01:52:16.069117 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:16.108546 kubelet[2777]: I0417 01:52:15.925540 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:52:17.875100 kubelet[2777]: E0417 01:52:17.827362 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:19.978460 kubelet[2777]: E0417 01:52:19.962528 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:21.010574 kubelet[2777]: E0417 01:52:20.990791 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:22.958264 kubelet[2777]: E0417 01:52:22.926307 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:25.283912 kubelet[2777]: E0417 01:52:24.416555 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC 
m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:52:25.489128 kubelet[2777]: E0417 01:52:25.379207 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:52:25.489128 kubelet[2777]: E0417 01:52:25.384113 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:26.480250 kubelet[2777]: E0417 01:52:26.080262 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:52:26.798438 kubelet[2777]: E0417 01:52:26.495974 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 01:52:27.153864 kubelet[2777]: E0417 01:52:27.123958 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 17 01:52:28.670177 kubelet[2777]: E0417 01:52:28.640481 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:29.055469 kubelet[2777]: E0417 01:52:29.048375 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:30.632495 kubelet[2777]: E0417 01:52:30.617366 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:31.180731 kubelet[2777]: E0417 01:52:31.180465 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:31.686198 kubelet[2777]: E0417 01:52:31.574837 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 17 01:52:33.036396 kubelet[2777]: E0417 01:52:33.036286 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:33.173982 kubelet[2777]: E0417 01:52:33.173726 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:33.182933 kubelet[2777]: E0417 01:52:33.182120 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:33.199140 kubelet[2777]: E0417 01:52:33.198369 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:33.870753 kubelet[2777]: E0417 01:52:33.870353 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:33.952371 kubelet[2777]: E0417 01:52:33.951084 2777 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:34.399071 kubelet[2777]: I0417 01:52:34.394351 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:52:36.006780 kubelet[2777]: E0417 01:52:36.000372 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:52:36.949394 kubelet[2777]: E0417 01:52:36.861936 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:52:42.213783 kubelet[2777]: E0417 01:52:42.211977 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:52:42.412633 kubelet[2777]: E0417 01:52:42.277554 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:52:44.875317 kubelet[2777]: E0417 01:52:44.849322 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 17 01:52:45.640155 kubelet[2777]: E0417 01:52:45.628125 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:52:47.042916 kubelet[2777]: E0417 01:52:47.038218 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:52:48.636937 kubelet[2777]: E0417 01:52:48.634268 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 17 01:52:52.605513 kubelet[2777]: I0417 01:52:52.538210 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:52:57.216529 kubelet[2777]: E0417 01:52:57.215304 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:53:00.366113 kubelet[2777]: E0417 01:53:00.364424 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 01:53:02.942788 kubelet[2777]: E0417 01:53:02.884092 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 17 01:53:03.165023 kubelet[2777]: E0417 01:53:03.159380 
2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:53:06.523323 kubelet[2777]: E0417 01:53:06.518195 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 17 01:53:07.055289 kubelet[2777]: E0417 01:53:06.683254 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:53:07.707185 kubelet[2777]: E0417 01:53:07.585342 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 01:53:07.947266 kubelet[2777]: E0417 
01:53:07.946292 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:53:12.285448 kubelet[2777]: I0417 01:53:12.251452 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:53:14.956413 kubelet[2777]: E0417 01:53:14.946374 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 01:53:15.245446 kubelet[2777]: E0417 01:53:15.215374 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:53:18.037490 kubelet[2777]: E0417 01:53:18.025347 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:53:22.953298 kubelet[2777]: E0417 01:53:22.924731 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 17 01:53:24.062856 kubelet[2777]: E0417 01:53:24.052117 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 01:53:24.074781 kubelet[2777]: E0417 01:53:24.073642 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 17 01:53:27.457454 kubelet[2777]: E0417 01:53:27.412457 2777 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 01:53:28.421325 kubelet[2777]: E0417 01:53:28.418482 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 01:53:30.245244 kubelet[2777]: I0417 01:53:30.240151 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:53:33.400136 kubelet[2777]: E0417 01:53:33.332304 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 01:53:35.555154 kubelet[2777]: E0417 01:53:35.524490 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 01:53:37.691140 kubelet[2777]: E0417 01:53:37.690877 2777 kubelet.go:3216] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 01:53:37.719115 kubelet[2777]: E0417 01:53:37.707771 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:53:38.493825 kubelet[2777]: E0417 01:53:38.488536 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:53:40.441263 kubelet[2777]: E0417 01:53:40.439926 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 17 01:53:41.136245 kubelet[2777]: E0417 01:53:41.133706 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 17 01:53:47.523079 kubelet[2777]: E0417 01:53:47.516335 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:53:47.523079 kubelet[2777]: E0417 01:53:47.517254 2777 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{localhost.18a701e35930f510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,LastTimestamp:2026-04-17 01:50:53.986534672 +0000 UTC m=+76.992204623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:53:47.936798 kubelet[2777]: I0417 01:53:47.936241 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 01:53:48.505322 kubelet[2777]: E0417 01:53:48.501825 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:53:57.774506 kubelet[2777]: E0417 01:53:57.712541 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e3a54aabd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:55.263288272 +0000 UTC m=+78.268958223,LastTimestamp:2026-04-17 01:50:55.263288272 +0000 UTC m=+78.268958223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:53:58.019330 kubelet[2777]: E0417 01:53:58.012247 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 17 01:53:58.190952 kubelet[2777]: E0417 01:53:58.190457 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 17 01:53:58.657517 kubelet[2777]: E0417 01:53:58.636073 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:54:03.038075 kubelet[2777]: E0417 01:54:02.992341 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 01:54:03.484084 kubelet[2777]: E0417 01:54:03.481418 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:54:06.135730 kubelet[2777]: I0417 01:54:06.135301 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 01:54:07.242311 kubelet[2777]: E0417 01:54:07.239551 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 01:54:07.847068 kubelet[2777]: E0417 01:54:07.835484 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 01:54:08.750524 kubelet[2777]: E0417 01:54:08.733682 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:54:09.101235 kubelet[2777]: E0417 01:54:09.052804 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a701e3a54aabd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:55.263288272 +0000 UTC m=+78.268958223,LastTimestamp:2026-04-17 01:50:55.263288272 +0000 UTC m=+78.268958223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:54:09.101235 kubelet[2777]: E0417 01:54:09.136584 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 01:54:14.294526 kubelet[2777]: E0417 01:54:14.293927 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 01:54:15.482888 kubelet[2777]: E0417 01:54:15.403338 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 17 01:54:16.879873 kubelet[2777]: E0417 01:54:16.879371 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 17 01:54:18.821039 kubelet[2777]: E0417 01:54:18.819371 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:54:21.537279 kubelet[2777]: E0417 01:54:21.536870 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 01:54:21.540796 kubelet[2777]: E0417 01:54:21.537888 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:54:23.938514 kubelet[2777]: I0417 01:54:23.938202 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 01:54:28.335070 kubelet[2777]: E0417 01:54:28.334047 2777 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 17 01:54:29.078523 kubelet[2777]: E0417 01:54:28.879158 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:54:30.640364 kubelet[2777]: I0417 01:54:30.638552 2777 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 17 01:54:30.907079 kubelet[2777]: E0417 01:54:30.648328 2777 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 01:54:31.412562 kubelet[2777]: E0417 01:54:31.389744 2777 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a701e3a54aabd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:50:55.263288272 +0000 UTC m=+78.268958223,LastTimestamp:2026-04-17 01:50:55.263288272 +0000 UTC m=+78.268958223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:54:33.001238 kubelet[2777]: E0417 01:54:32.939203 2777 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a701e588bed553 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 01:51:03.374296403 +0000 UTC m=+86.379966346,LastTimestamp:2026-04-17 01:51:03.374296403 +0000 UTC m=+86.379966346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 01:54:33.144209 kubelet[2777]: E0417 01:54:33.047143 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 01:54:33.271204 kubelet[2777]: E0417 01:54:33.252098 2777 kubelet_node_status.go:398] "Node not becoming ready in time after startup"
Apr 17 01:54:34.415002 kubelet[2777]: E0417 01:54:34.412811 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:54:39.221019 kubelet[2777]: E0417 01:54:39.220250 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:54:39.648155 kubelet[2777]: E0417 01:54:39.623055 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:54:43.336814 kubelet[2777]: E0417 01:54:43.336311 2777 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 01:54:44.780255 kubelet[2777]: E0417 01:54:44.779685 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:54:49.353465 kubelet[2777]: E0417 01:54:49.351551 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 01:54:49.918297 kubelet[2777]: E0417 01:54:49.907396 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:54:54.087040 kubelet[2777]: E0417 01:54:54.082891 2777 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 01:54:55.291786 kubelet[2777]: E0417 01:54:55.291489 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:54:58.591053 kubelet[2777]: I0417 01:54:58.590354 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 17 01:54:59.042170 kubelet[2777]: I0417 01:54:59.029559 2777 apiserver.go:52] "Watching apiserver"
Apr 17 01:54:59.435000 kubelet[2777]: I0417 01:54:59.434063 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 01:54:59.472551 kubelet[2777]: I0417 01:54:59.465925 2777 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 01:54:59.935510 kubelet[2777]: I0417 01:54:59.916238 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 01:55:00.742327 kubelet[2777]: E0417 01:55:00.740839 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:01.171776 kubelet[2777]: E0417 01:55:01.167248 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:01.295406 kubelet[2777]: E0417 01:55:01.294058 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:02.080147 kubelet[2777]: E0417 01:55:02.071338 2777 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 17 01:55:02.109082 containerd[1607]: time="2026-04-17T01:55:02.102137487Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope/hugetlb.2MB.events\""
Apr 17 01:55:02.213529 containerd[1607]: time="2026-04-17T01:55:02.109200420Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope/hugetlb.1GB.events\""
Apr 17 01:55:02.247705 kubelet[2777]: I0417 01:55:02.150242 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 17 01:55:02.512223 containerd[1607]: time="2026-04-17T01:55:02.504663460Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\""
Apr 17 01:55:02.512223 containerd[1607]: time="2026-04-17T01:55:02.506221642Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\""
Apr 17 01:55:02.516851 containerd[1607]: time="2026-04-17T01:55:02.516190572Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be.scope/hugetlb.2MB.events\""
Apr 17 01:55:02.516851 containerd[1607]: time="2026-04-17T01:55:02.516775848Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be.scope/hugetlb.1GB.events\""
Apr 17 01:55:04.799908 kubelet[2777]: E0417 01:55:04.798120 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:05.691339 kubelet[2777]: E0417 01:55:05.578934 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.013s"
Apr 17 01:55:07.452440 kubelet[2777]: E0417 01:55:07.187765 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:13.474039 kubelet[2777]: E0417 01:55:13.473402 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.71s"
Apr 17 01:55:15.091794 kubelet[2777]: E0417 01:55:14.914985 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:19.794749 kubelet[2777]: E0417 01:55:19.792023 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.977s"
Apr 17 01:55:20.796319 kubelet[2777]: E0417 01:55:20.795900 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:20.898236 kubelet[2777]: E0417 01:55:20.895475 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:21.528271 systemd[1]: cri-containerd-4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be.scope: Deactivated successfully.
Apr 17 01:55:21.528991 systemd[1]: cri-containerd-4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be.scope: Consumed 37.096s CPU time, 22.4M memory peak.
Apr 17 01:55:21.707192 containerd[1607]: time="2026-04-17T01:55:21.706817927Z" level=info msg="received container exit event container_id:\"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\" id:\"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\" pid:3021 exit_status:1 exited_at:{seconds:1776390921 nanos:657976039}"
Apr 17 01:55:21.739913 kubelet[2777]: I0417 01:55:21.738498 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=22.738260909 podStartE2EDuration="22.738260909s" podCreationTimestamp="2026-04-17 01:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:55:21.708683304 +0000 UTC m=+344.714353254" watchObservedRunningTime="2026-04-17 01:55:21.738260909 +0000 UTC m=+344.743930932"
Apr 17 01:55:22.883069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be-rootfs.mount: Deactivated successfully.
Apr 17 01:55:23.089482 containerd[1607]: time="2026-04-17T01:55:23.088471605Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope/hugetlb.2MB.events\""
Apr 17 01:55:23.089482 containerd[1607]: time="2026-04-17T01:55:23.089031303Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope/hugetlb.1GB.events\""
Apr 17 01:55:23.184023 containerd[1607]: time="2026-04-17T01:55:23.176706430Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\""
Apr 17 01:55:23.184023 containerd[1607]: time="2026-04-17T01:55:23.176869686Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\""
Apr 17 01:55:26.056476 kubelet[2777]: I0417 01:55:26.049997 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=27.049974653 podStartE2EDuration="27.049974653s" podCreationTimestamp="2026-04-17 01:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:55:22.791582526 +0000 UTC m=+345.797252473" watchObservedRunningTime="2026-04-17 01:55:26.049974653 +0000 UTC m=+349.055644610"
Apr 17 01:55:26.056476 kubelet[2777]: I0417 01:55:26.050376 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=21.050362233 podStartE2EDuration="21.050362233s" podCreationTimestamp="2026-04-17 01:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 01:55:26.049703645 +0000 UTC m=+349.055373598" watchObservedRunningTime="2026-04-17 01:55:26.050362233 +0000 UTC m=+349.056032194"
Apr 17 01:55:26.605337 kubelet[2777]: E0417 01:55:26.533044 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:27.511555 kubelet[2777]: E0417 01:55:27.504941 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.911s"
Apr 17 01:55:28.826158 kubelet[2777]: E0417 01:55:28.801579 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.28s"
Apr 17 01:55:29.532006 kubelet[2777]: I0417 01:55:29.527800 2777 scope.go:117] "RemoveContainer" containerID="4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be"
Apr 17 01:55:29.714330 kubelet[2777]: E0417 01:55:29.713507 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:30.629524 containerd[1607]: time="2026-04-17T01:55:30.623986030Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for container name:\"kube-controller-manager\" attempt:1"
Apr 17 01:55:31.454409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672994824.mount: Deactivated successfully.
Apr 17 01:55:31.853558 containerd[1607]: time="2026-04-17T01:55:31.810425664Z" level=info msg="Container c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502: CDI devices from CRI Config.CDIDevices: []"
Apr 17 01:55:31.932461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729524486.mount: Deactivated successfully.
Apr 17 01:55:33.614438 containerd[1607]: time="2026-04-17T01:55:33.593290043Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for name:\"kube-controller-manager\" attempt:1 returns container id \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\""
Apr 17 01:55:34.006468 kubelet[2777]: E0417 01:55:34.005265 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:34.341290 containerd[1607]: time="2026-04-17T01:55:34.335781342Z" level=info msg="StartContainer for \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\""
Apr 17 01:55:35.182918 containerd[1607]: time="2026-04-17T01:55:35.182212592Z" level=info msg="connecting to shim c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" protocol=ttrpc version=3
Apr 17 01:55:39.876196 systemd[1]: Started cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope - libcontainer container c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.
Apr 17 01:55:40.578216 kubelet[2777]: E0417 01:55:40.576870 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:44.295517 kubelet[2777]: E0417 01:55:44.212081 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.518s"
Apr 17 01:55:44.915066 containerd[1607]: time="2026-04-17T01:55:44.911843453Z" level=error msg="get state for c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502" error="context deadline exceeded"
Apr 17 01:55:44.965543 containerd[1607]: time="2026-04-17T01:55:44.927302979Z" level=warning msg="unknown status" status=0
Apr 17 01:55:45.291924 kubelet[2777]: E0417 01:55:45.291344 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:45.954021 containerd[1607]: time="2026-04-17T01:55:45.952059209Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 17 01:55:46.588547 containerd[1607]: time="2026-04-17T01:55:46.578408545Z" level=info msg="StartContainer for \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\" returns successfully"
Apr 17 01:55:46.895329 kubelet[2777]: E0417 01:55:46.887548 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:48.154326 kubelet[2777]: E0417 01:55:48.131576 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.637s"
Apr 17 01:55:50.770866 containerd[1607]: time="2026-04-17T01:55:50.768261937Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope/hugetlb.2MB.events\""
Apr 17 01:55:50.940051 containerd[1607]: time="2026-04-17T01:55:50.775840514Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope/hugetlb.1GB.events\""
Apr 17 01:55:51.303101 containerd[1607]: time="2026-04-17T01:55:51.295715648Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\""
Apr 17 01:55:51.351074 containerd[1607]: time="2026-04-17T01:55:51.350382358Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\""
Apr 17 01:55:51.384078 kubelet[2777]: E0417 01:55:51.351983 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.778s"
Apr 17 01:55:51.411584 containerd[1607]: time="2026-04-17T01:55:51.410648984Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope/hugetlb.2MB.events\""
Apr 17 01:55:51.414223 containerd[1607]: time="2026-04-17T01:55:51.413586321Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope/hugetlb.1GB.events\""
Apr 17 01:55:52.716211 kubelet[2777]: E0417 01:55:52.713583 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:55:54.320687 kubelet[2777]: E0417 01:55:54.320196 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.968s"
Apr 17 01:55:56.238077 kubelet[2777]: E0417 01:55:56.236850 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:56.303501 kubelet[2777]: E0417 01:55:56.303042 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.702s"
Apr 17 01:55:58.813487 kubelet[2777]: E0417 01:55:58.813235 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.51s"
Apr 17 01:55:58.938212 kubelet[2777]: E0417 01:55:58.737259 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:55:59.495055 kubelet[2777]: E0417 01:55:59.257434 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:56:00.038664 kubelet[2777]: E0417 01:56:00.026094 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.03s"
Apr 17 01:56:04.407487 systemd[1]: cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope: Deactivated successfully.
Apr 17 01:56:04.496415 systemd[1]: cri-containerd-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9.scope: Consumed 1min 19.063s CPU time, 22.3M memory peak.
Apr 17 01:56:04.777478 containerd[1607]: time="2026-04-17T01:56:04.774512576Z" level=info msg="received container exit event container_id:\"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\" id:\"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\" pid:2993 exit_status:1 exited_at:{seconds:1776390964 nanos:759340507}"
Apr 17 01:56:05.941528 kubelet[2777]: E0417 01:56:05.937334 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.882s"
Apr 17 01:56:07.523292 kubelet[2777]: E0417 01:56:07.519993 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:56:08.322140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9-rootfs.mount: Deactivated successfully.
Apr 17 01:56:08.384538 kubelet[2777]: E0417 01:56:08.295452 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 01:56:09.567732 containerd[1607]: time="2026-04-17T01:56:09.537834603Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\""
Apr 17 01:56:09.590723 containerd[1607]: time="2026-04-17T01:56:09.570977394Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\""
Apr 17 01:56:09.886351 containerd[1607]: time="2026-04-17T01:56:09.822266880Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope/hugetlb.2MB.events\""
Apr 17 01:56:09.934767 containerd[1607]: time="2026-04-17T01:56:09.919313174Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope/hugetlb.1GB.events\""
Apr 17 01:56:10.023133 kubelet[2777]: E0417 01:56:10.004275 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.065s"
Apr 17 01:56:10.600328 kubelet[2777]: I0417 01:56:10.598525 2777 scope.go:117] "RemoveContainer" containerID="fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9"
Apr 17 01:56:10.723424 kubelet[2777]: E0417 01:56:10.626981 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:56:11.131919 kubelet[2777]: E0417 01:56:11.131424 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 01:56:11.149245 containerd[1607]: time="2026-04-17T01:56:11.139419584Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for container name:\"kube-scheduler\" attempt:1"
Apr 17 01:56:11.540836 containerd[1607]: time="2026-04-17T01:56:11.540696873Z" level=info msg="Container a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e: CDI devices from CRI Config.CDIDevices: []"
Apr 17 01:56:11.647293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802766496.mount: Deactivated successfully.
Apr 17 01:56:11.796863 containerd[1607]: time="2026-04-17T01:56:11.792647265Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for name:\"kube-scheduler\" attempt:1 returns container id \"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\"" Apr 17 01:56:12.706743 containerd[1607]: time="2026-04-17T01:56:12.705768935Z" level=info msg="StartContainer for \"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\"" Apr 17 01:56:12.895347 containerd[1607]: time="2026-04-17T01:56:12.893888972Z" level=info msg="connecting to shim a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" address="unix:///run/containerd/s/37fa565e1f3a26166e72f3404aaa6af399b8d96545306b6a5af7e8ce01f4b5c9" protocol=ttrpc version=3 Apr 17 01:56:12.931383 kubelet[2777]: I0417 01:56:12.893567 2777 scope.go:117] "RemoveContainer" containerID="fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9" Apr 17 01:56:12.993548 containerd[1607]: time="2026-04-17T01:56:12.993392357Z" level=info msg="RemoveContainer for \"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\"" Apr 17 01:56:13.602180 kubelet[2777]: E0417 01:56:13.599931 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:13.921466 systemd[1]: Started cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope - libcontainer container a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e. 
Apr 17 01:56:15.029616 containerd[1607]: time="2026-04-17T01:56:15.013835825Z" level=error msg="get state for 2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9" error="context deadline exceeded" Apr 17 01:56:16.423250 containerd[1607]: time="2026-04-17T01:56:15.074522026Z" level=warning msg="unknown status" status=0 Apr 17 01:56:16.423250 containerd[1607]: time="2026-04-17T01:56:16.125532076Z" level=error msg="get state for a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" error="context deadline exceeded" Apr 17 01:56:16.423250 containerd[1607]: time="2026-04-17T01:56:16.126100529Z" level=warning msg="unknown status" status=0 Apr 17 01:56:17.875429 containerd[1607]: time="2026-04-17T01:56:17.872794387Z" level=info msg="RemoveContainer for \"fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9\" returns successfully" Apr 17 01:56:19.708294 kubelet[2777]: E0417 01:56:19.702711 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:19.908879 containerd[1607]: time="2026-04-17T01:56:19.896513758Z" level=error msg="get state for a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" error="context deadline exceeded" Apr 17 01:56:19.908879 containerd[1607]: time="2026-04-17T01:56:19.907109203Z" level=warning msg="unknown status" status=0 Apr 17 01:56:20.706933 kubelet[2777]: E0417 01:56:20.526347 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.67s" Apr 17 01:56:22.569354 containerd[1607]: time="2026-04-17T01:56:22.539558449Z" level=error msg="get state for a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" error="context deadline exceeded" Apr 17 01:56:22.968778 containerd[1607]: time="2026-04-17T01:56:22.557515533Z" level=warning msg="unknown status" status=0 Apr 17 01:56:24.768578 
containerd[1607]: time="2026-04-17T01:56:24.753992068Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 17 01:56:24.768578 containerd[1607]: time="2026-04-17T01:56:24.757828874Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Apr 17 01:56:24.768578 containerd[1607]: time="2026-04-17T01:56:24.792044346Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 17 01:56:24.768578 containerd[1607]: time="2026-04-17T01:56:24.792518817Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 17 01:56:27.776819 containerd[1607]: time="2026-04-17T01:56:27.748521939Z" level=error msg="get state for a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" error="context deadline exceeded" Apr 17 01:56:27.812323 containerd[1607]: time="2026-04-17T01:56:27.778691046Z" level=warning msg="unknown status" status=0 Apr 17 01:56:27.947920 kubelet[2777]: E0417 01:56:27.939437 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:30.090451 containerd[1607]: time="2026-04-17T01:56:30.085351576Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Apr 17 01:56:31.906797 containerd[1607]: time="2026-04-17T01:56:31.828656687Z" level=info msg="StartContainer for \"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\" returns successfully" Apr 17 01:56:33.228376 kubelet[2777]: E0417 01:56:33.224988 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.48s" Apr 17 01:56:33.228376 kubelet[2777]: E0417 01:56:33.225403 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:34.613418 containerd[1607]: 
time="2026-04-17T01:56:34.611184267Z" level=info msg="container event discarded" container=2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9 type=CONTAINER_CREATED_EVENT Apr 17 01:56:34.613418 containerd[1607]: time="2026-04-17T01:56:34.611828323Z" level=info msg="container event discarded" container=2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9 type=CONTAINER_STARTED_EVENT Apr 17 01:56:35.072924 kubelet[2777]: E0417 01:56:35.056206 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:56:35.830560 containerd[1607]: time="2026-04-17T01:56:35.757304455Z" level=info msg="container event discarded" container=41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401 type=CONTAINER_CREATED_EVENT Apr 17 01:56:35.830560 containerd[1607]: time="2026-04-17T01:56:35.829311957Z" level=info msg="container event discarded" container=41811a8b34cb686957955f8c584489d82bfa1224891a2d40e360e0c2eab76401 type=CONTAINER_STARTED_EVENT Apr 17 01:56:37.732255 kubelet[2777]: E0417 01:56:37.720332 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.47s" Apr 17 01:56:37.979317 containerd[1607]: time="2026-04-17T01:56:37.927579262Z" level=info msg="container event discarded" container=fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9 type=CONTAINER_CREATED_EVENT Apr 17 01:56:38.282278 containerd[1607]: time="2026-04-17T01:56:38.040283738Z" level=info msg="container event discarded" container=da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181 type=CONTAINER_CREATED_EVENT Apr 17 01:56:38.282278 containerd[1607]: time="2026-04-17T01:56:38.072154294Z" level=info msg="container event discarded" container=da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181 type=CONTAINER_STARTED_EVENT Apr 17 01:56:38.394266 
containerd[1607]: time="2026-04-17T01:56:38.386461911Z" level=info msg="container event discarded" container=af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae type=CONTAINER_CREATED_EVENT Apr 17 01:56:38.616767 kubelet[2777]: E0417 01:56:38.562192 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:56:38.689550 kubelet[2777]: E0417 01:56:38.554506 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:39.294495 containerd[1607]: time="2026-04-17T01:56:39.256549755Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:56:39.308403 containerd[1607]: time="2026-04-17T01:56:39.306816943Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:56:39.385235 containerd[1607]: time="2026-04-17T01:56:39.383194881Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope/hugetlb.2MB.events\"" Apr 17 01:56:39.435534 containerd[1607]: time="2026-04-17T01:56:39.385053265Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope/hugetlb.1GB.events\"" Apr 17 01:56:40.729556 containerd[1607]: time="2026-04-17T01:56:40.689566693Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.2MB.events\"" Apr 17 01:56:40.819401 containerd[1607]: time="2026-04-17T01:56:40.799896229Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.1GB.events\"" Apr 17 01:56:41.537430 kubelet[2777]: E0417 01:56:41.513550 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.019s" Apr 17 01:56:42.316512 containerd[1607]: time="2026-04-17T01:56:42.303402015Z" level=info msg="container event discarded" container=4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be type=CONTAINER_CREATED_EVENT Apr 17 01:56:42.558510 kubelet[2777]: E0417 01:56:42.558230 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:56:44.638577 kubelet[2777]: E0417 01:56:44.636435 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:44.997369 kubelet[2777]: E0417 01:56:44.996388 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:56:45.225424 kubelet[2777]: E0417 01:56:45.224531 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.855s" Apr 17 01:56:47.594297 kubelet[2777]: E0417 01:56:47.591807 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.185s" Apr 17 01:56:48.724087 systemd[1]: cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope: Deactivated successfully. Apr 17 01:56:48.837525 systemd[1]: cri-containerd-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502.scope: Consumed 20.581s CPU time, 19.8M memory peak. Apr 17 01:56:49.193369 containerd[1607]: time="2026-04-17T01:56:48.992519230Z" level=info msg="received container exit event container_id:\"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\" id:\"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\" pid:3124 exit_status:1 exited_at:{seconds:1776391008 nanos:934189064}" Apr 17 01:56:51.422377 containerd[1607]: time="2026-04-17T01:56:51.379283424Z" level=info msg="container event discarded" container=fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9 type=CONTAINER_STARTED_EVENT Apr 17 01:56:51.595311 kubelet[2777]: E0417 01:56:51.588714 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.979s" Apr 17 01:56:53.514562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502-rootfs.mount: Deactivated successfully. 
Apr 17 01:56:54.647494 containerd[1607]: time="2026-04-17T01:56:54.637129568Z" level=info msg="container event discarded" container=af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae type=CONTAINER_STARTED_EVENT Apr 17 01:56:55.273341 kubelet[2777]: E0417 01:56:55.271864 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:56:57.374344 kubelet[2777]: E0417 01:56:57.235436 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.654s" Apr 17 01:57:00.221488 containerd[1607]: time="2026-04-17T01:57:00.217257344Z" level=info msg="container event discarded" container=4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be type=CONTAINER_STARTED_EVENT Apr 17 01:57:00.793433 kubelet[2777]: E0417 01:57:00.793250 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.419s" Apr 17 01:57:00.848229 containerd[1607]: time="2026-04-17T01:57:00.799449241Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.2MB.events\"" Apr 17 01:57:00.888014 containerd[1607]: time="2026-04-17T01:57:00.855968627Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.1GB.events\"" Apr 17 01:57:01.212130 containerd[1607]: time="2026-04-17T01:57:01.135227077Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:57:01.212130 containerd[1607]: time="2026-04-17T01:57:01.135560776Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:57:01.550578 kubelet[2777]: E0417 01:57:01.511504 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:02.839378 kubelet[2777]: E0417 01:57:02.818580 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.02s" Apr 17 01:57:03.387218 kubelet[2777]: I0417 01:57:03.358329 2777 scope.go:117] "RemoveContainer" containerID="4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be" Apr 17 01:57:03.940344 kubelet[2777]: I0417 01:57:03.935268 2777 scope.go:117] "RemoveContainer" containerID="c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502" Apr 17 01:57:04.144993 kubelet[2777]: E0417 01:57:04.098676 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:06.015977 containerd[1607]: time="2026-04-17T01:57:06.012847039Z" level=info msg="RemoveContainer for \"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\"" Apr 17 01:57:06.348523 containerd[1607]: time="2026-04-17T01:57:06.346002601Z" level=info msg="RemoveContainer for \"4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be\" returns 
successfully" Apr 17 01:57:06.719880 containerd[1607]: time="2026-04-17T01:57:06.716513484Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for container name:\"kube-controller-manager\" attempt:2" Apr 17 01:57:06.907453 kubelet[2777]: E0417 01:57:06.904121 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:07.689722 kubelet[2777]: E0417 01:57:07.689245 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.535s" Apr 17 01:57:08.203577 containerd[1607]: time="2026-04-17T01:57:08.150441513Z" level=info msg="Container 46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:57:08.246549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162536022.mount: Deactivated successfully. 
Apr 17 01:57:09.232542 containerd[1607]: time="2026-04-17T01:57:09.231247005Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for name:\"kube-controller-manager\" attempt:2 returns container id \"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\"" Apr 17 01:57:09.438152 containerd[1607]: time="2026-04-17T01:57:09.347938031Z" level=info msg="StartContainer for \"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\"" Apr 17 01:57:09.505447 containerd[1607]: time="2026-04-17T01:57:09.504338913Z" level=info msg="connecting to shim 46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" protocol=ttrpc version=3 Apr 17 01:57:11.885502 kubelet[2777]: E0417 01:57:11.817966 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.362s" Apr 17 01:57:11.929326 systemd[1]: Started cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope - libcontainer container 46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7. 
Apr 17 01:57:12.249703 kubelet[2777]: E0417 01:57:12.248967 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:14.455423 containerd[1607]: time="2026-04-17T01:57:14.447341538Z" level=error msg="get state for 46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7" error="context deadline exceeded" Apr 17 01:57:14.527497 containerd[1607]: time="2026-04-17T01:57:14.458524447Z" level=warning msg="unknown status" status=0 Apr 17 01:57:14.604207 containerd[1607]: time="2026-04-17T01:57:14.602980805Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 17 01:57:15.046211 containerd[1607]: time="2026-04-17T01:57:15.044176231Z" level=info msg="StartContainer for \"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\" returns successfully" Apr 17 01:57:16.599660 kubelet[2777]: E0417 01:57:16.586524 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:18.054528 kubelet[2777]: E0417 01:57:18.043199 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:18.392453 kubelet[2777]: E0417 01:57:18.385259 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:19.297419 containerd[1607]: time="2026-04-17T01:57:19.296369188Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.2MB.events\"" Apr 17 01:57:19.297419 containerd[1607]: time="2026-04-17T01:57:19.296585361Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.1GB.events\"" Apr 17 01:57:19.308922 containerd[1607]: time="2026-04-17T01:57:19.308794028Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:57:19.394202 containerd[1607]: time="2026-04-17T01:57:19.391677443Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:57:19.602190 containerd[1607]: time="2026-04-17T01:57:19.596229345Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.2MB.events\"" Apr 17 01:57:19.731317 containerd[1607]: time="2026-04-17T01:57:19.727147731Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.1GB.events\"" Apr 17 01:57:20.781181 kubelet[2777]: E0417 01:57:20.780733 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.382s" Apr 17 01:57:21.004101 kubelet[2777]: E0417 01:57:21.002944 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:23.112907 kubelet[2777]: E0417 01:57:23.111212 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.669s" Apr 17 01:57:23.699568 kubelet[2777]: E0417 01:57:23.693501 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:25.434841 kubelet[2777]: E0417 01:57:25.430226 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:26.614446 kubelet[2777]: E0417 01:57:26.591231 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.983s" Apr 17 01:57:28.396021 kubelet[2777]: E0417 01:57:28.389429 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.771s" Apr 17 01:57:28.856208 kubelet[2777]: E0417 01:57:28.843475 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:33.146930 containerd[1607]: 
time="2026-04-17T01:57:33.117349513Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.2MB.events\"" Apr 17 01:57:33.480009 containerd[1607]: time="2026-04-17T01:57:33.166668701Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.1GB.events\"" Apr 17 01:57:33.535976 containerd[1607]: time="2026-04-17T01:57:33.508223461Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.2MB.events\"" Apr 17 01:57:33.535976 containerd[1607]: time="2026-04-17T01:57:33.509011688Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.1GB.events\"" Apr 17 01:57:33.771844 containerd[1607]: time="2026-04-17T01:57:33.686503309Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:57:33.798197 containerd[1607]: time="2026-04-17T01:57:33.781160200Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:57:34.396083 kubelet[2777]: E0417 01:57:34.300569 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:35.227477 kubelet[2777]: E0417 01:57:35.219895 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.783s" Apr 17 01:57:36.578313 kubelet[2777]: E0417 01:57:36.573227 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.164s" Apr 17 01:57:38.696317 kubelet[2777]: E0417 01:57:38.686099 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.302s" Apr 17 01:57:38.821364 kubelet[2777]: E0417 01:57:38.820379 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:39.310054 systemd[1]: Reload requested from client PID 3247 ('systemctl') (unit session-8.scope)... Apr 17 01:57:39.312038 systemd[1]: Reloading... Apr 17 01:57:39.488282 kubelet[2777]: E0417 01:57:39.482348 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:42.658225 kubelet[2777]: E0417 01:57:42.648313 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.191s" Apr 17 01:57:43.708258 zram_generator::config[3301]: No configuration found. 
Apr 17 01:57:43.949466 kubelet[2777]: E0417 01:57:43.946174 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:44.675571 kubelet[2777]: E0417 01:57:44.672243 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:44.908497 kubelet[2777]: E0417 01:57:44.906239 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:57:47.694986 containerd[1607]: time="2026-04-17T01:57:47.692044611Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.2MB.events\"" Apr 17 01:57:47.694986 containerd[1607]: time="2026-04-17T01:57:47.692119843Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.1GB.events\"" Apr 17 01:57:47.699857 containerd[1607]: time="2026-04-17T01:57:47.696534306Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.2MB.events\"" Apr 17 01:57:47.700936 containerd[1607]: time="2026-04-17T01:57:47.700775633Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.1GB.events\"" Apr 17 01:57:47.737252 containerd[1607]: time="2026-04-17T01:57:47.717524854Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:57:47.737252 containerd[1607]: time="2026-04-17T01:57:47.732283780Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:57:49.784366 kubelet[2777]: E0417 01:57:49.783243 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:53.086810 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 17 01:57:55.070068 kubelet[2777]: E0417 01:57:55.068557 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:57:59.252891 containerd[1607]: time="2026-04-17T01:57:59.248237571Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.2MB.events\"" Apr 17 01:57:59.382123 containerd[1607]: time="2026-04-17T01:57:59.311889625Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope/hugetlb.1GB.events\"" Apr 17 01:57:59.522456 containerd[1607]: time="2026-04-17T01:57:59.513687020Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:57:59.522456 containerd[1607]: time="2026-04-17T01:57:59.514091191Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:57:59.813103 containerd[1607]: time="2026-04-17T01:57:59.727488284Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.2MB.events\"" Apr 17 01:57:59.813103 containerd[1607]: time="2026-04-17T01:57:59.793541047Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope/hugetlb.1GB.events\"" Apr 17 01:58:01.457990 kubelet[2777]: E0417 01:58:01.440252 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:58:01.620422 kubelet[2777]: E0417 01:58:01.605977 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.212s" Apr 17 01:58:02.454257 systemd[1]: Reloading finished in 23126 ms. Apr 17 01:58:03.473540 kubelet[2777]: E0417 01:58:03.473190 2777 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.867s" Apr 17 01:58:07.191193 kubelet[2777]: E0417 01:58:07.146953 2777 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 01:58:08.578662 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:58:09.010514 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 01:58:09.245276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:58:09.258297 systemd[1]: kubelet.service: Consumed 6min 8.132s CPU time, 141M memory peak. Apr 17 01:58:10.094117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 01:58:18.870415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 01:58:19.248223 (kubelet)[3349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 01:58:28.289303 kubelet[3349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 17 01:58:28.355336 kubelet[3349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 01:58:28.355336 kubelet[3349]: I0417 01:58:28.316476 3349 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 01:58:29.805467 kubelet[3349]: I0417 01:58:29.803409 3349 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 01:58:29.805467 kubelet[3349]: I0417 01:58:29.804108 3349 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 01:58:29.819177 kubelet[3349]: I0417 01:58:29.809568 3349 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 01:58:29.819177 kubelet[3349]: I0417 01:58:29.809910 3349 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 01:58:30.032380 kubelet[3349]: I0417 01:58:30.025529 3349 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 01:58:30.953897 kubelet[3349]: I0417 01:58:30.951831 3349 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 01:58:31.436463 kubelet[3349]: I0417 01:58:31.435095 3349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 01:58:33.285167 kubelet[3349]: I0417 01:58:33.282937 3349 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 01:58:34.475407 kubelet[3349]: I0417 01:58:34.472738 3349 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 01:58:34.532461 kubelet[3349]: I0417 01:58:34.503670 3349 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 01:58:34.532461 kubelet[3349]: I0417 01:58:34.503970 3349 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 01:58:34.532461 kubelet[3349]: I0417 01:58:34.504542 3349 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 01:58:34.532461 
kubelet[3349]: I0417 01:58:34.504559 3349 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 01:58:34.715071 kubelet[3349]: I0417 01:58:34.504755 3349 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 01:58:34.715071 kubelet[3349]: I0417 01:58:34.505423 3349 state_mem.go:36] "Initialized new in-memory state store" Apr 17 01:58:34.715071 kubelet[3349]: I0417 01:58:34.519155 3349 kubelet.go:475] "Attempting to sync node with API server" Apr 17 01:58:34.715071 kubelet[3349]: I0417 01:58:34.604203 3349 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 01:58:34.723282 kubelet[3349]: I0417 01:58:34.723129 3349 kubelet.go:387] "Adding apiserver pod source" Apr 17 01:58:34.760669 kubelet[3349]: I0417 01:58:34.745471 3349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 01:58:35.649151 kubelet[3349]: I0417 01:58:35.646241 3349 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.0" apiVersion="v1" Apr 17 01:58:36.199404 kubelet[3349]: I0417 01:58:36.195212 3349 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 01:58:36.199404 kubelet[3349]: I0417 01:58:36.196850 3349 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 01:58:37.884416 kubelet[3349]: I0417 01:58:37.883166 3349 server.go:1262] "Started kubelet" Apr 17 01:58:38.362211 kubelet[3349]: I0417 01:58:38.086578 3349 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 01:58:38.429285 kubelet[3349]: I0417 01:58:38.345285 3349 apiserver.go:52] "Watching apiserver" Apr 17 01:58:38.552631 kubelet[3349]: I0417 01:58:38.552190 3349 server_v1.go:49] "podresources" method="list" 
useActivePods=true Apr 17 01:58:38.828456 kubelet[3349]: I0417 01:58:38.375284 3349 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 01:58:39.123051 kubelet[3349]: I0417 01:58:38.935017 3349 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 01:58:41.014054 kubelet[3349]: I0417 01:58:41.009408 3349 server.go:310] "Adding debug handlers to kubelet server" Apr 17 01:58:41.399137 kubelet[3349]: I0417 01:58:41.372842 3349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 01:58:41.547238 kubelet[3349]: I0417 01:58:41.532082 3349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 01:58:41.837233 kubelet[3349]: I0417 01:58:41.825392 3349 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 01:58:42.025639 kubelet[3349]: I0417 01:58:41.835780 3349 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 01:58:42.481137 kubelet[3349]: E0417 01:58:42.252685 3349 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 01:58:42.971363 kubelet[3349]: I0417 01:58:42.954920 3349 reconciler.go:29] "Reconciler: start to sync state" Apr 17 01:58:43.889352 kubelet[3349]: I0417 01:58:43.863042 3349 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 01:58:45.407082 kubelet[3349]: W0417 01:58:45.398042 3349 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout" Apr 17 01:58:46.607450 kubelet[3349]: W0417 01:58:46.606034 3349 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. 
Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout" Apr 17 01:58:46.625522 kubelet[3349]: I0417 01:58:46.625425 3349 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: context deadline exceeded Apr 17 01:58:46.644233 kubelet[3349]: I0417 01:58:46.643217 3349 factory.go:223] Registration of the systemd container factory successfully Apr 17 01:58:50.030350 systemd[1]: cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope: Deactivated successfully. Apr 17 01:58:50.154285 systemd[1]: cri-containerd-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7.scope: Consumed 33.006s CPU time, 19M memory peak. Apr 17 01:58:50.315105 containerd[1607]: time="2026-04-17T01:58:50.064093314Z" level=info msg="received container exit event container_id:\"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\" id:\"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\" pid:3224 exit_status:1 exited_at:{seconds:1776391130 nanos:37011916}" Apr 17 01:58:50.509507 kubelet[3349]: I0417 01:58:50.456405 3349 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 01:58:51.848234 kubelet[3349]: I0417 01:58:51.841760 3349 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 01:58:52.575130 kubelet[3349]: I0417 01:58:52.409266 3349 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 01:58:53.450302 kubelet[3349]: I0417 01:58:53.423058 3349 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 01:58:54.043403 kubelet[3349]: E0417 01:58:54.038006 3349 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 01:58:55.220811 kubelet[3349]: E0417 01:58:55.219802 3349 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 01:58:56.943857 kubelet[3349]: E0417 01:58:56.941098 3349 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 01:58:57.003963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7-rootfs.mount: Deactivated successfully. Apr 17 01:58:57.586564 kubelet[3349]: E0417 01:58:57.579978 3349 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 01:58:57.580266 systemd[1]: cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope: Deactivated successfully. Apr 17 01:58:57.712355 containerd[1607]: time="2026-04-17T01:58:57.591234583Z" level=info msg="received container exit event container_id:\"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\" id:\"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\" pid:3173 exit_status:1 exited_at:{seconds:1776391137 nanos:575211418}" Apr 17 01:58:57.591140 systemd[1]: cri-containerd-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e.scope: Consumed 40.897s CPU time, 18.2M memory peak. 
Apr 17 01:58:58.638197 kubelet[3349]: E0417 01:58:58.430545 3349 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 01:59:00.536518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e-rootfs.mount: Deactivated successfully. Apr 17 01:59:00.745984 kubelet[3349]: E0417 01:59:00.736327 3349 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 01:59:03.883188 kubelet[3349]: I0417 01:59:03.881931 3349 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 01:59:03.883188 kubelet[3349]: I0417 01:59:03.882245 3349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 01:59:03.885897 kubelet[3349]: I0417 01:59:03.884637 3349 state_mem.go:36] "Initialized new in-memory state store" Apr 17 01:59:03.885897 kubelet[3349]: I0417 01:59:03.885783 3349 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 01:59:03.885897 kubelet[3349]: I0417 01:59:03.885796 3349 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 01:59:03.886054 kubelet[3349]: I0417 01:59:03.885933 3349 policy_none.go:49] "None policy: Start" Apr 17 01:59:03.886054 kubelet[3349]: I0417 01:59:03.885991 3349 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 01:59:03.886054 kubelet[3349]: I0417 01:59:03.886013 3349 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 01:59:03.913254 kubelet[3349]: I0417 01:59:03.911553 3349 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 01:59:03.913254 kubelet[3349]: I0417 01:59:03.911915 3349 policy_none.go:47] "Start" Apr 17 01:59:04.030305 kubelet[3349]: E0417 01:59:04.018487 3349 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 01:59:04.800650 
kubelet[3349]: E0417 01:59:04.799848 3349 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 01:59:04.800650 kubelet[3349]: I0417 01:59:04.800473 3349 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 01:59:04.800650 kubelet[3349]: I0417 01:59:04.800501 3349 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 01:59:04.924138 kubelet[3349]: I0417 01:59:04.922864 3349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 01:59:05.246552 kubelet[3349]: E0417 01:59:05.242945 3349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 01:59:07.275797 kubelet[3349]: I0417 01:59:07.274843 3349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 01:59:08.722995 kubelet[3349]: I0417 01:59:08.721564 3349 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 17 01:59:08.802563 kubelet[3349]: I0417 01:59:08.800649 3349 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 01:59:08.952390 containerd[1607]: time="2026-04-17T01:59:08.949635391Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:59:08.985543 containerd[1607]: time="2026-04-17T01:59:08.969248120Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 
01:59:09.311016 kubelet[3349]: I0417 01:59:09.295775 3349 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 01:59:09.489316 kubelet[3349]: I0417 01:59:09.488205 3349 scope.go:117] "RemoveContainer" containerID="c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502" Apr 17 01:59:09.530925 kubelet[3349]: I0417 01:59:09.518449 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:59:09.931087 kubelet[3349]: I0417 01:59:09.779238 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66a243c17a59d09458bf3b09d66260f5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"66a243c17a59d09458bf3b09d66260f5\") " pod="kube-system/kube-scheduler-localhost" Apr 17 01:59:10.410245 kubelet[3349]: I0417 01:59:10.320350 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fac3d71e98654620e15e49cc21797c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fac3d71e98654620e15e49cc21797c2\") " pod="kube-system/kube-apiserver-localhost" Apr 17 01:59:10.410245 kubelet[3349]: I0417 01:59:10.349363 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fac3d71e98654620e15e49cc21797c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fac3d71e98654620e15e49cc21797c2\") " pod="kube-system/kube-apiserver-localhost" Apr 17 01:59:10.410245 kubelet[3349]: I0417 01:59:10.349536 3349 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:59:10.479769 kubelet[3349]: I0417 01:59:10.418063 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:59:10.624833 kubelet[3349]: I0417 01:59:10.597760 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fac3d71e98654620e15e49cc21797c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4fac3d71e98654620e15e49cc21797c2\") " pod="kube-system/kube-apiserver-localhost" Apr 17 01:59:10.809775 kubelet[3349]: I0417 01:59:10.809010 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:59:10.809775 kubelet[3349]: I0417 01:59:10.809827 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82faa9ca0765979bc0118d46e6420ed8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"82faa9ca0765979bc0118d46e6420ed8\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 01:59:10.965346 containerd[1607]: 
time="2026-04-17T01:59:10.963989185Z" level=info msg="RemoveContainer for \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\"" Apr 17 01:59:11.106443 kubelet[3349]: E0417 01:59:11.094237 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.487s" Apr 17 01:59:11.241041 containerd[1607]: time="2026-04-17T01:59:11.240538910Z" level=info msg="RemoveContainer for \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\" returns successfully" Apr 17 01:59:11.266185 kubelet[3349]: I0417 01:59:11.265705 3349 scope.go:117] "RemoveContainer" containerID="c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502" Apr 17 01:59:11.286232 containerd[1607]: time="2026-04-17T01:59:11.284961760Z" level=error msg="ContainerStatus for \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\": not found" Apr 17 01:59:11.514998 kubelet[3349]: E0417 01:59:11.501374 3349 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\": not found" containerID="c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502" Apr 17 01:59:11.515858 kubelet[3349]: I0417 01:59:11.515208 3349 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502"} err="failed to get container status \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502\": not found" Apr 17 01:59:11.516182 kubelet[3349]: I0417 01:59:11.513151 
3349 scope.go:117] "RemoveContainer" containerID="a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" Apr 17 01:59:11.516182 kubelet[3349]: I0417 01:59:11.516062 3349 scope.go:117] "RemoveContainer" containerID="46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7" Apr 17 01:59:11.516520 kubelet[3349]: E0417 01:59:11.516505 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:11.517997 kubelet[3349]: E0417 01:59:11.517932 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:11.519534 kubelet[3349]: E0417 01:59:11.519417 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:13.118407 containerd[1607]: time="2026-04-17T01:59:13.116291521Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for container name:\"kube-scheduler\" attempt:2" Apr 17 01:59:13.154060 containerd[1607]: time="2026-04-17T01:59:13.119024244Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for container name:\"kube-controller-manager\" attempt:3" Apr 17 01:59:14.317340 containerd[1607]: time="2026-04-17T01:59:14.242536984Z" level=info msg="Container bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:59:14.445366 containerd[1607]: time="2026-04-17T01:59:14.437570770Z" level=info msg="Container e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c: CDI devices from CRI Config.CDIDevices: []" Apr 17 01:59:14.783900 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1207624074.mount: Deactivated successfully. Apr 17 01:59:14.841517 containerd[1607]: time="2026-04-17T01:59:14.839938817Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for name:\"kube-scheduler\" attempt:2 returns container id \"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\"" Apr 17 01:59:15.115316 containerd[1607]: time="2026-04-17T01:59:15.109139035Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for name:\"kube-controller-manager\" attempt:3 returns container id \"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\"" Apr 17 01:59:15.636957 containerd[1607]: time="2026-04-17T01:59:15.617293766Z" level=info msg="StartContainer for \"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\"" Apr 17 01:59:16.080288 containerd[1607]: time="2026-04-17T01:59:16.078550638Z" level=info msg="connecting to shim bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf" address="unix:///run/containerd/s/37fa565e1f3a26166e72f3404aaa6af399b8d96545306b6a5af7e8ce01f4b5c9" protocol=ttrpc version=3 Apr 17 01:59:16.694797 kubelet[3349]: E0417 01:59:15.853370 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.354s" Apr 17 01:59:18.040372 containerd[1607]: time="2026-04-17T01:59:18.007525617Z" level=info msg="StartContainer for \"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\"" Apr 17 01:59:18.312089 containerd[1607]: time="2026-04-17T01:59:18.307356969Z" level=info msg="connecting to shim e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" protocol=ttrpc version=3 Apr 17 01:59:18.521187 kubelet[3349]: E0417 01:59:18.512364 3349 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:19.106914 systemd[1]: Started cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope - libcontainer container e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c. Apr 17 01:59:19.241706 systemd[1]: Started cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope - libcontainer container bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf. Apr 17 01:59:19.493743 containerd[1607]: time="2026-04-17T01:59:19.493125383Z" level=info msg="StartContainer for \"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\" returns successfully" Apr 17 01:59:19.686480 containerd[1607]: time="2026-04-17T01:59:19.686158550Z" level=info msg="StartContainer for \"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\" returns successfully" Apr 17 01:59:20.268844 kubelet[3349]: E0417 01:59:20.268536 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:20.294380 kubelet[3349]: E0417 01:59:20.294137 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:21.263263 containerd[1607]: time="2026-04-17T01:59:21.262691310Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.2MB.events\"" Apr 17 01:59:21.263263 containerd[1607]: time="2026-04-17T01:59:21.262876709Z" level=error msg="unable to parse \"max 0\" as a uint from 
Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.1GB.events\"" Apr 17 01:59:21.265074 containerd[1607]: time="2026-04-17T01:59:21.264860826Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:59:21.265074 containerd[1607]: time="2026-04-17T01:59:21.264914409Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:59:21.268672 containerd[1607]: time="2026-04-17T01:59:21.268545807Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.2MB.events\"" Apr 17 01:59:21.269150 containerd[1607]: time="2026-04-17T01:59:21.269076068Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.1GB.events\"" Apr 17 01:59:21.640536 kubelet[3349]: E0417 01:59:21.633098 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:21.690716 
kubelet[3349]: E0417 01:59:21.690520 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:21.810790 kubelet[3349]: E0417 01:59:21.810181 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:22.736244 kubelet[3349]: E0417 01:59:22.736135 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:22.773756 kubelet[3349]: E0417 01:59:22.773461 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:22.773756 kubelet[3349]: E0417 01:59:22.773533 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:23.684917 sudo[1857]: pam_unix(sudo:session): session closed for user root Apr 17 01:59:23.688858 sshd[1856]: Connection closed by 10.0.0.1 port 47798 Apr 17 01:59:23.696381 sshd-session[1849]: pam_unix(sshd:session): session closed for user core Apr 17 01:59:23.733065 systemd[1]: sshd@6-12291-10.0.0.148:22-10.0.0.1:47798.service: Deactivated successfully. Apr 17 01:59:23.735360 systemd[1]: sshd@6-12291-10.0.0.148:22-10.0.0.1:47798.service: Consumed 2.243s CPU time, 4.4M memory peak. Apr 17 01:59:23.755142 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 01:59:23.757469 systemd[1]: session-8.scope: Consumed 4min 24.723s CPU time, 226.1M memory peak. Apr 17 01:59:23.842573 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. 
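The repeated "Nameserver limits exceeded" entries above reflect the glibc resolver's three-nameserver cap (MAXNS in resolv.h): kubelet keeps the first three servers, drops the rest, and logs the applied line. A minimal sketch of that trimming, assuming a plain list of resolver addresses (the function name is illustrative, not kubelet's actual code):

```python
# The classic resolver honors at most three "nameserver" entries (glibc MAXNS).
MAXNS = 3

def apply_nameserver_limit(nameservers):
    """Return (applied, omitted): at most MAXNS servers are applied,
    the remainder are omitted — matching the kubelet warning above."""
    applied = nameservers[:MAXNS]
    omitted = nameservers[MAXNS:]
    return applied, omitted

applied, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
)
# The "applied nameserver line" reported in the log:
print(" ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8
```

The log's applied line ("1.1.1.1 1.0.0.1 8.8.8.8") is exactly such a truncation of a longer resolv.conf list.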
Apr 17 01:59:23.866585 systemd-logind[1580]: Removed session 8. Apr 17 01:59:29.564996 kubelet[3349]: E0417 01:59:29.564123 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.977s" Apr 17 01:59:34.711468 kubelet[3349]: E0417 01:59:34.710134 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.04s" Apr 17 01:59:35.094974 containerd[1607]: time="2026-04-17T01:59:35.088445208Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.2MB.events\"" Apr 17 01:59:35.094974 containerd[1607]: time="2026-04-17T01:59:35.092354352Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.1GB.events\"" Apr 17 01:59:35.100753 containerd[1607]: time="2026-04-17T01:59:35.097552941Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.2MB.events\"" Apr 17 01:59:35.120779 containerd[1607]: time="2026-04-17T01:59:35.120055750Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.1GB.events\"" Apr 17 01:59:35.122517 containerd[1607]: 
time="2026-04-17T01:59:35.122211920Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:59:35.122517 containerd[1607]: time="2026-04-17T01:59:35.122384164Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:59:35.317367 kubelet[3349]: E0417 01:59:35.313766 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:35.335958 kubelet[3349]: E0417 01:59:35.314294 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:44.484493 kubelet[3349]: E0417 01:59:44.484289 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 01:59:45.700930 kubelet[3349]: E0417 01:59:45.607185 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.71s" Apr 17 01:59:49.817215 kubelet[3349]: E0417 01:59:49.814135 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.356s" Apr 17 01:59:52.958199 kubelet[3349]: E0417 01:59:52.947272 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.054s" Apr 17 01:59:54.881214 
kubelet[3349]: E0417 01:59:54.880177 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.321s" Apr 17 01:59:57.213993 containerd[1607]: time="2026-04-17T01:59:57.207980902Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.2MB.events\"" Apr 17 01:59:57.319663 containerd[1607]: time="2026-04-17T01:59:57.312120819Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.1GB.events\"" Apr 17 01:59:57.422107 containerd[1607]: time="2026-04-17T01:59:57.416857413Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 01:59:57.613198 containerd[1607]: time="2026-04-17T01:59:57.592538579Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 01:59:57.987946 containerd[1607]: time="2026-04-17T01:59:57.929099964Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.2MB.events\"" Apr 17 01:59:58.078137 containerd[1607]: time="2026-04-17T01:59:58.016342125Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.1GB.events\"" Apr 17 02:00:08.888069 kubelet[3349]: E0417 02:00:08.883957 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.145s" Apr 17 02:00:16.414074 kubelet[3349]: E0417 02:00:16.409341 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.195s" Apr 17 02:00:18.811763 kubelet[3349]: E0417 02:00:18.803266 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.996s" Apr 17 02:00:19.189284 containerd[1607]: time="2026-04-17T02:00:19.185239967Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.2MB.events\"" Apr 17 02:00:19.298182 containerd[1607]: time="2026-04-17T02:00:19.293843998Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope/hugetlb.1GB.events\"" Apr 17 02:00:19.484037 containerd[1607]: time="2026-04-17T02:00:19.478866220Z" 
level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:00:19.484037 containerd[1607]: time="2026-04-17T02:00:19.479132739Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:00:19.511667 containerd[1607]: time="2026-04-17T02:00:19.508687721Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.2MB.events\"" Apr 17 02:00:19.533419 containerd[1607]: time="2026-04-17T02:00:19.516908530Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.1GB.events\"" Apr 17 02:00:19.540009 systemd[1]: cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope: Deactivated successfully. Apr 17 02:00:19.544350 systemd[1]: cri-containerd-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf.scope: Consumed 15.701s CPU time, 18.8M memory peak. 
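The recurring containerd errors above come from cgroup v2 `hugetlb.<size>.events` files, which contain key-value lines such as `max 0`; attempting to parse the entire line as a single unsigned integer fails. A tolerant reader splits key from value first — a sketch under that assumption (`parse_events` is a hypothetical helper, not containerd's implementation):

```python
# cgroup v2 ".events" interface files hold one "key value" pair per line,
# e.g. hugetlb.2MB.events containing "max 0". Parsing "max 0" as one uint
# is what produces the errors in the log above.

def parse_events(text):
    """Parse a cgroup v2 events file body into {key: int}."""
    events = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value.isdigit():
            events[key] = int(value)
    return events

print(parse_events("max 0\n"))  # {'max': 0}
```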
Apr 17 02:00:19.722140 containerd[1607]: time="2026-04-17T02:00:19.720169310Z" level=info msg="received container exit event container_id:\"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\" id:\"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\" pid:3461 exit_status:1 exited_at:{seconds:1776391219 nanos:670461340}" Apr 17 02:00:22.725032 kubelet[3349]: E0417 02:00:22.723648 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.184s" Apr 17 02:00:23.331317 containerd[1607]: time="2026-04-17T02:00:23.325367809Z" level=info msg="container event discarded" container=4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be type=CONTAINER_STOPPED_EVENT Apr 17 02:00:24.190017 kubelet[3349]: E0417 02:00:24.189509 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.149s" Apr 17 02:00:24.389415 kubelet[3349]: E0417 02:00:24.389026 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:24.530788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf-rootfs.mount: Deactivated successfully. 
Apr 17 02:00:27.638265 kubelet[3349]: E0417 02:00:27.617782 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.984s" Apr 17 02:00:30.179491 kubelet[3349]: E0417 02:00:30.177987 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.358s" Apr 17 02:00:30.903054 kubelet[3349]: I0417 02:00:30.851885 3349 scope.go:117] "RemoveContainer" containerID="a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e" Apr 17 02:00:30.909319 kubelet[3349]: I0417 02:00:30.904295 3349 scope.go:117] "RemoveContainer" containerID="bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf" Apr 17 02:00:30.923885 kubelet[3349]: E0417 02:00:30.923424 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:31.194530 containerd[1607]: time="2026-04-17T02:00:31.194043184Z" level=info msg="RemoveContainer for \"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\"" Apr 17 02:00:31.724578 containerd[1607]: time="2026-04-17T02:00:31.721933072Z" level=info msg="RemoveContainer for \"a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e\" returns successfully" Apr 17 02:00:31.743199 containerd[1607]: time="2026-04-17T02:00:31.742942607Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for container name:\"kube-scheduler\" attempt:3" Apr 17 02:00:32.566081 containerd[1607]: time="2026-04-17T02:00:32.562650489Z" level=info msg="Container 0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:00:32.940148 containerd[1607]: time="2026-04-17T02:00:32.920241077Z" level=info msg="container event discarded" 
container=c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502 type=CONTAINER_CREATED_EVENT Apr 17 02:00:33.450060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478054009.mount: Deactivated successfully. Apr 17 02:00:33.630808 containerd[1607]: time="2026-04-17T02:00:33.630296246Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for name:\"kube-scheduler\" attempt:3 returns container id \"0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927\"" Apr 17 02:00:33.987229 containerd[1607]: time="2026-04-17T02:00:33.984524123Z" level=info msg="StartContainer for \"0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927\"" Apr 17 02:00:34.033559 containerd[1607]: time="2026-04-17T02:00:34.033247195Z" level=info msg="connecting to shim 0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927" address="unix:///run/containerd/s/37fa565e1f3a26166e72f3404aaa6af399b8d96545306b6a5af7e8ce01f4b5c9" protocol=ttrpc version=3 Apr 17 02:00:34.128245 kubelet[3349]: E0417 02:00:34.127512 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.524s" Apr 17 02:00:34.665239 systemd[1]: Started cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope - libcontainer container 0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927. 
Apr 17 02:00:34.897301 containerd[1607]: time="2026-04-17T02:00:34.895445895Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:00:34.897301 containerd[1607]: time="2026-04-17T02:00:34.896308728Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:00:34.911086 containerd[1607]: time="2026-04-17T02:00:34.909867090Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:00:34.911086 containerd[1607]: time="2026-04-17T02:00:34.910219307Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:00:34.923453 containerd[1607]: time="2026-04-17T02:00:34.918734031Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.2MB.events\"" Apr 17 02:00:34.926813 containerd[1607]: time="2026-04-17T02:00:34.921136068Z" level=error msg="unable to parse \"max 0\" as 
a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope/hugetlb.1GB.events\"" Apr 17 02:00:34.990550 containerd[1607]: time="2026-04-17T02:00:34.989312334Z" level=info msg="StartContainer for \"0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927\" returns successfully" Apr 17 02:00:36.755230 kubelet[3349]: E0417 02:00:36.755012 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:37.994015 kubelet[3349]: E0417 02:00:37.993349 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:39.654557 kubelet[3349]: E0417 02:00:39.636054 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:40.813487 kubelet[3349]: E0417 02:00:40.813185 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.27s" Apr 17 02:00:40.939297 kubelet[3349]: E0417 02:00:40.938917 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:44.414499 kubelet[3349]: E0417 02:00:44.313485 3349 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 17 02:00:46.539745 containerd[1607]: time="2026-04-17T02:00:46.534276734Z" level=info msg="container event discarded" container=c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502 type=CONTAINER_STARTED_EVENT Apr 17 02:00:48.277025 kubelet[3349]: E0417 
02:00:48.274802 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.6s" Apr 17 02:00:51.468229 systemd[1]: cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope: Deactivated successfully. Apr 17 02:00:51.655991 systemd[1]: cri-containerd-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c.scope: Consumed 33.451s CPU time, 37.3M memory peak. Apr 17 02:00:52.151289 containerd[1607]: time="2026-04-17T02:00:52.050500267Z" level=info msg="received container exit event container_id:\"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\" id:\"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\" pid:3455 exit_status:1 exited_at:{seconds:1776391251 nanos:997137201}" Apr 17 02:00:55.446734 kubelet[3349]: E0417 02:00:55.446201 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:00:55.513934 kubelet[3349]: E0417 02:00:55.513711 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.039s" Apr 17 02:00:55.518468 containerd[1607]: time="2026-04-17T02:00:55.518167384Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:00:55.538269 containerd[1607]: time="2026-04-17T02:00:55.531338363Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 
02:00:55.688032 containerd[1607]: time="2026-04-17T02:00:55.686313695Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:00:55.689360 containerd[1607]: time="2026-04-17T02:00:55.689327301Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:00:55.882953 kubelet[3349]: E0417 02:00:55.881841 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:55.884481 kubelet[3349]: E0417 02:00:55.884178 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:56.070176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c-rootfs.mount: Deactivated successfully. 
Apr 17 02:00:57.023064 kubelet[3349]: I0417 02:00:57.022756 3349 scope.go:117] "RemoveContainer" containerID="46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7" Apr 17 02:00:57.037321 kubelet[3349]: I0417 02:00:57.030290 3349 scope.go:117] "RemoveContainer" containerID="e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c" Apr 17 02:00:57.176072 kubelet[3349]: E0417 02:00:57.030545 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:57.253779 kubelet[3349]: E0417 02:00:57.236436 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(82faa9ca0765979bc0118d46e6420ed8)\"" pod="kube-system/kube-controller-manager-localhost" podUID="82faa9ca0765979bc0118d46e6420ed8" Apr 17 02:00:57.373570 containerd[1607]: time="2026-04-17T02:00:57.354548429Z" level=info msg="RemoveContainer for \"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\"" Apr 17 02:00:57.561002 containerd[1607]: time="2026-04-17T02:00:57.560693903Z" level=info msg="RemoveContainer for \"46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7\" returns successfully" Apr 17 02:00:58.440456 kubelet[3349]: E0417 02:00:58.440175 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:00:59.510365 kubelet[3349]: E0417 02:00:59.510063 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:00.489379 kubelet[3349]: E0417 02:01:00.482519 3349 kubelet.go:3012] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:01.548465 kubelet[3349]: I0417 02:01:01.547503 3349 scope.go:117] "RemoveContainer" containerID="e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c" Apr 17 02:01:01.604149 kubelet[3349]: E0417 02:01:01.552711 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:01.604149 kubelet[3349]: E0417 02:01:01.552863 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(82faa9ca0765979bc0118d46e6420ed8)\"" pod="kube-system/kube-controller-manager-localhost" podUID="82faa9ca0765979bc0118d46e6420ed8" Apr 17 02:01:05.621005 kubelet[3349]: E0417 02:01:05.620525 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:06.282211 containerd[1607]: time="2026-04-17T02:01:06.281230178Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:01:06.282211 containerd[1607]: time="2026-04-17T02:01:06.281747046Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:01:06.293370 containerd[1607]: time="2026-04-17T02:01:06.292017763Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:01:06.293370 containerd[1607]: time="2026-04-17T02:01:06.293495192Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:01:08.745220 containerd[1607]: time="2026-04-17T02:01:08.737263489Z" level=info msg="container event discarded" container=fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9 type=CONTAINER_STOPPED_EVENT Apr 17 02:01:10.684983 kubelet[3349]: E0417 02:01:10.684491 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.143s" Apr 17 02:01:10.960272 kubelet[3349]: E0417 02:01:10.943462 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:11.754540 containerd[1607]: time="2026-04-17T02:01:11.753535629Z" level=info msg="container event discarded" container=a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e type=CONTAINER_CREATED_EVENT Apr 17 02:01:16.949375 kubelet[3349]: E0417 02:01:16.915233 3349 kubelet.go:3012] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:17.900378 containerd[1607]: time="2026-04-17T02:01:17.896115989Z" level=info msg="container event discarded" container=fa85ede5a4dfcf74e785c3ae04761f9e24c5254c7b51fd332a2dd1bd75ab71d9 type=CONTAINER_DELETED_EVENT Apr 17 02:01:19.394153 kubelet[3349]: E0417 02:01:19.388255 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.774s" Apr 17 02:01:24.524947 containerd[1607]: time="2026-04-17T02:01:24.420006274Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:01:24.524947 containerd[1607]: time="2026-04-17T02:01:24.480200525Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:01:24.926527 containerd[1607]: time="2026-04-17T02:01:24.908347486Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:01:25.038162 containerd[1607]: time="2026-04-17T02:01:25.013232540Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:01:26.331341 kubelet[3349]: E0417 02:01:26.155586 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:29.096291 kubelet[3349]: E0417 02:01:29.096035 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.464s" Apr 17 02:01:30.386469 kubelet[3349]: I0417 02:01:30.383388 3349 scope.go:117] "RemoveContainer" containerID="e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c" Apr 17 02:01:30.921031 kubelet[3349]: E0417 02:01:30.919991 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:31.306040 containerd[1607]: time="2026-04-17T02:01:31.244278665Z" level=info msg="container event discarded" container=a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e type=CONTAINER_STARTED_EVENT Apr 17 02:01:33.779070 kubelet[3349]: E0417 02:01:33.769299 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:35.009236 kubelet[3349]: E0417 02:01:35.008943 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.912s" Apr 17 02:01:35.298910 containerd[1607]: time="2026-04-17T02:01:35.279554238Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for container name:\"kube-controller-manager\" attempt:4" Apr 17 
02:01:37.093192 containerd[1607]: time="2026-04-17T02:01:37.089274239Z" level=info msg="Container ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:01:37.135706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853281374.mount: Deactivated successfully. Apr 17 02:01:37.153240 kubelet[3349]: E0417 02:01:37.152878 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.144s" Apr 17 02:01:37.305995 containerd[1607]: time="2026-04-17T02:01:37.305509889Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for name:\"kube-controller-manager\" attempt:4 returns container id \"ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc\"" Apr 17 02:01:37.318091 containerd[1607]: time="2026-04-17T02:01:37.317343299Z" level=info msg="StartContainer for \"ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc\"" Apr 17 02:01:37.337925 containerd[1607]: time="2026-04-17T02:01:37.337380595Z" level=info msg="connecting to shim ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" protocol=ttrpc version=3 Apr 17 02:01:39.344284 kubelet[3349]: E0417 02:01:39.320727 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:39.559451 systemd[1]: Started cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope - libcontainer container ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc. 
Apr 17 02:01:40.834252 containerd[1607]: time="2026-04-17T02:01:40.833921955Z" level=info msg="StartContainer for \"ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc\" returns successfully" Apr 17 02:01:41.413952 kubelet[3349]: E0417 02:01:41.409816 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:42.953168 kubelet[3349]: E0417 02:01:42.952928 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.414s" Apr 17 02:01:42.955692 kubelet[3349]: E0417 02:01:42.955574 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:44.442171 kubelet[3349]: E0417 02:01:44.436938 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:45.421834 containerd[1607]: time="2026-04-17T02:01:45.421181769Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:01:45.421834 containerd[1607]: time="2026-04-17T02:01:45.421248661Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:01:45.425460 containerd[1607]: time="2026-04-17T02:01:45.424141405Z" level=error msg="unable to parse \"max 0\" as a uint from 
Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:01:45.425460 containerd[1607]: time="2026-04-17T02:01:45.424273826Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:01:45.426809 containerd[1607]: time="2026-04-17T02:01:45.426215250Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:01:45.426809 containerd[1607]: time="2026-04-17T02:01:45.426270234Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:01:48.605303 kubelet[3349]: E0417 02:01:48.603264 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:49.591308 kubelet[3349]: E0417 02:01:49.590881 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:51.758696 kubelet[3349]: E0417 02:01:51.757289 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:01:54.240163 containerd[1607]: time="2026-04-17T02:01:54.238909021Z" level=info msg="container event discarded" container=c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502 type=CONTAINER_STOPPED_EVENT Apr 17 02:01:54.697942 kubelet[3349]: E0417 02:01:54.651227 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:01:56.707520 kubelet[3349]: E0417 02:01:56.707185 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s" Apr 17 02:01:58.996586 containerd[1607]: time="2026-04-17T02:01:58.996069749Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:01:58.996586 containerd[1607]: time="2026-04-17T02:01:58.996426863Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:01:59.107279 containerd[1607]: time="2026-04-17T02:01:59.055094121Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:01:59.107279 containerd[1607]: time="2026-04-17T02:01:59.055283538Z" level=error msg="unable to parse \"max 0\" as a 
uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:01:59.199242 containerd[1607]: time="2026-04-17T02:01:59.179273567Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:01:59.199242 containerd[1607]: time="2026-04-17T02:01:59.179411462Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:02:00.194345 kubelet[3349]: E0417 02:02:00.177946 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:00.844315 kubelet[3349]: E0417 02:02:00.842085 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.236s" Apr 17 02:02:04.315854 kubelet[3349]: E0417 02:02:04.312386 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.679s" Apr 17 02:02:05.922219 kubelet[3349]: E0417 02:02:05.920376 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.606s" Apr 17 02:02:06.395338 containerd[1607]: time="2026-04-17T02:02:06.387385249Z" level=info msg="container event discarded" 
container=4074ecfeaaac5fe32d97a292f991a9b3aa24c0d4749613d52ca098df51c170be type=CONTAINER_DELETED_EVENT Apr 17 02:02:06.717396 kubelet[3349]: E0417 02:02:06.682086 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:09.098346 containerd[1607]: time="2026-04-17T02:02:09.041567630Z" level=info msg="container event discarded" container=46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7 type=CONTAINER_CREATED_EVENT Apr 17 02:02:10.072005 kubelet[3349]: E0417 02:02:10.070440 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.789s" Apr 17 02:02:15.056927 containerd[1607]: time="2026-04-17T02:02:15.031338510Z" level=info msg="container event discarded" container=46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7 type=CONTAINER_STARTED_EVENT Apr 17 02:02:15.478308 kubelet[3349]: E0417 02:02:14.859166 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:17.728181 kubelet[3349]: E0417 02:02:17.727902 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.616s" Apr 17 02:02:19.730160 kubelet[3349]: E0417 02:02:19.713829 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.986s" Apr 17 02:02:19.774036 containerd[1607]: time="2026-04-17T02:02:19.771131239Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 
02:02:19.867010 containerd[1607]: time="2026-04-17T02:02:19.864705757Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:02:19.875409 containerd[1607]: time="2026-04-17T02:02:19.872249636Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:02:19.875409 containerd[1607]: time="2026-04-17T02:02:19.872438860Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:02:20.228973 containerd[1607]: time="2026-04-17T02:02:20.217229101Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:02:20.228973 containerd[1607]: time="2026-04-17T02:02:20.218282939Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:02:20.424345 kubelet[3349]: E0417 02:02:20.424006 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:02:21.215403 kubelet[3349]: E0417 02:02:21.212785 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:23.968167 kubelet[3349]: E0417 02:02:23.967836 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.379s" Apr 17 02:02:26.740697 kubelet[3349]: E0417 02:02:26.707227 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:32.201002 kubelet[3349]: E0417 02:02:32.196565 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.589s" Apr 17 02:02:32.439579 kubelet[3349]: E0417 02:02:32.430464 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:34.182869 kubelet[3349]: E0417 02:02:34.181909 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.975s" Apr 17 02:02:36.759470 kubelet[3349]: E0417 02:02:36.758166 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.405s" Apr 17 02:02:36.768140 containerd[1607]: time="2026-04-17T02:02:36.758370203Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:02:36.818384 
containerd[1607]: time="2026-04-17T02:02:36.767030763Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:02:37.056255 containerd[1607]: time="2026-04-17T02:02:37.023325334Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:02:37.056255 containerd[1607]: time="2026-04-17T02:02:37.029217423Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:02:37.111863 containerd[1607]: time="2026-04-17T02:02:37.058367346Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:02:37.152243 containerd[1607]: time="2026-04-17T02:02:37.143239006Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:02:38.413923 kubelet[3349]: E0417 02:02:38.403202 3349 kubelet.go:3012] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:40.125208 kubelet[3349]: E0417 02:02:40.124837 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.334s" Apr 17 02:02:42.411260 kubelet[3349]: E0417 02:02:42.407066 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.282s" Apr 17 02:02:43.638241 kubelet[3349]: E0417 02:02:43.633033 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:46.291247 kubelet[3349]: E0417 02:02:46.287483 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.695s" Apr 17 02:02:48.250229 kubelet[3349]: E0417 02:02:48.239045 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.891s" Apr 17 02:02:49.098293 kubelet[3349]: E0417 02:02:49.097079 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:02:49.409189 kubelet[3349]: E0417 02:02:49.407389 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.121s" Apr 17 02:02:55.395853 kubelet[3349]: E0417 02:02:54.850468 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:00.200176 kubelet[3349]: E0417 02:03:00.197300 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.539s" Apr 17 
02:03:01.883008 kubelet[3349]: E0417 02:03:01.881107 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:07.397375 containerd[1607]: time="2026-04-17T02:03:07.396511024Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:03:07.713691 containerd[1607]: time="2026-04-17T02:03:07.549006265Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:03:07.756096 containerd[1607]: time="2026-04-17T02:03:07.743470379Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:03:07.756096 containerd[1607]: time="2026-04-17T02:03:07.752911756Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:03:08.058279 containerd[1607]: time="2026-04-17T02:03:08.022458484Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:03:08.146358 containerd[1607]: time="2026-04-17T02:03:08.144838844Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:03:08.451435 kubelet[3349]: E0417 02:03:08.449473 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:09.305881 kubelet[3349]: E0417 02:03:09.304846 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.107s" Apr 17 02:03:10.927892 kubelet[3349]: E0417 02:03:10.927044 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:03:10.950239 kubelet[3349]: E0417 02:03:10.928708 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:03:12.512441 kubelet[3349]: E0417 02:03:12.458351 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.146s" Apr 17 02:03:13.793829 kubelet[3349]: E0417 02:03:13.793327 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:16.122386 kubelet[3349]: E0417 
02:03:16.121049 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.576s" Apr 17 02:03:18.933503 kubelet[3349]: E0417 02:03:18.929348 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.125s" Apr 17 02:03:19.222527 kubelet[3349]: E0417 02:03:19.197410 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:21.834336 kubelet[3349]: E0417 02:03:21.824159 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.176s" Apr 17 02:03:24.858370 kubelet[3349]: E0417 02:03:24.856085 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:25.314362 kubelet[3349]: E0417 02:03:25.312341 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.406s" Apr 17 02:03:26.141403 containerd[1607]: time="2026-04-17T02:03:26.129667669Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:03:26.300359 containerd[1607]: time="2026-04-17T02:03:26.167454472Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:03:26.335242 containerd[1607]: 
time="2026-04-17T02:03:26.317882500Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:03:26.335242 containerd[1607]: time="2026-04-17T02:03:26.318188128Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:03:26.495412 containerd[1607]: time="2026-04-17T02:03:26.477214035Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:03:26.495412 containerd[1607]: time="2026-04-17T02:03:26.477457418Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:03:27.081377 kubelet[3349]: E0417 02:03:27.062159 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.195s" Apr 17 02:03:29.645219 kubelet[3349]: E0417 02:03:29.524482 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.818s" Apr 17 02:03:30.289441 kubelet[3349]: E0417 02:03:30.288987 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:31.019362 kubelet[3349]: E0417 02:03:31.018818 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.352s" Apr 17 02:03:33.292920 kubelet[3349]: E0417 02:03:33.292306 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.737s" Apr 17 02:03:34.845400 kubelet[3349]: E0417 02:03:34.844142 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.253s" Apr 17 02:03:35.458535 kubelet[3349]: E0417 02:03:35.447052 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:38.362917 kubelet[3349]: E0417 02:03:38.357338 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.741s" Apr 17 02:03:40.685297 kubelet[3349]: E0417 02:03:40.683532 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:40.821295 kubelet[3349]: E0417 02:03:40.820947 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.268s" Apr 17 02:03:41.301089 containerd[1607]: time="2026-04-17T02:03:41.298530963Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:03:41.412862 containerd[1607]: time="2026-04-17T02:03:41.302211476Z" level=error msg="unable 
to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:03:41.550925 containerd[1607]: time="2026-04-17T02:03:41.540296968Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:03:41.550925 containerd[1607]: time="2026-04-17T02:03:41.550256563Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:03:41.590555 containerd[1607]: time="2026-04-17T02:03:41.582364984Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:03:41.590555 containerd[1607]: time="2026-04-17T02:03:41.582737752Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:03:44.337189 kubelet[3349]: E0417 02:03:44.199455 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.657s" Apr 17 02:03:45.899377 kubelet[3349]: 
E0417 02:03:45.892127 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.518s" Apr 17 02:03:46.696962 kubelet[3349]: E0417 02:03:46.632435 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:48.123353 kubelet[3349]: E0417 02:03:48.113363 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.211s" Apr 17 02:03:48.582277 kubelet[3349]: E0417 02:03:48.581872 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:03:51.911456 kubelet[3349]: E0417 02:03:51.906413 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:03:54.256927 kubelet[3349]: E0417 02:03:54.089479 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.501s" Apr 17 02:03:56.022949 kubelet[3349]: E0417 02:03:56.018085 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.626s" Apr 17 02:03:56.152860 containerd[1607]: time="2026-04-17T02:03:56.140113754Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:03:56.152860 containerd[1607]: time="2026-04-17T02:03:56.140315855Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:03:56.206899 containerd[1607]: time="2026-04-17T02:03:56.206005694Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:03:56.206899 containerd[1607]: time="2026-04-17T02:03:56.206406300Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:03:56.313021 containerd[1607]: time="2026-04-17T02:03:56.311763407Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:03:56.317213 containerd[1607]: time="2026-04-17T02:03:56.313662056Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:03:57.122369 containerd[1607]: time="2026-04-17T02:03:57.118493035Z" level=info msg="container event discarded" container=46217a85797f45cae167b6198d484a7f47b3dc8af72b0b52414cc3059ecd10a7 type=CONTAINER_STOPPED_EVENT Apr 17 02:03:57.244752 kubelet[3349]: E0417 
02:03:57.242460 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:00.798574 kubelet[3349]: E0417 02:04:00.792483 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.181s" Apr 17 02:04:00.963438 containerd[1607]: time="2026-04-17T02:04:00.959304892Z" level=info msg="container event discarded" container=a3997a9fadd23778e3d326117888dc9ed26eb50c1686b9d632b5dd431da5794e type=CONTAINER_STOPPED_EVENT Apr 17 02:04:02.719894 kubelet[3349]: E0417 02:04:02.717919 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:02.903967 kubelet[3349]: E0417 02:04:02.902994 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.322s" Apr 17 02:04:06.666236 kubelet[3349]: E0417 02:04:06.663536 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.114s" Apr 17 02:04:08.276369 kubelet[3349]: E0417 02:04:08.274890 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:09.398335 containerd[1607]: time="2026-04-17T02:04:09.386942808Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:04:09.574503 containerd[1607]: time="2026-04-17T02:04:09.400568853Z" level=error msg="unable to parse \"max 0\" as a uint from 
Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:04:09.645549 containerd[1607]: time="2026-04-17T02:04:09.641837129Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:04:09.645549 containerd[1607]: time="2026-04-17T02:04:09.642142176Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:04:10.044870 containerd[1607]: time="2026-04-17T02:04:10.037457425Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:04:10.081036 containerd[1607]: time="2026-04-17T02:04:10.077389148Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:04:11.256208 containerd[1607]: time="2026-04-17T02:04:11.252538528Z" level=info msg="container event discarded" container=c0810b168f6607ff82be0df0ff8e93d40e21cccb775f74df4533c1a5dfd15502 type=CONTAINER_DELETED_EVENT Apr 17 02:04:13.890356 
kubelet[3349]: E0417 02:04:13.836585 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:13.940110 kubelet[3349]: E0417 02:04:13.913168 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.265s" Apr 17 02:04:14.840678 containerd[1607]: time="2026-04-17T02:04:14.839887438Z" level=info msg="container event discarded" container=bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf type=CONTAINER_CREATED_EVENT Apr 17 02:04:15.158152 containerd[1607]: time="2026-04-17T02:04:15.118651882Z" level=info msg="container event discarded" container=e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c type=CONTAINER_CREATED_EVENT Apr 17 02:04:15.217999 kubelet[3349]: E0417 02:04:15.217184 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:18.405069 kubelet[3349]: E0417 02:04:18.388547 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.816s" Apr 17 02:04:19.502358 containerd[1607]: time="2026-04-17T02:04:19.496682084Z" level=info msg="container event discarded" container=e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c type=CONTAINER_STARTED_EVENT Apr 17 02:04:19.719378 containerd[1607]: time="2026-04-17T02:04:19.717492427Z" level=info msg="container event discarded" container=bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf type=CONTAINER_STARTED_EVENT Apr 17 02:04:20.102374 kubelet[3349]: E0417 02:04:20.091385 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 
02:04:20.321292 kubelet[3349]: E0417 02:04:20.312890 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:20.767576 kubelet[3349]: E0417 02:04:20.766098 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.901s" Apr 17 02:04:25.615954 kubelet[3349]: E0417 02:04:25.612462 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.79s" Apr 17 02:04:26.390442 kubelet[3349]: E0417 02:04:26.388014 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:26.893461 kubelet[3349]: E0417 02:04:26.892162 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.274s" Apr 17 02:04:27.583208 containerd[1607]: time="2026-04-17T02:04:27.550368309Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.2MB.events\"" Apr 17 02:04:27.815378 containerd[1607]: time="2026-04-17T02:04:27.811308016Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope/hugetlb.1GB.events\"" Apr 17 02:04:28.451235 containerd[1607]: time="2026-04-17T02:04:28.436352673Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:04:28.570336 containerd[1607]: time="2026-04-17T02:04:28.535051499Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:04:28.871955 containerd[1607]: time="2026-04-17T02:04:28.852585175Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.2MB.events\"" Apr 17 02:04:28.889021 containerd[1607]: time="2026-04-17T02:04:28.878497670Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/hugetlb.1GB.events\"" Apr 17 02:04:29.960466 systemd[1]: cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope: Deactivated successfully. Apr 17 02:04:30.071499 systemd[1]: cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope: Consumed 1min 29.179s CPU time, 38.1M memory peak. 
Apr 17 02:04:30.197317 containerd[1607]: time="2026-04-17T02:04:30.183983301Z" level=info msg="received container exit event container_id:\"ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc\" id:\"ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc\" pid:3613 exit_status:1 exited_at:{seconds:1776391470 nanos:181651429}" Apr 17 02:04:30.769806 kubelet[3349]: W0417 02:04:30.747066 3349 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope/cpuset.cpus.effective: no such device Apr 17 02:04:36.148552 systemd[1]: cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope: Deactivated successfully. Apr 17 02:04:36.196076 systemd[1]: cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope: Consumed 1min 3.331s CPU time, 21.7M memory peak. 
Apr 17 02:04:36.296503 containerd[1607]: time="2026-04-17T02:04:36.219337820Z" level=info msg="received container exit event container_id:\"0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927\" id:\"0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927\" pid:3559 exit_status:1 exited_at:{seconds:1776391476 nanos:213095449}" Apr 17 02:04:36.332976 kubelet[3349]: E0417 02:04:36.332300 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:37.375655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc-rootfs.mount: Deactivated successfully. Apr 17 02:04:37.570779 kubelet[3349]: E0417 02:04:37.569664 3349 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a243c17a59d09458bf3b09d66260f5.slice/cri-containerd-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927.scope\": RecentStats: unable to find data in memory cache]" Apr 17 02:04:42.574985 kubelet[3349]: E0417 02:04:42.572712 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:42.791939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927-rootfs.mount: Deactivated successfully. 
Apr 17 02:04:43.445230 kubelet[3349]: E0417 02:04:43.444483 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.633s" Apr 17 02:04:43.490922 kubelet[3349]: E0417 02:04:43.490544 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:44.358951 kubelet[3349]: I0417 02:04:44.358359 3349 scope.go:117] "RemoveContainer" containerID="e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c" Apr 17 02:04:44.423012 kubelet[3349]: I0417 02:04:44.360356 3349 scope.go:117] "RemoveContainer" containerID="ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc" Apr 17 02:04:44.423012 kubelet[3349]: E0417 02:04:44.361331 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:44.423012 kubelet[3349]: E0417 02:04:44.406551 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(82faa9ca0765979bc0118d46e6420ed8)\"" pod="kube-system/kube-controller-manager-localhost" podUID="82faa9ca0765979bc0118d46e6420ed8" Apr 17 02:04:44.660551 containerd[1607]: time="2026-04-17T02:04:44.641456978Z" level=info msg="RemoveContainer for \"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\"" Apr 17 02:04:45.467024 containerd[1607]: time="2026-04-17T02:04:45.465255514Z" level=info msg="RemoveContainer for \"e60ea82085b16e212267c022e2075e234ef65e6834c94df35872b2ec6367ee2c\" returns successfully" Apr 17 02:04:47.514648 kubelet[3349]: E0417 02:04:47.509509 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="1.933s" Apr 17 02:04:47.530936 kubelet[3349]: I0417 02:04:47.528676 3349 scope.go:117] "RemoveContainer" containerID="bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf" Apr 17 02:04:47.532227 kubelet[3349]: I0417 02:04:47.532195 3349 scope.go:117] "RemoveContainer" containerID="0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927" Apr 17 02:04:47.532551 kubelet[3349]: E0417 02:04:47.532483 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:47.532927 kubelet[3349]: E0417 02:04:47.532897 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(66a243c17a59d09458bf3b09d66260f5)\"" pod="kube-system/kube-scheduler-localhost" podUID="66a243c17a59d09458bf3b09d66260f5" Apr 17 02:04:47.818682 containerd[1607]: time="2026-04-17T02:04:47.807143117Z" level=info msg="RemoveContainer for \"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\"" Apr 17 02:04:48.030189 kubelet[3349]: E0417 02:04:48.027855 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:48.181352 containerd[1607]: time="2026-04-17T02:04:48.159412039Z" level=info msg="RemoveContainer for \"bef1947ecb47877214e29a02bf50616487d79761aee407b3d3aa6f2a35e9d8cf\" returns successfully" Apr 17 02:04:51.555378 kubelet[3349]: I0417 02:04:51.555061 3349 scope.go:117] "RemoveContainer" containerID="0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927" Apr 17 02:04:51.555378 kubelet[3349]: E0417 02:04:51.555258 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:51.621395 kubelet[3349]: E0417 02:04:51.555396 3349 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(66a243c17a59d09458bf3b09d66260f5)\"" pod="kube-system/kube-scheduler-localhost" podUID="66a243c17a59d09458bf3b09d66260f5" Apr 17 02:04:51.621395 kubelet[3349]: I0417 02:04:51.555681 3349 scope.go:117] "RemoveContainer" containerID="ba61d1548c7d013516bb8e36f45c3e2214bda293ebcdec9d0053fb61bf4720dc" Apr 17 02:04:51.621395 kubelet[3349]: E0417 02:04:51.555773 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:04:52.786382 containerd[1607]: time="2026-04-17T02:04:52.783845699Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for container name:\"kube-controller-manager\" attempt:5" Apr 17 02:04:53.838749 containerd[1607]: time="2026-04-17T02:04:53.835397839Z" level=info msg="Container fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:04:54.076072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205180626.mount: Deactivated successfully. 
Apr 17 02:04:54.232978 kubelet[3349]: E0417 02:04:54.232696 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:04:54.337984 containerd[1607]: time="2026-04-17T02:04:54.335542484Z" level=info msg="CreateContainer within sandbox \"da9200c64e3b9430af1bec691648e29545feb58f1e354e9301bd9011d5800181\" for name:\"kube-controller-manager\" attempt:5 returns container id \"fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e\"" Apr 17 02:04:54.437279 containerd[1607]: time="2026-04-17T02:04:54.433032770Z" level=info msg="StartContainer for \"fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e\"" Apr 17 02:04:54.744944 containerd[1607]: time="2026-04-17T02:04:54.743485712Z" level=info msg="connecting to shim fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e" address="unix:///run/containerd/s/6dd5659f205fb6d79c30f8024892d72185c43fb20853b4daceb55fc7305fe8f6" protocol=ttrpc version=3 Apr 17 02:04:55.418583 kubelet[3349]: E0417 02:04:55.418522 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.73s" Apr 17 02:04:55.660913 containerd[1607]: time="2026-04-17T02:04:55.654198228Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:04:55.660913 containerd[1607]: time="2026-04-17T02:04:55.747212653Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file 
\"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:04:56.401934 systemd[1]: Started cri-containerd-fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e.scope - libcontainer container fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e. Apr 17 02:04:59.259359 kubelet[3349]: E0417 02:04:59.256786 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.379s" Apr 17 02:04:59.259359 kubelet[3349]: E0417 02:04:59.258057 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:05:00.365943 containerd[1607]: time="2026-04-17T02:05:00.360902954Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e.scope/hugetlb.2MB.events\"" Apr 17 02:05:00.365943 containerd[1607]: time="2026-04-17T02:05:00.362236573Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e.scope/hugetlb.1GB.events\"" Apr 17 02:05:01.056103 containerd[1607]: time="2026-04-17T02:05:01.055899962Z" level=info msg="StartContainer for \"fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e\" returns successfully" Apr 17 02:05:03.525507 kubelet[3349]: E0417 02:05:03.522389 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:05:04.423497 kubelet[3349]: E0417 02:05:04.415572 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:05:04.592885 kubelet[3349]: E0417 02:05:04.592337 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:05:04.771964 kubelet[3349]: I0417 02:05:04.764417 3349 scope.go:117] "RemoveContainer" containerID="0cfe15bca5ab0e9f06c011dcb38fe148a5eb718f9f11e3b695625884b0303927" Apr 17 02:05:04.771964 kubelet[3349]: E0417 02:05:04.767630 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:05:05.314681 containerd[1607]: time="2026-04-17T02:05:05.286378103Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for container name:\"kube-scheduler\" attempt:4" Apr 17 02:05:05.849153 containerd[1607]: time="2026-04-17T02:05:05.844636886Z" level=info msg="Container b17c5719b522c8a5f66ca2724981d17354ae8a84eb279e8e5db44008bccd5716: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:05:06.659623 containerd[1607]: time="2026-04-17T02:05:06.654989961Z" level=info msg="CreateContainer within sandbox \"2f3cc335d06fac8b129f2e0628de747fdc282f0a938b832bb4028352e59cf7a9\" for name:\"kube-scheduler\" attempt:4 returns container id \"b17c5719b522c8a5f66ca2724981d17354ae8a84eb279e8e5db44008bccd5716\"" Apr 17 02:05:07.074357 containerd[1607]: time="2026-04-17T02:05:07.073820653Z" level=info msg="StartContainer for \"b17c5719b522c8a5f66ca2724981d17354ae8a84eb279e8e5db44008bccd5716\"" Apr 17 02:05:07.473800 containerd[1607]: 
time="2026-04-17T02:05:07.472984456Z" level=info msg="connecting to shim b17c5719b522c8a5f66ca2724981d17354ae8a84eb279e8e5db44008bccd5716" address="unix:///run/containerd/s/37fa565e1f3a26166e72f3404aaa6af399b8d96545306b6a5af7e8ce01f4b5c9" protocol=ttrpc version=3 Apr 17 02:05:07.763544 kubelet[3349]: E0417 02:05:07.652376 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.036s" Apr 17 02:05:09.807422 kubelet[3349]: E0417 02:05:09.806228 3349 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:05:10.396485 kubelet[3349]: E0417 02:05:10.393858 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.579s" Apr 17 02:05:11.306183 systemd[1]: Started cri-containerd-b17c5719b522c8a5f66ca2724981d17354ae8a84eb279e8e5db44008bccd5716.scope - libcontainer container b17c5719b522c8a5f66ca2724981d17354ae8a84eb279e8e5db44008bccd5716. 
Apr 17 02:05:11.472434 kubelet[3349]: E0417 02:05:11.472203 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.078s" Apr 17 02:05:11.602451 containerd[1607]: time="2026-04-17T02:05:11.591059642Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e.scope/hugetlb.2MB.events\"" Apr 17 02:05:11.602451 containerd[1607]: time="2026-04-17T02:05:11.600128696Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82faa9ca0765979bc0118d46e6420ed8.slice/cri-containerd-fe367010a5fdbab55ef1575dd7f93f3df8847712f4b7067dfbb32b79388caf1e.scope/hugetlb.1GB.events\"" Apr 17 02:05:11.693983 kubelet[3349]: E0417 02:05:11.692459 3349 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:05:11.782381 containerd[1607]: time="2026-04-17T02:05:11.752514048Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.2MB.events\"" Apr 17 02:05:11.801252 containerd[1607]: time="2026-04-17T02:05:11.781543096Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fac3d71e98654620e15e49cc21797c2.slice/cri-containerd-af2c559bb5351f191757890e0d6b2a64b0868f7c585f121703cbfc0632304fae.scope/hugetlb.1GB.events\"" Apr 17 02:05:15.378754 kubelet[3349]: E0417 02:05:15.344484 3349 
kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:05:16.879556 kubelet[3349]: E0417 02:05:16.877812 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.322s" Apr 17 02:05:18.361258 kubelet[3349]: E0417 02:05:18.352897 3349 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.474s"