Apr 20 15:22:58.004956 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 15:22:58.005683 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 15:22:58.005732 kernel: BIOS-provided physical RAM map:
Apr 20 15:22:58.005809 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 20 15:22:58.005850 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 20 15:22:58.005890 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 20 15:22:58.005952 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 20 15:22:58.005994 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 20 15:22:58.006033 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 20 15:22:58.006040 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 20 15:22:58.006102 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 20 15:22:58.006177 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 20 15:22:58.006242 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 20 15:22:58.006284 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 20 15:22:58.006349 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 20 15:22:58.006889 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 20 15:22:58.006963 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 20 15:22:58.006971 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 20 15:22:58.006979 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 20 15:22:58.007060 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 20 15:22:58.007067 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 20 15:22:58.007074 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 20 15:22:58.007205 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 15:22:58.007215 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 15:22:58.007222 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 20 15:22:58.007229 kernel: NX (Execute Disable) protection: active
Apr 20 15:22:58.007237 kernel: APIC: Static calls initialized
Apr 20 15:22:58.007274 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 20 15:22:58.007336 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 20 15:22:58.007376 kernel: extended physical RAM map:
Apr 20 15:22:58.007384 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 20 15:22:58.007391 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 20 15:22:58.007400 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 20 15:22:58.007824 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 20 15:22:58.007834 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 20 15:22:58.007841 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 20 15:22:58.007875 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 20 15:22:58.007911 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 20 15:22:58.007947 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 20 15:22:58.007956 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 20 15:22:58.008050 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 20 15:22:58.008058 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 20 15:22:58.008169 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 20 15:22:58.008179 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 20 15:22:58.008248 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 20 15:22:58.008293 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 20 15:22:58.008301 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 20 15:22:58.008308 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 20 15:22:58.008317 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 20 15:22:58.008326 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 20 15:22:58.008336 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 20 15:22:58.008343 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 20 15:22:58.008387 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 20 15:22:58.008396 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 15:22:58.008439 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 15:22:58.008449 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 20 15:22:58.008457 kernel: efi: EFI v2.7 by EDK II
Apr 20 15:22:58.008465 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 20 15:22:58.008474 kernel: random: crng init done
Apr 20 15:22:58.008483 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 20 15:22:58.008528 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 20 15:22:58.008536 kernel: secureboot: Secure boot disabled
Apr 20 15:22:58.008544 kernel: SMBIOS 2.8 present.
Apr 20 15:22:58.008589 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 20 15:22:58.008598 kernel: DMI: Memory slots populated: 1/1
Apr 20 15:22:58.008641 kernel: Hypervisor detected: KVM
Apr 20 15:22:58.008648 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 20 15:22:58.008692 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 15:22:58.008702 kernel: kvm-clock: using sched offset of 17360100126 cycles
Apr 20 15:22:58.008713 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 15:22:58.008721 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 15:22:58.008730 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 15:22:58.008740 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 15:22:58.008843 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 20 15:22:58.008854 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 20 15:22:58.008899 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 15:22:58.008907 kernel: Using GB pages for direct mapping
Apr 20 15:22:58.008916 kernel: ACPI: Early table checksum verification disabled
Apr 20 15:22:58.008926 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 20 15:22:58.008970 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 20 15:22:58.008980 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:22:58.009016 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:22:58.009060 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 20 15:22:58.009068 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:22:58.009077 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:22:58.009087 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:22:58.009097 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 15:22:58.009105 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 20 15:22:58.009189 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 20 15:22:58.009199 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 20 15:22:58.009207 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 20 15:22:58.009215 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 20 15:22:58.009225 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 20 15:22:58.009235 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 20 15:22:58.009243 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 20 15:22:58.009286 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 20 15:22:58.009296 kernel: No NUMA configuration found
Apr 20 15:22:58.009306 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 20 15:22:58.009315 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 20 15:22:58.009323 kernel: Zone ranges:
Apr 20 15:22:58.009332 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 15:22:58.009343 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 20 15:22:58.009352 kernel: Normal empty
Apr 20 15:22:58.009395 kernel: Device empty
Apr 20 15:22:58.009404 kernel: Movable zone start for each node
Apr 20 15:22:58.009414 kernel: Early memory node ranges
Apr 20 15:22:58.009424 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 20 15:22:58.009432 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 20 15:22:58.009440 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 20 15:22:58.009449 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 20 15:22:58.009492 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 20 15:22:58.009502 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 20 15:22:58.009510 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 20 15:22:58.009518 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 20 15:22:58.009527 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 20 15:22:58.009537 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 15:22:58.009546 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 20 15:22:58.009589 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 20 15:22:58.010343 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 15:22:58.010357 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 20 15:22:58.010955 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 20 15:22:58.011044 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 20 15:22:58.011053 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 20 15:22:58.011061 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 20 15:22:58.011070 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 15:22:58.011079 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 15:22:58.011186 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 15:22:58.011193 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 15:22:58.011199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 15:22:58.011206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 15:22:58.011212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 15:22:58.011248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 15:22:58.011254 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 15:22:58.011260 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 15:22:58.011270 kernel: TSC deadline timer available
Apr 20 15:22:58.011281 kernel: CPU topo: Max. logical packages: 1
Apr 20 15:22:58.011289 kernel: CPU topo: Max. logical dies: 1
Apr 20 15:22:58.011297 kernel: CPU topo: Max. dies per package: 1
Apr 20 15:22:58.011338 kernel: CPU topo: Max. threads per core: 1
Apr 20 15:22:58.011347 kernel: CPU topo: Num. cores per package: 4
Apr 20 15:22:58.011355 kernel: CPU topo: Num. threads per package: 4
Apr 20 15:22:58.011364 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 15:22:58.011374 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 15:22:58.011382 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 15:22:58.011391 kernel: kvm-guest: setup PV sched yield
Apr 20 15:22:58.011400 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 20 15:22:58.011441 kernel: Booting paravirtualized kernel on KVM
Apr 20 15:22:58.011450 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 15:22:58.011459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 15:22:58.011467 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 15:22:58.011476 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 15:22:58.011485 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 15:22:58.011493 kernel: kvm-guest: PV spinlocks enabled
Apr 20 15:22:58.011530 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 15:22:58.011541 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 15:22:58.011550 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 15:22:58.011559 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 15:22:58.011567 kernel: Fallback order for Node 0: 0
Apr 20 15:22:58.011576 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 20 15:22:58.011613 kernel: Policy zone: DMA32
Apr 20 15:22:58.011621 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 15:22:58.011630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 15:22:58.011639 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 15:22:58.011648 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 15:22:58.011656 kernel: Dynamic Preempt: voluntary
Apr 20 15:22:58.011665 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 15:22:58.011675 kernel: rcu: RCU event tracing is enabled.
Apr 20 15:22:58.011711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 15:22:58.011720 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 15:22:58.011729 kernel: Rude variant of Tasks RCU enabled.
Apr 20 15:22:58.011738 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 15:22:58.011746 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 15:22:58.011755 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 15:22:58.011764 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 15:22:58.012302 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 15:22:58.012312 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 15:22:58.012321 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 15:22:58.012330 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 15:22:58.012339 kernel: Console: colour dummy device 80x25
Apr 20 15:22:58.012348 kernel: printk: legacy console [ttyS0] enabled
Apr 20 15:22:58.012356 kernel: ACPI: Core revision 20240827
Apr 20 15:22:58.012620 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 15:22:58.012629 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 15:22:58.012636 kernel: x2apic enabled
Apr 20 15:22:58.012642 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 15:22:58.012648 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 15:22:58.012654 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 15:22:58.012661 kernel: kvm-guest: setup PV IPIs
Apr 20 15:22:58.012703 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 15:22:58.012710 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 15:22:58.012716 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 15:22:58.012723 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 15:22:58.012732 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 15:22:58.012741 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 15:22:58.012750 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 15:22:58.014356 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 15:22:58.014376 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 15:22:58.014385 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 15:22:58.014396 kernel: RETBleed: Vulnerable
Apr 20 15:22:58.014405 kernel: Speculative Store Bypass: Vulnerable
Apr 20 15:22:58.014413 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 15:22:58.014422 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 15:22:58.014508 kernel: active return thunk: its_return_thunk
Apr 20 15:22:58.014518 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 15:22:58.014528 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 15:22:58.014538 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 15:22:58.014548 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 15:22:58.014558 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 15:22:58.014566 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 15:22:58.014614 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 15:22:58.014624 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 15:22:58.014633 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 15:22:58.014642 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 15:22:58.014651 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 15:22:58.014660 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 15:22:58.014669 kernel: Freeing SMP alternatives memory: 32K
Apr 20 15:22:58.014715 kernel: pid_max: default: 32768 minimum: 301
Apr 20 15:22:58.014725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 15:22:58.014735 kernel: landlock: Up and running.
Apr 20 15:22:58.014744 kernel: SELinux: Initializing.
Apr 20 15:22:58.014754 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 15:22:58.014764 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 15:22:58.014812 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 15:22:58.014855 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 15:22:58.014864 kernel: signal: max sigframe size: 3632
Apr 20 15:22:58.014873 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 15:22:58.014884 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 15:22:58.014897 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 15:22:58.014907 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 15:22:58.014917 kernel: smp: Bringing up secondary CPUs ...
Apr 20 15:22:58.014928 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 15:22:58.026275 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 15:22:58.026288 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 15:22:58.026297 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 15:22:58.026306 kernel: Memory: 2399268K/2565800K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 160640K reserved, 0K cma-reserved)
Apr 20 15:22:58.026315 kernel: devtmpfs: initialized
Apr 20 15:22:58.026324 kernel: x86/mm: Memory block size: 128MB
Apr 20 15:22:58.026334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 20 15:22:58.026696 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 20 15:22:58.026707 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 20 15:22:58.026717 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 20 15:22:58.026726 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 20 15:22:58.026734 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 20 15:22:58.026743 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 15:22:58.027097 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 15:22:58.027110 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 15:22:58.027196 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 15:22:58.027206 kernel: audit: initializing netlink subsys (disabled)
Apr 20 15:22:58.027216 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 15:22:58.027226 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 15:22:58.027234 kernel: audit: type=2000 audit(1776698566.534:1): state=initialized audit_enabled=0 res=1
Apr 20 15:22:58.027242 kernel: cpuidle: using governor menu
Apr 20 15:22:58.027294 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 20 15:22:58.027304 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 15:22:58.027313 kernel: dca service started, version 1.12.1
Apr 20 15:22:58.027322 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 20 15:22:58.027330 kernel: PCI: Using configuration type 1 for base access
Apr 20 15:22:58.027339 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 15:22:58.027348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 15:22:58.027394 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 15:22:58.027405 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 15:22:58.027414 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 15:22:58.027424 kernel: ACPI: Added _OSI(Module Device)
Apr 20 15:22:58.027434 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 15:22:58.027444 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 15:22:58.027453 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 15:22:58.027525 kernel: ACPI: Interpreter enabled
Apr 20 15:22:58.027535 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 20 15:22:58.027545 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 15:22:58.027555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 15:22:58.027565 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 15:22:58.027573 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 15:22:58.027582 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 15:22:58.028434 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 15:22:58.028679 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 15:22:58.028852 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 15:22:58.028868 kernel: PCI host bridge to bus 0000:00
Apr 20 15:22:58.028998 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 15:22:58.029457 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 15:22:58.030038 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 15:22:58.030546 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 20 15:22:58.030708 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 20 15:22:58.030856 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 20 15:22:58.031320 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 15:22:58.031513 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 15:22:58.031631 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 15:22:58.031751 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 20 15:22:58.032423 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 20 15:22:58.032617 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 20 15:22:58.032820 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 15:22:58.032990 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 15:22:58.033100 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 20 15:22:58.033558 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 20 15:22:58.033669 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 20 15:22:58.034081 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 15:22:58.034310 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 20 15:22:58.034420 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 20 15:22:58.034528 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 20 15:22:58.034646 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 15:22:58.034755 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 20 15:22:58.035407 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 20 15:22:58.035875 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 20 15:22:58.035991 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 20 15:22:58.036105 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 15:22:58.036322 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 15:22:58.036478 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 15:22:58.037351 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 20 15:22:58.037466 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 20 15:22:58.037580 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 15:22:58.037688 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 20 15:22:58.037699 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 15:22:58.037708 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 15:22:58.037717 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 15:22:58.038029 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 15:22:58.038039 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 15:22:58.038048 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 15:22:58.038057 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 15:22:58.038066 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 15:22:58.038075 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 15:22:58.038084 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 15:22:58.038378 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 15:22:58.038389 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 15:22:58.038399 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 15:22:58.038408 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 15:22:58.038418 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 15:22:58.038428 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 15:22:58.038438 kernel: iommu: Default domain type: Translated
Apr 20 15:22:58.038736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 15:22:58.038748 kernel: efivars: Registered efivars operations
Apr 20 15:22:58.038758 kernel: PCI: Using ACPI for IRQ routing
Apr 20 15:22:58.038983 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 15:22:58.038995 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 20 15:22:58.039004 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 20 15:22:58.039012 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 20 15:22:58.039061 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 20 15:22:58.039070 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 20 15:22:58.039079 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 20 15:22:58.039088 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 20 15:22:58.039097 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 20 15:22:58.040003 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 15:22:58.040833 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 15:22:58.042008 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 15:22:58.042025 kernel: vgaarb: loaded
Apr 20 15:22:58.042036 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 15:22:58.042046 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 15:22:58.042055 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 15:22:58.042065 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 15:22:58.042075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 15:22:58.042488 kernel: pnp: PnP ACPI init
Apr 20 15:22:58.042667 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 20 15:22:58.042683 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 15:22:58.042695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 15:22:58.043531 kernel: NET: Registered PF_INET protocol family
Apr 20 15:22:58.043584 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 15:22:58.044107 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 15:22:58.044246 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 15:22:58.044257 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 15:22:58.044266 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 15:22:58.044276 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 15:22:58.044287 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 15:22:58.044297 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 15:22:58.044351 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 15:22:58.044360 kernel: NET: Registered PF_XDP protocol family
Apr 20 15:22:58.044533 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 20 15:22:58.044664 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 20 15:22:58.044836 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 15:22:58.044956 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 15:22:58.045500 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 15:22:58.045621 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 20 15:22:58.045738 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 20 15:22:58.046299 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 20 15:22:58.046316 kernel: PCI: CLS 0 bytes, default 64
Apr 20 15:22:58.046326 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 15:22:58.046335 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 15:22:58.046396 kernel: Initialise system trusted keyrings
Apr 20 15:22:58.046435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 15:22:58.046445 kernel: Key type asymmetric registered
Apr 20 15:22:58.046454 kernel: Asymmetric key parser 'x509' registered
Apr 20 15:22:58.046502 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 15:22:58.046512 kernel: io scheduler mq-deadline registered
Apr 20 15:22:58.046521 kernel: io scheduler kyber registered
Apr 20 15:22:58.046530 kernel: io scheduler bfq registered
Apr 20 15:22:58.046539 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 15:22:58.046549 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 15:22:58.046559 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 15:22:58.046568 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 15:22:58.046925 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 15:22:58.046937 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 15:22:58.046947 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 15:22:58.046957 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 15:22:58.046967 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 15:22:58.046977 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 20 15:22:58.047458 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 15:22:58.047985 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 15:22:58.048469 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T15:22:52 UTC (1776698572)
Apr 20 15:22:58.048596 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 20 15:22:58.048610 kernel: intel_pstate: CPU model not supported
Apr 20 15:22:58.048620 kernel: efifb: probing for efifb
Apr 20 15:22:58.048629 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 20 15:22:58.049109 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 20 15:22:58.049204 kernel: efifb: scrolling: redraw
Apr 20 15:22:58.049215 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 20 15:22:58.049225 kernel: Console: switching to colour frame buffer device 160x50
Apr 20 15:22:58.049234 kernel: fb0: EFI VGA frame buffer device
Apr 20 15:22:58.049243 kernel: pstore: Using crash dump compression: deflate
Apr 20 15:22:58.049251 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 20 15:22:58.049307 kernel: NET: Registered PF_INET6 protocol family
Apr 20 15:22:58.049319 kernel: Segment Routing with IPv6
Apr 20 15:22:58.049328 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 15:22:58.049337 kernel: NET: Registered PF_PACKET protocol family
Apr 20 15:22:58.049346 kernel: Key type dns_resolver registered
Apr 20 15:22:58.049356 kernel: IPI shorthand broadcast: enabled
Apr 20 15:22:58.049365 kernel: sched_clock: Marking stable (3834038838, 2915852741)->(7864115797, -1114224218)
Apr 20 15:22:58.049415 kernel: registered taskstats version 1
Apr 20 15:22:58.049425 kernel: Loading compiled-in X.509 certificates
Apr 20 15:22:58.049434 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
20 15:22:58.049443 kernel: Demotion targets for Node 0: null Apr 20 15:22:58.049453 kernel: Key type .fscrypt registered Apr 20 15:22:58.049462 kernel: Key type fscrypt-provisioning registered Apr 20 15:22:58.049471 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 20 15:22:58.049519 kernel: ima: Allocated hash algorithm: sha1 Apr 20 15:22:58.049529 kernel: ima: No architecture policies found Apr 20 15:22:58.049538 kernel: clk: Disabling unused clocks Apr 20 15:22:58.049548 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 20 15:22:58.049557 kernel: Write protecting the kernel read-only data: 47104k Apr 20 15:22:58.049566 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 20 15:22:58.049580 kernel: Run /init as init process Apr 20 15:22:58.049590 kernel: with arguments: Apr 20 15:22:58.049640 kernel: /init Apr 20 15:22:58.049649 kernel: with environment: Apr 20 15:22:58.049658 kernel: HOME=/ Apr 20 15:22:58.049667 kernel: TERM=linux Apr 20 15:22:58.049676 kernel: SCSI subsystem initialized Apr 20 15:22:58.049686 kernel: libata version 3.00 loaded. 
Apr 20 15:22:58.049884 kernel: ahci 0000:00:1f.2: version 3.0 Apr 20 15:22:58.050307 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 20 15:22:58.050452 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 20 15:22:58.050579 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 20 15:22:58.050709 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 20 15:22:58.050905 kernel: scsi host0: ahci Apr 20 15:22:58.051043 kernel: scsi host1: ahci Apr 20 15:22:58.051550 kernel: scsi host2: ahci Apr 20 15:22:58.051688 kernel: scsi host3: ahci Apr 20 15:22:58.051873 kernel: scsi host4: ahci Apr 20 15:22:58.052010 kernel: scsi host5: ahci Apr 20 15:22:58.052024 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 20 15:22:58.052301 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 20 15:22:58.052316 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 20 15:22:58.052325 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 20 15:22:58.052335 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 20 15:22:58.052344 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 20 15:22:58.052353 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 20 15:22:58.052362 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 20 15:22:58.052416 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 20 15:22:58.052426 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 20 15:22:58.052435 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 20 15:22:58.052444 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 15:22:58.052454 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 20 15:22:58.052464 kernel: ata3.00: applying bridge limits Apr 20 15:22:58.052473 kernel: ata6: 
SATA link down (SStatus 0 SControl 300) Apr 20 15:22:58.052857 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 15:22:58.052872 kernel: ata3.00: configured for UDMA/100 Apr 20 15:22:58.053079 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 20 15:22:58.053303 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 20 15:22:58.053433 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 20 15:22:58.053447 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 20 15:22:58.053504 kernel: GPT:16515071 != 27000831 Apr 20 15:22:58.053514 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 20 15:22:58.053524 kernel: GPT:16515071 != 27000831 Apr 20 15:22:58.053535 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 20 15:22:58.053544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 20 15:22:58.053691 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 20 15:22:58.053705 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 20 15:22:58.054488 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 20 15:22:58.054508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 20 15:22:58.054517 kernel: device-mapper: uevent: version 1.0.3 Apr 20 15:22:58.054527 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 20 15:22:58.054537 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 20 15:22:58.054549 kernel: raid6: avx512x4 gen() 32321 MB/s Apr 20 15:22:58.054560 kernel: raid6: avx512x2 gen() 29107 MB/s Apr 20 15:22:58.054618 kernel: raid6: avx512x1 gen() 26737 MB/s Apr 20 15:22:58.054627 kernel: raid6: avx2x4 gen() 20865 MB/s Apr 20 15:22:58.054636 kernel: raid6: avx2x2 gen() 20983 MB/s Apr 20 15:22:58.054648 kernel: raid6: avx2x1 gen() 25356 MB/s Apr 20 15:22:58.054659 kernel: raid6: using algorithm avx512x4 gen() 32321 MB/s Apr 20 15:22:58.054668 kernel: raid6: .... xor() 7861 MB/s, rmw enabled Apr 20 15:22:58.054677 kernel: raid6: using avx512x2 recovery algorithm Apr 20 15:22:58.054724 kernel: xor: automatically using best checksumming function avx Apr 20 15:22:58.054734 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 20 15:22:58.054745 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (182) Apr 20 15:22:58.054757 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f Apr 20 15:22:58.054806 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 20 15:22:58.054815 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 20 15:22:58.054824 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 20 15:22:58.054872 kernel: loop: module loaded Apr 20 15:22:58.054881 kernel: loop0: detected capacity change from 0 to 106960 Apr 20 15:22:58.054891 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 20 15:22:58.054902 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Apr 20 15:22:58.054914 
systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 20 15:22:58.054924 systemd[1]: Successfully made /usr/ read-only. Apr 20 15:22:58.055283 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 15:22:58.055299 systemd[1]: Detected virtualization kvm. Apr 20 15:22:58.055310 systemd[1]: Detected architecture x86-64. Apr 20 15:22:58.055321 systemd[1]: Running in initrd. Apr 20 15:22:58.055331 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 15:22:58.055342 systemd[1]: No hostname configured, using default hostname. Apr 20 15:22:58.055395 systemd[1]: Hostname set to . Apr 20 15:22:58.055406 kernel: hrtimer: interrupt took 4620109 ns Apr 20 15:22:58.055415 systemd[1]: Queued start job for default target initrd.target. Apr 20 15:22:58.055425 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 20 15:22:58.055435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 15:22:58.055446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 15:22:58.055497 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 20 15:22:58.055508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 15:22:58.055518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 20 15:22:58.055528 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Apr 20 15:22:58.055538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 15:22:58.055547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 15:22:58.055557 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 20 15:22:58.055607 systemd[1]: Reached target paths.target - Path Units. Apr 20 15:22:58.055654 systemd[1]: Reached target slices.target - Slice Units. Apr 20 15:22:58.055665 systemd[1]: Reached target swap.target - Swaps. Apr 20 15:22:58.055676 systemd[1]: Reached target timers.target - Timer Units. Apr 20 15:22:58.055687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 15:22:58.055697 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 15:22:58.055744 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 15:22:58.055755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 20 15:22:58.055998 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 20 15:22:58.056079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 15:22:58.056091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 15:22:58.056101 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 20 15:22:58.056110 systemd[1]: Reached target sockets.target - Socket Units. Apr 20 15:22:58.056465 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 20 15:22:58.056479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 20 15:22:58.056491 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 15:22:58.056501 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Apr 20 15:22:58.056551 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 20 15:22:58.056562 systemd[1]: Starting systemd-fsck-usr.service... Apr 20 15:22:58.056610 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 15:22:58.056621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 15:22:58.056631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 15:22:58.056641 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 20 15:22:58.056974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 15:22:58.057277 systemd[1]: Finished systemd-fsck-usr.service. Apr 20 15:22:58.057291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 15:22:58.057301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:22:58.057446 systemd-journald[319]: Collecting audit messages is enabled. Apr 20 15:22:58.057517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 20 15:22:58.057530 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 15:22:58.057542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 20 15:22:58.057553 kernel: audit: type=1130 audit(1776698578.047:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.057563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 20 15:22:58.057616 systemd-journald[319]: Journal started Apr 20 15:22:58.057640 systemd-journald[319]: Runtime Journal (/run/log/journal/70cb549fe860499a85dff43974fb897d) is 6M, max 48M, 42M free. Apr 20 15:22:58.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.080493 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 15:22:58.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.087406 kernel: Bridge firewalling registered Apr 20 15:22:58.087661 kernel: audit: type=1130 audit(1776698578.084:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.095286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 20 15:22:58.096879 systemd-modules-load[322]: Inserted module 'br_netfilter' Apr 20 15:22:58.159724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 15:22:58.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.167042 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 20 15:22:58.174202 kernel: audit: type=1130 audit(1776698578.160:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:22:58.207757 systemd-tmpfiles[337]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 20 15:22:58.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.208070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 15:22:58.273582 kernel: audit: type=1130 audit(1776698578.209:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.246619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 15:22:58.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.299598 kernel: audit: type=1130 audit(1776698578.247:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.263090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 15:22:58.304647 kernel: audit: type=1130 audit(1776698578.273:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.303295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 20 15:22:58.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.334516 kernel: audit: type=1130 audit(1776698578.320:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.339904 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 20 15:22:58.371000 audit: BPF prog-id=5 op=LOAD Apr 20 15:22:58.376588 kernel: audit: type=1334 audit(1776698578.371:9): prog-id=5 op=LOAD Apr 20 15:22:58.378414 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 15:22:58.478653 dracut-cmdline[356]: dracut-109 Apr 20 15:22:58.493859 dracut-cmdline[356]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a Apr 20 15:22:58.766032 systemd-resolved[358]: Positive Trust Anchors: Apr 20 15:22:58.766522 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 15:22:58.766527 systemd-resolved[358]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 15:22:58.766577 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 15:22:58.892012 systemd-resolved[358]: Defaulting to hostname 'linux'. Apr 20 15:22:58.912401 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 15:22:58.977497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 15:22:58.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:58.997691 kernel: audit: type=1130 audit(1776698578.977:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:22:59.766382 kernel: Loading iSCSI transport class v2.0-870. Apr 20 15:22:59.839875 kernel: iscsi: registered transport (tcp) Apr 20 15:22:59.971477 kernel: iscsi: registered transport (qla4xxx) Apr 20 15:22:59.971872 kernel: QLogic iSCSI HBA Driver Apr 20 15:23:00.197640 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 15:23:00.298420 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. 
Apr 20 15:23:00.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:00.330237 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 15:23:00.344461 kernel: audit: type=1130 audit(1776698580.322:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:01.005638 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 20 15:23:01.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:01.023344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 20 15:23:01.025594 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 20 15:23:01.225568 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 20 15:23:01.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:01.240000 audit: BPF prog-id=6 op=LOAD Apr 20 15:23:01.241000 audit: BPF prog-id=7 op=LOAD Apr 20 15:23:01.245763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 15:23:01.465352 systemd-udevd[593]: Using default interface naming scheme 'v258'. Apr 20 15:23:01.900724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 20 15:23:01.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:01.922421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 20 15:23:02.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:02.130428 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 15:23:02.151000 audit: BPF prog-id=8 op=LOAD Apr 20 15:23:02.159915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 15:23:02.270449 dracut-pre-trigger[684]: rd.md=0: removing MD RAID activation Apr 20 15:23:02.463510 systemd-networkd[704]: lo: Link UP Apr 20 15:23:02.463841 systemd-networkd[704]: lo: Gained carrier Apr 20 15:23:02.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:02.466491 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 15:23:02.476622 systemd[1]: Reached target network.target - Network. Apr 20 15:23:02.556341 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 15:23:02.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:02.569717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 15:23:03.202670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 20 15:23:03.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:03.236294 kernel: kauditd_printk_skb: 9 callbacks suppressed Apr 20 15:23:03.226572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 20 15:23:03.247285 kernel: audit: type=1130 audit(1776698583.213:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:03.470350 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 20 15:23:03.522033 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 20 15:23:04.161946 kernel: cryptd: max_cpu_qlen set to 1000 Apr 20 15:23:04.243611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 15:23:04.251576 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 20 15:23:04.277370 kernel: AES CTR mode by8 optimization enabled Apr 20 15:23:04.304581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 20 15:23:04.399768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 20 15:23:04.427674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 15:23:04.440065 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:23:04.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:04.454534 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 20 15:23:04.478965 kernel: audit: type=1131 audit(1776698584.453:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:04.465421 systemd-networkd[704]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 15:23:04.465426 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 15:23:04.487722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 15:23:04.513378 disk-uuid[869]: Primary Header is updated. Apr 20 15:23:04.513378 disk-uuid[869]: Secondary Entries is updated. Apr 20 15:23:04.513378 disk-uuid[869]: Secondary Header is updated. Apr 20 15:23:04.495413 systemd-networkd[704]: eth0: Link UP Apr 20 15:23:04.495789 systemd-networkd[704]: eth0: Gained carrier Apr 20 15:23:04.495848 systemd-networkd[704]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 15:23:04.560358 systemd-networkd[704]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 15:23:04.677771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:23:04.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:04.699619 kernel: audit: type=1130 audit(1776698584.686:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:04.948496 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Apr 20 15:23:04.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:04.974323 kernel: audit: type=1130 audit(1776698584.963:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:04.984219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 15:23:04.989394 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 15:23:05.002936 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 20 15:23:05.024505 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 20 15:23:05.250017 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 20 15:23:05.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:05.279431 kernel: audit: type=1130 audit(1776698585.262:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:05.677331 disk-uuid[870]: Warning: The kernel is still using the old partition table. Apr 20 15:23:05.677331 disk-uuid[870]: The new table will be used at the next reboot or after you Apr 20 15:23:05.677331 disk-uuid[870]: run partprobe(8) or kpartx(8) Apr 20 15:23:05.677331 disk-uuid[870]: The operation has completed successfully. Apr 20 15:23:05.712942 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 20 15:23:05.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:05.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:05.738534 kernel: audit: type=1130 audit(1776698585.719:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:05.713222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 20 15:23:05.744005 kernel: audit: type=1131 audit(1776698585.719:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:05.747586 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 20 15:23:05.962671 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (899)
Apr 20 15:23:05.963224 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:23:05.971690 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:23:05.986338 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 15:23:05.987351 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 15:23:06.082420 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:23:06.102582 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 20 15:23:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:06.131941 kernel: audit: type=1130 audit(1776698586.106:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:06.127619 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 20 15:23:06.651858 systemd-networkd[704]: eth0: Gained IPv6LL
Apr 20 15:23:07.737278 ignition[918]: Ignition 2.24.0
Apr 20 15:23:07.738404 ignition[918]: Stage: fetch-offline
Apr 20 15:23:07.741066 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Apr 20 15:23:07.741732 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:23:07.744605 ignition[918]: parsed url from cmdline: ""
Apr 20 15:23:07.744718 ignition[918]: no config URL provided
Apr 20 15:23:07.745513 ignition[918]: reading system config file "/usr/lib/ignition/user.ign"
Apr 20 15:23:07.745574 ignition[918]: no config at "/usr/lib/ignition/user.ign"
Apr 20 15:23:07.745675 ignition[918]: op(1): [started] loading QEMU firmware config module
Apr 20 15:23:07.745678 ignition[918]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 20 15:23:07.856499 ignition[918]: op(1): [finished] loading QEMU firmware config module
Apr 20 15:23:07.866623 ignition[918]: QEMU firmware config was not found. Ignoring...
Apr 20 15:23:08.016487 ignition[918]: parsing config with SHA512: b29eadfcbbdc25f48843961d09f1e996e61a747fa957e56559458b22445d52485a8e6390c075c02cb4a7392f1c83814966cfe9ee26339554ff94c835208be5ee
Apr 20 15:23:08.180579 unknown[918]: fetched base config from "system"
Apr 20 15:23:08.182717 unknown[918]: fetched user config from "qemu"
Apr 20 15:23:08.193591 ignition[918]: fetch-offline: fetch-offline passed
Apr 20 15:23:08.195083 ignition[918]: Ignition finished successfully
Apr 20 15:23:08.204867 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 15:23:08.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:08.218701 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 20 15:23:08.231628 kernel: audit: type=1130 audit(1776698588.214:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:08.232409 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 20 15:23:08.608024 ignition[928]: Ignition 2.24.0
Apr 20 15:23:08.608719 ignition[928]: Stage: kargs
Apr 20 15:23:08.610356 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Apr 20 15:23:08.610370 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:23:08.674456 ignition[928]: kargs: kargs passed
Apr 20 15:23:08.674623 ignition[928]: Ignition finished successfully
Apr 20 15:23:08.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:08.687100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 20 15:23:08.723710 kernel: audit: type=1130 audit(1776698588.702:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:08.721697 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 20 15:23:09.395665 ignition[937]: Ignition 2.24.0
Apr 20 15:23:09.395710 ignition[937]: Stage: disks
Apr 20 15:23:09.397662 ignition[937]: no configs at "/usr/lib/ignition/base.d"
Apr 20 15:23:09.397674 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:23:09.443423 ignition[937]: disks: disks passed
Apr 20 15:23:09.443772 ignition[937]: Ignition finished successfully
Apr 20 15:23:09.464174 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 20 15:23:09.481481 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 20 15:23:09.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:09.509552 kernel: audit: type=1130 audit(1776698589.478:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:09.481805 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 20 15:23:09.520945 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 20 15:23:09.539941 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 15:23:09.552059 systemd[1]: Reached target basic.target - Basic System.
Apr 20 15:23:09.576327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 20 15:23:09.824312 systemd-fsck[947]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Apr 20 15:23:09.855984 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 20 15:23:09.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:09.891636 kernel: audit: type=1130 audit(1776698589.867:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:09.916480 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 20 15:23:10.603998 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none.
Apr 20 15:23:10.607582 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 20 15:23:10.644737 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 20 15:23:10.709691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 15:23:10.734459 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 20 15:23:10.745592 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 20 15:23:10.746049 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 20 15:23:10.748437 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 20 15:23:10.839759 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 20 15:23:10.855383 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (956)
Apr 20 15:23:10.871990 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 20 15:23:10.900437 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:23:10.900626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:23:10.900640 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 15:23:10.900652 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 15:23:10.907767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 15:23:11.911994 kernel: loop1: detected capacity change from 0 to 43472
Apr 20 15:23:11.961027 kernel: loop1: p1 p2 p3
Apr 20 15:23:12.128694 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:12.128920 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:12.128938 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:12.134288 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:12.141288 systemd-confext[1046]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 15:23:12.233369 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:12.583561 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 15:23:12.617057 kernel: loop2: detected capacity change from 0 to 43472
Apr 20 15:23:12.642733 kernel: loop2: p1 p2 p3
Apr 20 15:23:12.724760 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:12.725224 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:12.725282 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:12.731820 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:12.733337 (sd-merge)[1059]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument
Apr 20 15:23:12.754294 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:12.902444 kernel: erofs: (device dm-1): mounted with root inode @ nid 40.
Apr 20 15:23:12.903095 (sd-merge)[1059]: Using extensions '00-flatcar-default.raw'.
Apr 20 15:23:12.905423 (sd-merge)[1059]: Merged extensions into '/sysroot/etc'.
Apr 20 15:23:12.919569 initrd-setup-root[1066]: /etc 00-flatcar-default Mon 2026-04-20 15:22:58 UTC
Apr 20 15:23:12.922059 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 20 15:23:12.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:12.944704 kernel: audit: type=1130 audit(1776698592.928:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:12.937986 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 20 15:23:12.960608 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 20 15:23:12.980680 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 20 15:23:12.986023 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:23:13.046899 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 20 15:23:13.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:13.062099 kernel: audit: type=1130 audit(1776698593.050:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:13.189642 ignition[1074]: INFO : Ignition 2.24.0
Apr 20 15:23:13.189642 ignition[1074]: INFO : Stage: mount
Apr 20 15:23:13.198666 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 15:23:13.198666 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:23:13.217584 ignition[1074]: INFO : mount: mount passed
Apr 20 15:23:13.224422 ignition[1074]: INFO : Ignition finished successfully
Apr 20 15:23:13.231279 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 20 15:23:13.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:13.247456 kernel: audit: type=1130 audit(1776698593.231:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:13.242262 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 20 15:23:13.367763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 20 15:23:13.490560 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1088)
Apr 20 15:23:13.499290 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 15:23:13.499464 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 15:23:13.509730 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 15:23:13.510714 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 15:23:13.558239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 20 15:23:13.956463 ignition[1105]: INFO : Ignition 2.24.0
Apr 20 15:23:13.956463 ignition[1105]: INFO : Stage: files
Apr 20 15:23:13.966284 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 20 15:23:13.966284 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 15:23:13.976791 ignition[1105]: DEBUG : files: compiled without relabeling support, skipping
Apr 20 15:23:13.976791 ignition[1105]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 20 15:23:13.976791 ignition[1105]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 20 15:23:13.998761 ignition[1105]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 20 15:23:14.006944 ignition[1105]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 20 15:23:14.014034 ignition[1105]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 20 15:23:14.010539 unknown[1105]: wrote ssh authorized keys file for user: core
Apr 20 15:23:14.077338 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 15:23:14.086919 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 20 15:23:14.441958 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 20 15:23:14.764364 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 20 15:23:14.764364 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:23:14.780711 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 20 15:23:15.290730 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 20 15:23:18.647527 ignition[1105]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 20 15:23:18.647527 ignition[1105]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 20 15:23:18.663105 ignition[1105]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 20 15:23:18.773224 ignition[1105]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 15:23:18.783761 ignition[1105]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 20 15:23:18.799501 ignition[1105]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 20 15:23:18.810211 ignition[1105]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 20 15:23:18.810211 ignition[1105]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 20 15:23:18.878722 ignition[1105]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 15:23:18.892468 ignition[1105]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 20 15:23:18.892468 ignition[1105]: INFO : files: files passed
Apr 20 15:23:18.892468 ignition[1105]: INFO : Ignition finished successfully
Apr 20 15:23:18.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:18.935733 kernel: audit: type=1130 audit(1776698598.911:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:18.900335 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 20 15:23:18.922930 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 20 15:23:18.951801 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 20 15:23:18.964715 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 20 15:23:18.965734 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 20 15:23:18.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:18.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:18.996953 kernel: audit: type=1130 audit(1776698598.976:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:18.997019 kernel: audit: type=1131 audit(1776698598.976:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:19.005344 initrd-setup-root-after-ignition[1136]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 20 15:23:19.017019 initrd-setup-root-after-ignition[1138]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 15:23:19.017019 initrd-setup-root-after-ignition[1138]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 15:23:19.027781 initrd-setup-root-after-ignition[1142]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 20 15:23:19.038448 kernel: loop3: detected capacity change from 0 to 43472
Apr 20 15:23:19.038499 kernel: loop3: p1 p2 p3
Apr 20 15:23:19.065928 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.066001 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:19.066010 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:19.070513 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:19.073824 systemd-confext[1144]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument
Apr 20 15:23:19.088304 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.262370 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 15:23:19.294924 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 15:23:19.299590 kernel: loop4: p1 p2 p3
Apr 20 15:23:19.335706 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.336007 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:19.336020 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:19.338760 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:19.341350 (sd-merge)[1157]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument
Apr 20 15:23:19.354326 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.444310 kernel: erofs: (device dm-2): mounted with root inode @ nid 40.
Apr 20 15:23:19.445344 (sd-merge)[1157]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 15:23:19.467353 kernel: device-mapper: ioctl: remove_all left 2 open device(s)
Apr 20 15:23:19.477359 kernel: loop4: detected capacity change from 0 to 178200
Apr 20 15:23:19.483820 kernel: loop4: p1 p2 p3
Apr 20 15:23:19.487288 kernel: loop4: p1 p2 p3
Apr 20 15:23:19.541176 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.541251 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:19.541261 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:19.546322 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:19.548375 systemd-sysext[1165]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument
Apr 20 15:23:19.562710 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.678228 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 15:23:19.719360 kernel: loop5: detected capacity change from 0 to 378016
Apr 20 15:23:19.723404 kernel: loop5: p1 p2 p3
Apr 20 15:23:19.777940 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.778256 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:19.778269 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:19.781061 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:19.783726 systemd-sysext[1165]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument
Apr 20 15:23:19.797508 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:19.927477 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 15:23:20.019383 kernel: loop6: detected capacity change from 0 to 219192
Apr 20 15:23:20.099685 kernel: loop7: detected capacity change from 0 to 178200
Apr 20 15:23:20.104401 kernel: loop7: p1 p2 p3
Apr 20 15:23:20.135896 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:20.136287 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:20.136309 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:20.140931 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:20.144718 (sd-merge)[1182]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument
Apr 20 15:23:20.159292 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:20.241449 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Apr 20 15:23:20.246297 kernel: loop1: detected capacity change from 0 to 378016
Apr 20 15:23:20.250315 kernel: loop1: p1 p2 p3
Apr 20 15:23:20.256361 kernel: loop1: p1 p2 p3
Apr 20 15:23:20.295700 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:20.295822 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 15:23:20.300309 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 15:23:20.300692 kernel: device-mapper: ioctl: error adding target to table
Apr 20 15:23:20.303524 (sd-merge)[1182]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:3) failed: Invalid argument
Apr 20 15:23:20.378631 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 15:23:20.506477 kernel: erofs: (device dm-3): mounted with root inode @ nid 39.
Apr 20 15:23:20.516327 kernel: loop3: detected capacity change from 0 to 219192
Apr 20 15:23:20.536935 (sd-merge)[1182]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.34.4-x86-64.raw'.
Apr 20 15:23:20.539229 (sd-merge)[1182]: Merged extensions into '/sysroot/usr'.
Apr 20 15:23:20.558227 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 20 15:23:20.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:20.579599 kernel: audit: type=1130 audit(1776698600.564:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:20.564716 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 20 15:23:20.590700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 20 15:23:20.743722 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 20 15:23:20.744823 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 20 15:23:20.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:20.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:20.775972 kernel: audit: type=1130 audit(1776698600.752:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:20.753348 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies.
Apr 20 15:23:20.784619 kernel: audit: type=1131 audit(1776698600.752:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:20.754008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 20 15:23:20.781499 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 20 15:23:20.800095 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 20 15:23:20.804187 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 20 15:23:20.882356 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 20 15:23:20.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:20.902072 kernel: audit: type=1130 audit(1776698600.882:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:20.915737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 20 15:23:20.993740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 20 15:23:21.010416 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 15:23:21.018056 systemd[1]: Stopped target timers.target - Timer Units. Apr 20 15:23:21.033845 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 20 15:23:21.035529 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 15:23:21.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.058645 kernel: audit: type=1131 audit(1776698601.047:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.050374 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 20 15:23:21.064989 systemd[1]: Stopped target basic.target - Basic System. Apr 20 15:23:21.077952 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 20 15:23:21.086527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 15:23:21.111439 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Apr 20 15:23:21.168645 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 20 15:23:21.180600 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 20 15:23:21.186035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 15:23:21.196247 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 20 15:23:21.205704 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 20 15:23:21.214803 systemd[1]: Stopped target swap.target - Swaps. Apr 20 15:23:21.223706 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 20 15:23:21.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.240263 kernel: audit: type=1131 audit(1776698601.229:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.223953 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 20 15:23:21.229927 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 20 15:23:21.255346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 15:23:21.267088 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 20 15:23:21.267756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 15:23:21.284591 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 20 15:23:21.285317 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 20 15:23:21.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:21.298825 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 20 15:23:21.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.301771 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 20 15:23:21.322313 kernel: audit: type=1131 audit(1776698601.298:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.308300 systemd[1]: ignition-fetch-offline.service: Consumed 1.198s CPU time. Apr 20 15:23:21.308480 systemd[1]: Stopped target paths.target - Path Units. Apr 20 15:23:21.318405 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 20 15:23:21.318848 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 15:23:21.329580 systemd[1]: Stopped target slices.target - Slice Units. Apr 20 15:23:21.332344 systemd[1]: Stopped target sockets.target - Socket Units. Apr 20 15:23:21.351358 systemd[1]: iscsid.socket: Deactivated successfully. Apr 20 15:23:21.351719 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 15:23:21.358772 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 20 15:23:21.359191 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 15:23:21.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.370833 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Apr 20 15:23:21.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.370974 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Apr 20 15:23:21.382359 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 20 15:23:21.382654 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 15:23:21.392421 systemd[1]: ignition-files.service: Deactivated successfully. Apr 20 15:23:21.394205 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 20 15:23:21.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.400280 systemd[1]: ignition-files.service: Consumed 4.986s CPU time. Apr 20 15:23:21.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.403509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 20 15:23:21.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.420229 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 20 15:23:21.427109 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 20 15:23:21.427380 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 15:23:21.430388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Apr 20 15:23:21.430515 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 15:23:21.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.441069 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 20 15:23:21.441253 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 15:23:21.481377 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 20 15:23:21.482643 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 20 15:23:21.601306 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 20 15:23:21.605275 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 20 15:23:21.606328 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 20 15:23:21.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.635027 ignition[1214]: INFO : Ignition 2.24.0 Apr 20 15:23:21.635027 ignition[1214]: INFO : Stage: umount Apr 20 15:23:21.641238 ignition[1214]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 15:23:21.641238 ignition[1214]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 15:23:21.641238 ignition[1214]: INFO : umount: umount passed Apr 20 15:23:21.641238 ignition[1214]: INFO : Ignition finished successfully Apr 20 15:23:21.656275 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 20 15:23:21.656522 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 20 15:23:21.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.668097 systemd[1]: Stopped target network.target - Network. Apr 20 15:23:21.668362 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 20 15:23:21.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.668406 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 20 15:23:21.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.680267 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 20 15:23:21.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.680352 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 20 15:23:21.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.688459 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 20 15:23:21.688502 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 20 15:23:21.696584 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 20 15:23:21.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:21.696654 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 20 15:23:21.736986 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 20 15:23:21.737270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 20 15:23:21.784602 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 20 15:23:21.795347 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 20 15:23:21.813353 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 20 15:23:21.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.814739 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 20 15:23:21.833000 audit: BPF prog-id=8 op=UNLOAD Apr 20 15:23:21.836588 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 20 15:23:21.836775 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 20 15:23:21.836823 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 20 15:23:21.869466 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 20 15:23:21.874389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 20 15:23:21.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.874490 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 15:23:21.885437 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 15:23:21.897346 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 20 15:23:21.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.897464 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 20 15:23:21.916000 audit: BPF prog-id=5 op=UNLOAD Apr 20 15:23:21.923807 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 20 15:23:21.924950 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 20 15:23:21.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.933023 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 20 15:23:21.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.933100 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 20 15:23:21.937240 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 20 15:23:21.937386 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 15:23:21.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.955784 systemd[1]: systemd-udevd.service: Consumed 5.706s CPU time. Apr 20 15:23:21.960553 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 20 15:23:21.960682 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Apr 20 15:23:21.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.966428 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 20 15:23:21.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.966587 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 20 15:23:21.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:21.975058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 20 15:23:21.975226 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 20 15:23:21.985471 systemd[1]: dracut-cmdline.service: Consumed 1.591s CPU time. Apr 20 15:23:21.985591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 20 15:23:21.985633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 15:23:22.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.005546 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 20 15:23:22.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.012268 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Apr 20 15:23:22.012319 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 15:23:22.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.021353 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 20 15:23:22.021992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 15:23:22.036375 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 20 15:23:22.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.036465 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 15:23:22.050554 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 20 15:23:22.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.050611 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 15:23:22.068591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 15:23:22.069601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:23:22.112716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 20 15:23:22.115440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Apr 20 15:23:22.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.195552 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 20 15:23:22.198557 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 20 15:23:22.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:22.209821 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 20 15:23:22.212613 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 20 15:23:22.280604 systemd[1]: Switching root. Apr 20 15:23:22.462576 systemd-journald[319]: Journal stopped Apr 20 15:23:30.852050 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). 
Apr 20 15:23:30.852233 kernel: SELinux: policy capability network_peer_controls=1 Apr 20 15:23:30.852253 kernel: SELinux: policy capability open_perms=1 Apr 20 15:23:30.852265 kernel: SELinux: policy capability extended_socket_class=1 Apr 20 15:23:30.852281 kernel: SELinux: policy capability always_check_network=0 Apr 20 15:23:30.852344 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 20 15:23:30.852364 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 20 15:23:30.852379 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 20 15:23:30.852391 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 20 15:23:30.852407 kernel: SELinux: policy capability userspace_initial_context=0 Apr 20 15:23:30.852420 systemd[1]: Successfully loaded SELinux policy in 358.928ms. Apr 20 15:23:30.852442 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 35.713ms. Apr 20 15:23:30.854721 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 15:23:30.859410 systemd[1]: Detected virtualization kvm. Apr 20 15:23:30.860750 systemd[1]: Detected architecture x86-64. Apr 20 15:23:30.860777 systemd[1]: Detected first boot. Apr 20 15:23:30.860791 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 20 15:23:30.860807 kernel: kauditd_printk_skb: 36 callbacks suppressed Apr 20 15:23:30.860823 kernel: audit: type=1334 audit(1776698604.103:82): prog-id=9 op=LOAD Apr 20 15:23:30.860956 kernel: audit: type=1334 audit(1776698604.104:83): prog-id=9 op=UNLOAD Apr 20 15:23:30.860975 zram_generator::config[1262]: No configuration found. 
Apr 20 15:23:30.860997 kernel: Guest personality initialized and is inactive Apr 20 15:23:30.861010 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 20 15:23:30.861022 kernel: Initialized host personality Apr 20 15:23:30.861036 kernel: NET: Registered PF_VSOCK protocol family Apr 20 15:23:30.861051 systemd-ssh-generator[1258]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 15:23:30.861111 (sd-exec-[1243]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 15:23:30.861194 systemd[1]: Applying preset policy. Apr 20 15:23:30.861211 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. Apr 20 15:23:30.861226 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'. Apr 20 15:23:30.861240 systemd[1]: Populated /etc with preset unit settings. 
Apr 20 15:23:30.861255 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 15:23:30.861307 kernel: audit: type=1334 audit(1776698609.110:84): prog-id=10 op=LOAD Apr 20 15:23:30.861350 kernel: audit: type=1334 audit(1776698609.111:85): prog-id=2 op=UNLOAD Apr 20 15:23:30.861362 kernel: audit: type=1334 audit(1776698609.112:86): prog-id=11 op=LOAD Apr 20 15:23:30.861373 kernel: audit: type=1334 audit(1776698609.113:87): prog-id=12 op=LOAD Apr 20 15:23:30.861386 kernel: audit: type=1334 audit(1776698609.113:88): prog-id=3 op=UNLOAD Apr 20 15:23:30.861398 kernel: audit: type=1334 audit(1776698609.113:89): prog-id=4 op=UNLOAD Apr 20 15:23:30.861448 kernel: audit: type=1334 audit(1776698609.121:90): prog-id=13 op=LOAD Apr 20 15:23:30.861462 kernel: audit: type=1334 audit(1776698609.121:91): prog-id=10 op=UNLOAD Apr 20 15:23:30.861475 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 20 15:23:30.861487 kernel: audit: type=1334 audit(1776698609.122:92): prog-id=14 op=LOAD Apr 20 15:23:30.861500 kernel: audit: type=1334 audit(1776698609.122:93): prog-id=15 op=LOAD Apr 20 15:23:30.861512 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 20 15:23:30.861560 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 20 15:23:30.861574 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 20 15:23:30.862404 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 20 15:23:30.862428 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 20 15:23:30.862443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 20 15:23:30.862459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 20 15:23:30.862472 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Apr 20 15:23:30.864061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 20 15:23:30.864185 systemd[1]: Created slice user.slice - User and Session Slice. Apr 20 15:23:30.864204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 15:23:30.864222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 15:23:30.864239 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 20 15:23:30.864254 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 20 15:23:30.864271 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 20 15:23:30.864331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 15:23:30.864346 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 20 15:23:30.864362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 15:23:30.864376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 15:23:30.865019 systemd[1]: Reached target imports.target - Image Downloads. Apr 20 15:23:30.865767 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 20 15:23:30.869489 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 20 15:23:30.872317 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 20 15:23:30.872495 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 20 15:23:30.872509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 15:23:30.872521 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Apr 20 15:23:30.872533 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. Apr 20 15:23:30.872545 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 20 15:23:30.872557 systemd[1]: Reached target slices.target - Slice Units. Apr 20 15:23:30.872629 systemd[1]: Reached target swap.target - Swaps. Apr 20 15:23:30.872642 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 20 15:23:30.872654 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 15:23:30.872665 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 20 15:23:30.872676 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 20 15:23:30.872688 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Apr 20 15:23:30.873475 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 15:23:30.876680 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 20 15:23:30.876816 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Apr 20 15:23:30.876832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 15:23:30.876844 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 20 15:23:30.876857 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 20 15:23:30.876869 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Apr 20 15:23:30.876882 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Apr 20 15:23:30.876975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 15:23:30.876990 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. 
Apr 20 15:23:30.877003 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 20 15:23:30.877017 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 20 15:23:30.877030 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 20 15:23:30.877043 systemd[1]: Mounting media.mount - External Media Directory... Apr 20 15:23:30.877057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 15:23:30.877111 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 20 15:23:30.879869 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 20 15:23:30.879896 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Apr 20 15:23:30.879947 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 20 15:23:30.880009 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 20 15:23:30.880024 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Apr 20 15:23:30.880039 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 20 15:23:30.880051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 20 15:23:30.880066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 15:23:30.880079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 20 15:23:30.880193 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). Apr 20 15:23:30.880209 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 20 15:23:30.880223 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore). Apr 20 15:23:30.880235 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 20 15:23:30.880251 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Apr 20 15:23:30.880263 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 20 15:23:30.880277 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 20 15:23:30.880323 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 20 15:23:30.880336 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 20 15:23:30.880351 systemd[1]: Stopped systemd-fsck-usr.service. Apr 20 15:23:30.880365 kernel: ACPI: bus type drm_connector registered Apr 20 15:23:30.880378 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 20 15:23:30.880393 kernel: fuse: init (API version 7.41) Apr 20 15:23:30.880443 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 15:23:30.880457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 15:23:30.880472 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 15:23:30.880521 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 20 15:23:30.880535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 20 15:23:30.880550 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 20 15:23:30.880562 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 15:23:30.880577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 15:23:30.880630 systemd-journald[1332]: Collecting audit messages is enabled. Apr 20 15:23:30.880700 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 20 15:23:30.880716 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 20 15:23:30.880732 systemd-journald[1332]: Journal started Apr 20 15:23:30.880759 systemd-journald[1332]: Runtime Journal (/run/log/journal/70cb549fe860499a85dff43974fb897d) is 6M, max 48M, 42M free. Apr 20 15:23:29.832000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Apr 20 15:23:30.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:30.577000 audit: BPF prog-id=18 op=UNLOAD Apr 20 15:23:30.578000 audit: BPF prog-id=17 op=UNLOAD Apr 20 15:23:30.579000 audit: BPF prog-id=19 op=LOAD Apr 20 15:23:30.579000 audit: BPF prog-id=20 op=LOAD Apr 20 15:23:30.579000 audit: BPF prog-id=21 op=LOAD Apr 20 15:23:30.840000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 20 15:23:30.840000 audit[1332]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd387ae560 a2=4000 a3=0 items=0 ppid=1 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 15:23:30.840000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 20 15:23:28.997313 systemd[1]: Queued start job for default target multi-user.target. Apr 20 15:23:29.123551 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 20 15:23:29.126307 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 20 15:23:29.126797 systemd[1]: systemd-journald.service: Consumed 1.824s CPU time. Apr 20 15:23:30.898376 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 15:23:30.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.903106 systemd[1]: Mounted media.mount - External Media Directory. Apr 20 15:23:30.909017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 20 15:23:30.915194 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 20 15:23:30.921193 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Apr 20 15:23:30.927759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 20 15:23:30.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.939853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 15:23:30.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.950592 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 20 15:23:30.950982 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 20 15:23:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.962807 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 20 15:23:30.963891 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 20 15:23:30.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:30.972670 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 20 15:23:30.973048 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 20 15:23:30.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:30.988422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 15:23:30.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.001763 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 15:23:31.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.051668 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 20 15:23:31.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.066781 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Apr 20 15:23:31.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.111186 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 15:23:31.125851 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 20 15:23:31.143211 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 20 15:23:31.152665 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 20 15:23:31.157534 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 20 15:23:31.157675 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 15:23:31.164706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 20 15:23:31.170494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 20 15:23:31.175275 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Apr 20 15:23:31.180868 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 20 15:23:31.187076 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 20 15:23:31.196086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 20 15:23:31.204612 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 20 15:23:31.215992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 20 15:23:31.283466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 15:23:31.291185 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 20 15:23:31.326797 systemd-journald[1332]: Time spent on flushing to /var/log/journal/70cb549fe860499a85dff43974fb897d is 71.304ms for 1306 entries. Apr 20 15:23:31.326797 systemd-journald[1332]: System Journal (/var/log/journal/70cb549fe860499a85dff43974fb897d) is 8M, max 163.5M, 155.5M free. Apr 20 15:23:31.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:31.301332 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 20 15:23:31.457444 systemd-journald[1332]: Received client request to flush runtime journal. Apr 20 15:23:31.314579 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 20 15:23:31.457810 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 15:23:31.359367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 15:23:31.457973 kernel: loop4: p1 p2 p3 Apr 20 15:23:31.365498 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 20 15:23:31.375721 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 20 15:23:31.392723 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 20 15:23:31.420714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 15:23:31.465240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 20 15:23:31.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.022300 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 20 15:23:32.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.046187 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 20 15:23:32.050456 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 20 15:23:32.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:32.061733 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:32.061895 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:23:32.061994 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:23:32.065208 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:23:32.067517 systemd-confext[1381]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:23:32.074244 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:32.092447 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Apr 20 15:23:32.092493 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Apr 20 15:23:32.106843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 15:23:32.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.137497 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 20 15:23:32.212357 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 20 15:23:32.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.227000 audit: BPF prog-id=22 op=LOAD Apr 20 15:23:32.227000 audit: BPF prog-id=23 op=LOAD Apr 20 15:23:32.227000 audit: BPF prog-id=24 op=LOAD Apr 20 15:23:32.231387 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Apr 20 15:23:32.237000 audit: BPF prog-id=25 op=LOAD Apr 20 15:23:32.239248 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 15:23:32.246000 audit: BPF prog-id=26 op=LOAD Apr 20 15:23:32.249992 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 20 15:23:32.259448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 15:23:32.270404 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... Apr 20 15:23:32.276000 audit: BPF prog-id=27 op=LOAD Apr 20 15:23:32.276000 audit: BPF prog-id=28 op=LOAD Apr 20 15:23:32.277000 audit: BPF prog-id=29 op=LOAD Apr 20 15:23:32.282764 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 20 15:23:32.314296 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 20 15:23:32.315750 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 20 15:23:32.316006 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. Apr 20 15:23:32.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.326000 audit: BPF prog-id=30 op=LOAD Apr 20 15:23:32.326000 audit: BPF prog-id=31 op=LOAD Apr 20 15:23:32.326000 audit: BPF prog-id=32 op=LOAD Apr 20 15:23:32.328735 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 20 15:23:32.376746 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Apr 20 15:23:32.376795 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. 
Apr 20 15:23:32.379509 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 20 15:23:32.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.385907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 15:23:32.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.418844 systemd-nsresourced[1411]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 20 15:23:32.421083 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 20 15:23:32.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.490570 systemd-oomd[1404]: No swap; memory pressure usage will be degraded Apr 20 15:23:32.493335 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 20 15:23:32.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.598340 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 20 15:23:32.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:32.607718 systemd[1]: Reached target time-set.target - System Time Set. Apr 20 15:23:32.635083 systemd-resolved[1405]: Positive Trust Anchors: Apr 20 15:23:32.635199 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 15:23:32.635203 systemd-resolved[1405]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 15:23:32.635269 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 15:23:32.692343 systemd-resolved[1405]: Defaulting to hostname 'linux'. Apr 20 15:23:32.713077 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 15:23:32.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:32.723008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 15:23:37.472043 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 20 15:23:37.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:37.497030 kernel: kauditd_printk_skb: 63 callbacks suppressed Apr 20 15:23:37.498533 kernel: audit: type=1130 audit(1776698617.484:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:37.491000 audit: BPF prog-id=7 op=UNLOAD Apr 20 15:23:37.502426 kernel: audit: type=1334 audit(1776698617.491:156): prog-id=7 op=UNLOAD Apr 20 15:23:37.502501 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 15:23:37.491000 audit: BPF prog-id=6 op=UNLOAD Apr 20 15:23:37.495000 audit: BPF prog-id=33 op=LOAD Apr 20 15:23:37.495000 audit: BPF prog-id=34 op=LOAD Apr 20 15:23:37.508109 kernel: audit: type=1334 audit(1776698617.491:157): prog-id=6 op=UNLOAD Apr 20 15:23:37.511876 kernel: audit: type=1334 audit(1776698617.495:158): prog-id=33 op=LOAD Apr 20 15:23:37.513306 kernel: audit: type=1334 audit(1776698617.495:159): prog-id=34 op=LOAD Apr 20 15:23:37.839432 systemd-udevd[1433]: Using default interface naming scheme 'v258'. Apr 20 15:23:38.730246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 15:23:38.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:38.743000 audit: BPF prog-id=35 op=LOAD Apr 20 15:23:38.748275 kernel: audit: type=1130 audit(1776698618.736:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:38.746301 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 20 15:23:38.748498 kernel: audit: type=1334 audit(1776698618.743:161): prog-id=35 op=LOAD Apr 20 15:23:38.956867 systemd-networkd[1435]: lo: Link UP Apr 20 15:23:38.957014 systemd-networkd[1435]: lo: Gained carrier Apr 20 15:23:38.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:38.958054 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 15:23:38.962507 systemd[1]: Reached target network.target - Network. Apr 20 15:23:38.973270 kernel: audit: type=1130 audit(1776698618.961:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:38.980439 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 20 15:23:38.997925 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 20 15:23:39.049391 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 20 15:23:39.050735 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 20 15:23:39.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:39.074389 kernel: audit: type=1130 audit(1776698619.057:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 15:23:39.219641 kernel: mousedev: PS/2 mouse device common for all mice Apr 20 15:23:39.367043 systemd-networkd[1435]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 15:23:39.371102 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 15:23:39.372867 systemd-networkd[1435]: eth0: Link UP Apr 20 15:23:39.373062 systemd-networkd[1435]: eth0: Gained carrier Apr 20 15:23:39.373084 systemd-networkd[1435]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 15:23:39.392184 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 20 15:23:39.403010 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 15:23:39.411850 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Apr 20 15:23:40.769159 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 20 15:23:40.769469 systemd-resolved[1405]: Clock change detected. Flushing caches. Apr 20 15:23:40.789789 kernel: ACPI: button: Power Button [PWRF] Apr 20 15:23:40.770626 systemd-timesyncd[1406]: Initial clock synchronization to Mon 2026-04-20 15:23:40.769019 UTC. Apr 20 15:23:40.990141 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 20 15:23:41.007749 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 20 15:23:41.008051 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 20 15:23:41.154027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 15:23:41.262168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 20 15:23:42.098279 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 20 15:23:42.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:42.118933 kernel: audit: type=1130 audit(1776698622.107:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:42.128700 systemd-networkd[1435]: eth0: Gained IPv6LL Apr 20 15:23:42.210913 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 20 15:23:42.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:42.216552 systemd[1]: Reached target network-online.target - Network is Online. Apr 20 15:23:42.435764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 20 15:23:42.612484 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. 
Apr 20 15:23:42.670738 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 15:23:42.673500 kernel: loop4: p1 p2 p3 Apr 20 15:23:42.728942 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:42.729193 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:23:42.732993 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:23:42.735814 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:23:42.738064 (sd-merge)[1498]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:23:42.751625 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:42.862345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 15:23:42.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:42.913580 kernel: erofs: (device dm-4): mounted with root inode @ nid 40. Apr 20 15:23:42.921660 (sd-merge)[1498]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 15:23:42.938351 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/. Apr 20 15:23:42.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 15:23:42.960081 kernel: device-mapper: ioctl: remove_all left 4 open device(s) Apr 20 15:23:43.077875 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 20 15:23:43.149575 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 15:23:43.152501 kernel: loop4: p1 p2 p3 Apr 20 15:23:43.222868 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:43.224341 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:23:43.224372 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:23:43.230452 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:23:43.230035 systemd-sysext[1509]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:23:43.236592 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:43.419728 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 15:23:43.482781 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 15:23:43.483512 kernel: loop4: p1 p2 p3 Apr 20 15:23:43.489518 kernel: loop4: p1 p2 p3 Apr 20 15:23:43.515024 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:43.515290 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:23:43.515311 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:23:43.524017 systemd-sysext[1509]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:23:43.524503 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:23:43.528495 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:43.579861 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. 
Apr 20 15:23:43.615981 kernel: loop4: detected capacity change from 0 to 219192 Apr 20 15:23:43.741770 kernel: loop4: detected capacity change from 0 to 378016 Apr 20 15:23:43.745705 kernel: loop4: p1 p2 p3 Apr 20 15:23:43.755521 kernel: loop4: p1 p2 p3 Apr 20 15:23:43.796761 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:43.798791 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:23:43.798813 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:23:43.802639 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:23:43.803072 (sd-merge)[1529]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 15:23:43.809653 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:43.934077 kernel: erofs: (device dm-4): mounted with root inode @ nid 39. Apr 20 15:23:44.083205 kernel: loop5: detected capacity change from 0 to 178200 Apr 20 15:23:44.084269 kernel: loop5: p1 p2 p3 Apr 20 15:23:44.123726 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:44.124098 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 15:23:44.124122 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL) Apr 20 15:23:44.131729 kernel: device-mapper: ioctl: error adding target to table Apr 20 15:23:44.132187 (sd-merge)[1529]: device-mapper: reload ioctl on loop5p1-verity (253:5) failed: Invalid argument Apr 20 15:23:44.143571 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 15:23:44.339766 kernel: erofs: (device dm-5): mounted with root inode @ nid 39. Apr 20 15:23:44.356467 kernel: loop6: detected capacity change from 0 to 219192 Apr 20 15:23:44.437156 (sd-merge)[1529]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. 
Apr 20 15:23:44.462799 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 15:23:44.467928 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 20 15:23:44.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:44.525638 kernel: kauditd_printk_skb: 3 callbacks suppressed
Apr 20 15:23:44.525665 kernel: audit: type=1130 audit(1776698624.518:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:44.528368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 15:23:44.665837 systemd-tmpfiles[1546]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 15:23:44.665885 systemd-tmpfiles[1546]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 15:23:44.671282 systemd-tmpfiles[1546]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 15:23:44.680015 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Apr 20 15:23:44.680110 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Apr 20 15:23:44.696040 systemd-tmpfiles[1546]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 15:23:44.696088 systemd-tmpfiles[1546]: Skipping /boot
Apr 20 15:23:44.705309 systemd-tmpfiles[1546]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 15:23:44.705344 systemd-tmpfiles[1546]: Skipping /boot
Apr 20 15:23:44.735157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 15:23:44.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:44.763462 kernel: audit: type=1130 audit(1776698624.751:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:44.783325 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 20 15:23:44.796694 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 20 15:23:44.812482 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 20 15:23:44.823608 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 20 15:23:44.849369 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 20 15:23:44.957000 audit[1562]: AUDIT1127 pid=1562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:44.972316 kernel: audit: type=1127 audit(1776698624.957:170): pid=1562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:44.984051 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 20 15:23:44.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:45.004574 kernel: audit: type=1130 audit(1776698624.990:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:45.041985 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 20 15:23:45.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:45.069754 kernel: audit: type=1130 audit(1776698625.056:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:45.069729 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 20 15:23:45.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:45.092757 kernel: audit: type=1130 audit(1776698625.078:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 15:23:45.093731 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 20 15:23:45.126000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 15:23:45.126000 audit[1579]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3671c0e0 a2=420 a3=0 items=0 ppid=1552 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 15:23:45.134361 augenrules[1579]: No rules
Apr 20 15:23:45.146979 kernel: audit: type=1305 audit(1776698625.126:174): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 15:23:45.126000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 15:23:45.154096 kernel: audit: type=1300 audit(1776698625.126:174): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3671c0e0 a2=420 a3=0 items=0 ppid=1552 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 15:23:45.154183 kernel: audit: type=1327 audit(1776698625.126:174): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 15:23:45.199647 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 20 15:23:45.203948 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 20 15:23:47.432142 ldconfig[1554]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 20 15:23:47.466217 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 20 15:23:47.531011 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 20 15:23:47.657053 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 20 15:23:47.681957 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 15:23:47.691359 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 20 15:23:47.705091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 20 15:23:47.720520 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 20 15:23:47.734733 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 20 15:23:47.746002 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 20 15:23:47.760044 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 20 15:23:47.775679 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 20 15:23:47.781970 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 20 15:23:47.793531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 20 15:23:47.793619 systemd[1]: Reached target paths.target - Path Units.
Apr 20 15:23:47.819067 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 15:23:47.904728 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 20 15:23:47.947792 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 20 15:23:48.267216 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 20 15:23:48.374799 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 20 15:23:48.405075 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 20 15:23:48.461960 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket.
Apr 20 15:23:48.577320 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket.
Apr 20 15:23:48.637829 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 20 15:23:48.705082 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 15:23:48.712761 systemd[1]: Reached target basic.target - Basic System.
Apr 20 15:23:48.718494 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 20 15:23:48.718536 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 20 15:23:48.752102 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 20 15:23:48.879824 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 20 15:23:48.890281 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 20 15:23:48.909840 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 20 15:23:48.918704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 20 15:23:48.943219 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 20 15:23:48.958225 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 20 15:23:49.041131 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 20 15:23:49.048317 jq[1594]: false
Apr 20 15:23:49.054722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 15:23:49.063042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 20 15:23:49.086870 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 20 15:23:49.091557 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing passwd entry cache
Apr 20 15:23:49.091104 oslogin_cache_refresh[1596]: Refreshing passwd entry cache
Apr 20 15:23:49.099001 extend-filesystems[1595]: Found /dev/vda6
Apr 20 15:23:49.106519 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 20 15:23:49.125796 extend-filesystems[1595]: Found /dev/vda9
Apr 20 15:23:49.129065 extend-filesystems[1595]: Checking size of /dev/vda9
Apr 20 15:23:49.128875 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 20 15:23:49.137695 oslogin_cache_refresh[1596]: Failure getting users, quitting
Apr 20 15:23:49.139850 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting users, quitting
Apr 20 15:23:49.139850 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 15:23:49.139850 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing group entry cache
Apr 20 15:23:49.137713 oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 15:23:49.137763 oslogin_cache_refresh[1596]: Refreshing group entry cache
Apr 20 15:23:49.142820 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 20 15:23:49.151601 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 20 15:23:49.155988 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 20 15:23:49.160853 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting groups, quitting
Apr 20 15:23:49.160853 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 15:23:49.157691 oslogin_cache_refresh[1596]: Failure getting groups, quitting
Apr 20 15:23:49.157661 systemd[1]: Starting update-engine.service - Update Engine...
Apr 20 15:23:49.157702 oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 15:23:49.169000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 20 15:23:49.177834 extend-filesystems[1595]: Resized partition /dev/vda9
Apr 20 15:23:49.185180 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 20 15:23:49.191318 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 20 15:23:49.194977 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 20 15:23:49.197094 extend-filesystems[1627]: resize2fs 1.47.3 (8-Jul-2025)
Apr 20 15:23:49.196953 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 20 15:23:49.219191 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 20 15:23:49.197293 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 20 15:23:49.206335 systemd[1]: motdgen.service: Deactivated successfully.
Apr 20 15:23:49.206575 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 20 15:23:49.219625 jq[1620]: true
Apr 20 15:23:49.208725 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 20 15:23:49.208880 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 20 15:23:49.219990 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 20 15:23:49.272042 jq[1638]: true
Apr 20 15:23:49.276485 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 20 15:23:49.307101 tar[1634]: linux-amd64/LICENSE
Apr 20 15:23:49.304584 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 20 15:23:49.316688 update_engine[1619]: I20260420 15:23:49.312052 1619 main.cc:92] Flatcar Update Engine starting
Apr 20 15:23:49.316919 tar[1634]: linux-amd64/helm
Apr 20 15:23:49.308961 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 20 15:23:49.315893 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 20 15:23:49.321691 extend-filesystems[1627]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 20 15:23:49.321691 extend-filesystems[1627]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 20 15:23:49.321691 extend-filesystems[1627]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 20 15:23:49.325155 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 20 15:23:49.354994 extend-filesystems[1595]: Resized filesystem in /dev/vda9
Apr 20 15:23:49.327276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 20 15:23:49.412928 systemd-logind[1616]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 20 15:23:49.412993 systemd-logind[1616]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 20 15:23:49.414493 systemd-logind[1616]: New seat seat0.
Apr 20 15:23:49.449332 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 20 15:23:49.527945 dbus-daemon[1592]: [system] SELinux support is enabled
Apr 20 15:23:49.536511 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 20 15:23:49.560931 update_engine[1619]: I20260420 15:23:49.557827 1619 update_check_scheduler.cc:74] Next update check in 3m54s
Apr 20 15:23:49.615916 bash[1686]: Updated "/home/core/.ssh/authorized_keys"
Apr 20 15:23:49.786808 sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 20 15:23:49.773641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 20 15:23:49.769666 dbus-daemon[1592]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 20 15:23:49.798916 systemd[1]: Started update-engine.service - Update Engine.
Apr 20 15:23:49.833720 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 20 15:23:49.834113 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 20 15:23:49.834364 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 20 15:23:49.846275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 20 15:23:49.851297 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 20 15:23:49.888097 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 20 15:23:49.919460 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 20 15:23:49.934640 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 20 15:23:50.425619 systemd[1]: issuegen.service: Deactivated successfully.
Apr 20 15:23:50.450467 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 20 15:23:50.602134 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 20 15:23:50.603149 locksmithd[1712]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 20 15:23:50.742189 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 20 15:23:50.797731 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 20 15:23:51.080034 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 20 15:23:51.100984 systemd[1]: Reached target getty.target - Login Prompts.
Apr 20 15:23:51.797241 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 20 15:23:51.884466 systemd[1]: Started sshd@0-1-10.0.0.22:22-10.0.0.1:39120.service - OpenSSH per-connection server daemon (10.0.0.1:39120).
Apr 20 15:23:53.553927 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 39120 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ
Apr 20 15:23:53.571817 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 15:23:53.767105 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 20 15:23:53.801961 containerd[1635]: time="2026-04-20T15:23:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 20 15:23:53.812925 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 20 15:23:53.825462 containerd[1635]: time="2026-04-20T15:23:53.824087309Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1
Apr 20 15:23:54.536822 systemd-logind[1616]: New session '1' of user 'core' with class 'user' and type 'tty'.
Apr 20 15:23:54.732228 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 20 15:23:54.858729 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.095927097Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=27.568308ms
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.098061893Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.100464291Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.100673347Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.101069369Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.101144356Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.101183462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.101369276Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.101432262Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.102080473Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.102100968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 15:23:55.102613 containerd[1635]: time="2026-04-20T15:23:55.102112768Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.102125318Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.102855295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.102963311Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.105458806Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.105710279Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.105718246Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 20 15:23:55.119730 containerd[1635]: time="2026-04-20T15:23:55.116961005Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 20 15:23:55.127580 (systemd)[1736]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 15:23:55.167231 containerd[1635]: time="2026-04-20T15:23:55.131639271Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 20 15:23:55.167231 containerd[1635]: time="2026-04-20T15:23:55.138254369Z" level=info msg="metadata content store policy set" policy=shared
Apr 20 15:23:55.304623 containerd[1635]: time="2026-04-20T15:23:55.302963293Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 20 15:23:55.314880 containerd[1635]: time="2026-04-20T15:23:55.314041497Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 20 15:23:55.343985 containerd[1635]: time="2026-04-20T15:23:55.338182332Z" level=info msg="built-in NRI default validator is disabled"
Apr 20 15:23:55.362146 containerd[1635]: time="2026-04-20T15:23:55.345223096Z" level=info msg="runtime interface created"
Apr 20 15:23:55.362146 containerd[1635]: time="2026-04-20T15:23:55.348907983Z" level=info msg="created NRI interface"
Apr 20 15:23:55.362146 containerd[1635]: time="2026-04-20T15:23:55.354555623Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 15:23:55.362664 systemd-logind[1616]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'.
Apr 20 15:23:55.363157 containerd[1635]: time="2026-04-20T15:23:55.362687360Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 15:23:55.363157 containerd[1635]: time="2026-04-20T15:23:55.362953869Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 20 15:23:55.378369 containerd[1635]: time="2026-04-20T15:23:55.375012549Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 20 15:23:55.379477 containerd[1635]: time="2026-04-20T15:23:55.379198017Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1
Apr 20 15:23:55.390638 containerd[1635]: time="2026-04-20T15:23:55.389624639Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 20 15:23:55.390638 containerd[1635]: time="2026-04-20T15:23:55.390302168Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 20 15:23:55.390638 containerd[1635]: time="2026-04-20T15:23:55.390368977Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 20 15:23:55.390638 containerd[1635]: time="2026-04-20T15:23:55.390498625Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 20 15:23:55.392830 containerd[1635]: time="2026-04-20T15:23:55.391307268Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 20 15:23:55.407508 containerd[1635]: time="2026-04-20T15:23:55.403642224Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 20 15:23:55.433554 tar[1634]: linux-amd64/README.md
Apr 20 15:23:55.434313 containerd[1635]: time="2026-04-20T15:23:55.428175343Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 20 15:23:55.466209 containerd[1635]: time="2026-04-20T15:23:55.447331579Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.507030681Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.515723294Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.515974916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516001586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516011375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516020702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516027998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516037302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516088049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516126214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516159926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516173289Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 20 15:23:55.518714 containerd[1635]: time="2026-04-20T15:23:55.516181422Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 20 15:23:55.539888 containerd[1635]: time="2026-04-20T15:23:55.524110530Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 20 15:23:55.552678 containerd[1635]: time="2026-04-20T15:23:55.542369912Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 20 15:23:55.553046 containerd[1635]: time="2026-04-20T15:23:55.552799287Z" level=info msg="Start snapshots syncer"
Apr 20 15:23:55.554606 containerd[1635]: time="2026-04-20T15:23:55.553794817Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 20 15:23:55.559697 containerd[1635]: time="2026-04-20T15:23:55.559053704Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 20 15:23:55.584063 containerd[1635]: time="2026-04-20T15:23:55.561944799Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 20 15:23:55.584063 containerd[1635]: time="2026-04-20T15:23:55.577752191Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 20 15:23:55.595026 containerd[1635]: time="2026-04-20T15:23:55.587551595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 20 15:23:55.595026 containerd[1635]: time="2026-04-20T15:23:55.588836596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 20 15:23:55.595026 containerd[1635]: time="2026-04-20T15:23:55.591227785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 20 15:23:55.595217 containerd[1635]: time="2026-04-20T15:23:55.595058918Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 20 15:23:55.597098 containerd[1635]: time="2026-04-20T15:23:55.595319087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 20 15:23:55.597098 containerd[1635]: time="2026-04-20T15:23:55.595358955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 20 15:23:55.597098 containerd[1635]: time="2026-04-20T15:23:55.595513348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 20 15:23:55.597098 containerd[1635]: time="2026-04-20T15:23:55.595530398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 20 15:23:55.597098 containerd[1635]: time="2026-04-20T15:23:55.595617979Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 20 15:23:55.597098 containerd[1635]: time="2026-04-20T15:23:55.595851675Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.598753226Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.598954032Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.599049770Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.599057833Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.599125279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.599315782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 20 15:23:55.599500 containerd[1635]: time="2026-04-20T15:23:55.599332355Z" level=info msg="Connect containerd service"
Apr 20 15:23:55.599698 containerd[1635]: time="2026-04-20T15:23:55.599659597Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 20 15:23:55.655210 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 20 15:23:55.710869 containerd[1635]: time="2026-04-20T15:23:55.710704681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 15:23:56.305012 containerd[1635]: time="2026-04-20T15:23:56.304173917Z" level=info msg="Start subscribing containerd event" Apr 20 15:23:56.309258 containerd[1635]: time="2026-04-20T15:23:56.308919348Z" level=info msg="Start recovering state" Apr 20 15:23:56.310355 containerd[1635]: time="2026-04-20T15:23:56.310333487Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 15:23:56.311061 containerd[1635]: time="2026-04-20T15:23:56.310712428Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 15:23:56.312533 containerd[1635]: time="2026-04-20T15:23:56.312125747Z" level=info msg="Start event monitor" Apr 20 15:23:56.312533 containerd[1635]: time="2026-04-20T15:23:56.312225550Z" level=info msg="Start cni network conf syncer for default" Apr 20 15:23:56.312533 containerd[1635]: time="2026-04-20T15:23:56.312260059Z" level=info msg="Start streaming server" Apr 20 15:23:56.312668 containerd[1635]: time="2026-04-20T15:23:56.312653764Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 15:23:56.312732 containerd[1635]: time="2026-04-20T15:23:56.312722882Z" level=info msg="runtime interface starting up..." Apr 20 15:23:56.312810 containerd[1635]: time="2026-04-20T15:23:56.312799205Z" level=info msg="starting plugins..." 
Apr 20 15:23:56.313039 containerd[1635]: time="2026-04-20T15:23:56.313029027Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 15:23:56.350686 containerd[1635]: time="2026-04-20T15:23:56.349877909Z" level=info msg="containerd successfully booted in 2.597859s" Apr 20 15:23:56.398016 systemd[1]: Started containerd.service - containerd container runtime. Apr 20 15:23:56.439650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:23:56.449236 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 20 15:23:56.510208 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:23:57.071613 systemd[1736]: Queued start job for default target default.target. Apr 20 15:23:57.127447 systemd[1736]: Created slice app.slice - User Application Slice. Apr 20 15:23:57.129226 systemd[1736]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Apr 20 15:23:57.129445 systemd[1736]: Reached target machines.target - Virtual Machines and Containers. Apr 20 15:23:57.129551 systemd[1736]: Reached target paths.target - Paths. Apr 20 15:23:57.129577 systemd[1736]: Reached target timers.target - Timers. Apr 20 15:23:57.142334 systemd[1736]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 20 15:23:57.144173 systemd[1736]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 15:23:57.150997 systemd[1736]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Apr 20 15:23:57.286842 systemd[1736]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 20 15:23:57.288974 systemd[1736]: Reached target sockets.target - Sockets. Apr 20 15:23:57.291855 systemd[1736]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. 
Apr 20 15:23:57.291987 systemd[1736]: Reached target basic.target - Basic System. Apr 20 15:23:57.298167 systemd[1736]: Reached target default.target - Main User Target. Apr 20 15:23:57.301259 systemd[1736]: Startup finished in 1.854s. Apr 20 15:23:57.302107 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 20 15:23:57.493630 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 20 15:23:57.497033 systemd[1]: Startup finished in 6.357s (kernel) + 27.560s (initrd) + 33.394s (userspace) = 1min 7.312s. Apr 20 15:23:57.572831 systemd[1]: Started sshd@1-4097-10.0.0.22:22-10.0.0.1:45606.service - OpenSSH per-connection server daemon (10.0.0.1:45606). Apr 20 15:23:57.731815 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 45606 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 15:23:57.734744 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:23:57.742112 kubelet[1770]: E0420 15:23:57.741946 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:23:57.749750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:23:57.750003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:23:57.756211 systemd[1]: kubelet.service: Consumed 4.339s CPU time, 260.8M memory peak. Apr 20 15:23:57.776672 systemd-logind[1616]: New session '3' of user 'core' with class 'user' and type 'tty'. Apr 20 15:23:57.802103 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 20 15:23:57.840204 sshd[1794]: Connection closed by 10.0.0.1 port 45606 Apr 20 15:23:57.840633 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Apr 20 15:23:57.867035 systemd[1]: sshd@1-4097-10.0.0.22:22-10.0.0.1:45606.service: Deactivated successfully. Apr 20 15:23:57.868866 systemd[1]: session-3.scope: Deactivated successfully. Apr 20 15:23:57.879033 systemd-logind[1616]: Session 3 logged out. Waiting for processes to exit. Apr 20 15:23:57.883187 systemd[1]: Started sshd@2-4098-10.0.0.22:22-10.0.0.1:45618.service - OpenSSH per-connection server daemon (10.0.0.1:45618). Apr 20 15:23:57.893082 systemd-logind[1616]: Removed session 3. Apr 20 15:23:58.125787 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 45618 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 15:23:58.127084 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:23:58.169017 systemd-logind[1616]: New session '4' of user 'core' with class 'user' and type 'tty'. Apr 20 15:23:58.185832 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 20 15:23:58.217986 sshd[1804]: Connection closed by 10.0.0.1 port 45618 Apr 20 15:23:58.218593 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Apr 20 15:23:58.253074 systemd[1]: sshd@2-4098-10.0.0.22:22-10.0.0.1:45618.service: Deactivated successfully. Apr 20 15:23:58.258069 systemd[1]: session-4.scope: Deactivated successfully. Apr 20 15:23:58.259080 systemd-logind[1616]: Session 4 logged out. Waiting for processes to exit. Apr 20 15:23:58.281904 systemd[1]: Started sshd@3-2-10.0.0.22:22-10.0.0.1:45626.service - OpenSSH per-connection server daemon (10.0.0.1:45626). Apr 20 15:23:58.284774 systemd-logind[1616]: Removed session 4. 
Apr 20 15:23:58.419134 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 45626 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 15:23:58.425741 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:23:58.547214 systemd-logind[1616]: New session '5' of user 'core' with class 'user' and type 'tty'. Apr 20 15:23:58.589835 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 20 15:23:58.740929 sshd[1814]: Connection closed by 10.0.0.1 port 45626 Apr 20 15:23:58.742189 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Apr 20 15:23:58.786660 systemd[1]: sshd@3-2-10.0.0.22:22-10.0.0.1:45626.service: Deactivated successfully. Apr 20 15:23:58.793867 systemd[1]: session-5.scope: Deactivated successfully. Apr 20 15:23:58.798784 systemd-logind[1616]: Session 5 logged out. Waiting for processes to exit. Apr 20 15:23:58.812114 systemd[1]: Started sshd@4-3-10.0.0.22:22-10.0.0.1:45634.service - OpenSSH per-connection server daemon (10.0.0.1:45634). Apr 20 15:23:58.815859 systemd-logind[1616]: Removed session 5. Apr 20 15:23:59.045005 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 45634 ssh2: RSA SHA256:TY3ywpKQxrHjYT+ud73OgntCcwLDp6eqaAmsAXbkkEQ Apr 20 15:23:59.048631 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 15:23:59.135784 systemd-logind[1616]: New session '6' of user 'core' with class 'user' and type 'tty'. Apr 20 15:23:59.180833 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 20 15:23:59.249687 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 20 15:23:59.250202 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 20 15:24:01.783558 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 20 15:24:01.823511 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 20 15:24:04.080731 dockerd[1847]: time="2026-04-20T15:24:04.068272522Z" level=info msg="Starting up" Apr 20 15:24:04.104029 dockerd[1847]: time="2026-04-20T15:24:04.103559237Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 20 15:24:04.299578 dockerd[1847]: time="2026-04-20T15:24:04.298545173Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 20 15:24:04.902635 systemd[1]: var-lib-docker-metacopy\x2dcheck2193285519-merged.mount: Deactivated successfully. Apr 20 15:24:04.946199 dockerd[1847]: time="2026-04-20T15:24:04.942943294Z" level=info msg="Loading containers: start." Apr 20 15:24:05.098961 kernel: Initializing XFRM netlink socket Apr 20 15:24:08.061152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 20 15:24:08.154717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:24:10.590098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:24:10.649291 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:24:10.658949 systemd-networkd[1435]: docker0: Link UP Apr 20 15:24:10.742029 dockerd[1847]: time="2026-04-20T15:24:10.738569140Z" level=info msg="Loading containers: done." 
Apr 20 15:24:11.010702 dockerd[1847]: time="2026-04-20T15:24:11.007858607Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 20 15:24:11.017074 dockerd[1847]: time="2026-04-20T15:24:11.012674041Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2 Apr 20 15:24:11.017074 dockerd[1847]: time="2026-04-20T15:24:11.013050517Z" level=info msg="Initializing buildkit" Apr 20 15:24:11.013249 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1241194745-merged.mount: Deactivated successfully. Apr 20 15:24:11.162363 dockerd[1847]: time="2026-04-20T15:24:11.159553925Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory" Apr 20 15:24:11.162363 dockerd[1847]: time="2026-04-20T15:24:11.159600006Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory" Apr 20 15:24:11.192210 kubelet[2028]: E0420 15:24:11.187082 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:24:11.207094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:24:11.211613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:24:11.212878 systemd[1]: kubelet.service: Consumed 1.724s CPU time, 111.4M memory peak. 
Apr 20 15:24:11.593926 dockerd[1847]: time="2026-04-20T15:24:11.590486731Z" level=info msg="Completed buildkit initialization" Apr 20 15:24:12.069173 dockerd[1847]: time="2026-04-20T15:24:12.064678607Z" level=info msg="Daemon has completed initialization" Apr 20 15:24:12.069173 dockerd[1847]: time="2026-04-20T15:24:12.065148174Z" level=info msg="API listen on /run/docker.sock" Apr 20 15:24:12.078265 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 20 15:24:20.104694 containerd[1635]: time="2026-04-20T15:24:20.103484895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 20 15:24:21.708831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 20 15:24:22.047465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:24:24.677362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:24:24.751115 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:24:25.682772 kubelet[2089]: E0420 15:24:25.680800 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:24:25.745789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:24:25.764128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:24:25.848334 systemd[1]: kubelet.service: Consumed 1.533s CPU time, 110.6M memory peak. Apr 20 15:24:32.681636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023082861.mount: Deactivated successfully. 
Apr 20 15:24:35.218957 update_engine[1619]: I20260420 15:24:35.216202 1619 update_attempter.cc:509] Updating boot flags... Apr 20 15:24:36.004825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 20 15:24:36.048290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:24:39.920698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:24:40.257080 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:24:41.414062 kubelet[2136]: E0420 15:24:41.412955 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:24:41.421189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:24:41.425431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:24:41.435340 systemd[1]: kubelet.service: Consumed 2.537s CPU time, 110.3M memory peak. Apr 20 15:24:52.105332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 20 15:24:52.392221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:24:54.891039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:24:55.107810 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:24:56.596913 kubelet[2156]: E0420 15:24:56.596334 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:24:56.644335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:24:56.645980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:24:56.665909 systemd[1]: kubelet.service: Consumed 2.614s CPU time, 110.2M memory peak. Apr 20 15:25:07.150941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 20 15:25:07.392096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:25:10.181794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:25:10.219906 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:25:11.509209 kubelet[2178]: E0420 15:25:11.508332 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:25:11.520162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:25:11.520352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:25:11.549263 systemd[1]: kubelet.service: Consumed 2.559s CPU time, 110.8M memory peak. 
Apr 20 15:25:21.765215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 20 15:25:22.048580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:25:25.236873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:25:25.342420 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:25:26.852498 kubelet[2235]: E0420 15:25:26.850362 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:25:26.937944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:25:26.951009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:25:26.976747 systemd[1]: kubelet.service: Consumed 2.857s CPU time, 110M memory peak. Apr 20 15:25:37.539210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 20 15:25:37.807119 containerd[1635]: time="2026-04-20T15:25:37.800368120Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27088853" Apr 20 15:25:37.860813 containerd[1635]: time="2026-04-20T15:25:37.802092681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:25:37.922100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 15:25:38.526125 containerd[1635]: time="2026-04-20T15:25:38.523278503Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:25:40.295684 containerd[1635]: time="2026-04-20T15:25:40.294825245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:25:41.107780 containerd[1635]: time="2026-04-20T15:25:41.103898978Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1m20.995836197s" Apr 20 15:25:41.122106 containerd[1635]: time="2026-04-20T15:25:41.113030349Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 20 15:25:41.415096 containerd[1635]: time="2026-04-20T15:25:41.356108991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 20 15:25:41.796219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:25:41.850842 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:25:42.905468 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1114292786 wd_nsec: 1114293318 Apr 20 15:25:44.220010 kubelet[2252]: E0420 15:25:44.211216 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:25:44.260249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:25:44.297085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:25:44.308935 systemd[1]: kubelet.service: Consumed 3.983s CPU time, 110.5M memory peak. Apr 20 15:25:54.898639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 20 15:25:55.408101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:26:01.104489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:26:01.297887 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:26:02.522564 kubelet[2273]: E0420 15:26:02.522039 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:26:02.582167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:26:02.582610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:26:02.660138 systemd[1]: kubelet.service: Consumed 3.938s CPU time, 110.6M memory peak. Apr 20 15:26:11.682828 containerd[1635]: time="2026-04-20T15:26:11.678908551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:11.682828 containerd[1635]: time="2026-04-20T15:26:11.680862834Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=1, bytes read=19922944" Apr 20 15:26:11.909031 containerd[1635]: time="2026-04-20T15:26:11.906123093Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:12.710901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 20 15:26:12.860680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 15:26:13.247316 containerd[1635]: time="2026-04-20T15:26:13.246325424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:13.737809 containerd[1635]: time="2026-04-20T15:26:13.735793119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 32.325643068s" Apr 20 15:26:13.737809 containerd[1635]: time="2026-04-20T15:26:13.735953439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 20 15:26:13.879704 containerd[1635]: time="2026-04-20T15:26:13.878041648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 20 15:26:14.937334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:26:15.021292 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:26:15.444823 kubelet[2290]: E0420 15:26:15.444113 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:26:15.456995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:26:15.458480 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:26:15.477326 systemd[1]: kubelet.service: Consumed 1.621s CPU time, 110M memory peak. Apr 20 15:26:24.514748 containerd[1635]: time="2026-04-20T15:26:24.513911234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:24.545644 containerd[1635]: time="2026-04-20T15:26:24.530048539Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15802120" Apr 20 15:26:24.721837 containerd[1635]: time="2026-04-20T15:26:24.720916839Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:25.472876 containerd[1635]: time="2026-04-20T15:26:25.471054781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:25.586643 containerd[1635]: time="2026-04-20T15:26:25.584223164Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id 
\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 11.704166465s" Apr 20 15:26:25.600949 containerd[1635]: time="2026-04-20T15:26:25.588742315Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 20 15:26:25.612310 containerd[1635]: time="2026-04-20T15:26:25.609205722Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 20 15:26:25.716666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 20 15:26:25.750487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:26:27.076894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:26:27.229118 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:26:27.896062 kubelet[2311]: E0420 15:26:27.895138 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:26:27.901593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:26:27.901963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:26:27.904893 systemd[1]: kubelet.service: Consumed 1.497s CPU time, 110.6M memory peak. Apr 20 15:26:38.440808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. 
Apr 20 15:26:38.690964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:26:41.195492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:26:41.251269 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:26:41.612650 kubelet[2330]: E0420 15:26:41.604358 2330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:26:41.615752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:26:41.615941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:26:41.624581 systemd[1]: kubelet.service: Consumed 1.654s CPU time, 110.3M memory peak. Apr 20 15:26:51.764074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 20 15:26:51.815643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:26:53.736722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
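The two messages that repeat in every cycle above are related: the `Referenced but unset environment variable` warning and the missing `/var/lib/kubelet/config.yaml` are both what a kubeadm-packaged kubelet looks like before `kubeadm init` or `kubeadm join` has ever run on the node. The stock drop-in references files that do not exist yet; a sketch of that drop-in (contents per upstream kubeadm packaging, and possibly differing slightly on Flatcar):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (typical kubeadm drop-in)
[Service]
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# The leading "-" makes a missing file non-fatal; the variables it would set
# then expand to empty strings, producing the warning seen above.
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

Once kubeadm writes `config.yaml` and `kubeadm-flags.env`, the same unit starts cleanly, which is exactly what happens near the end of this log.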
Apr 20 15:26:53.886330 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:26:54.445824 kubelet[2348]: E0420 15:26:54.445500 2348 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:26:54.457075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:26:54.457316 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:26:54.547882 systemd[1]: kubelet.service: Consumed 1.733s CPU time, 110.5M memory peak. Apr 20 15:26:55.475195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236812176.mount: Deactivated successfully. Apr 20 15:26:58.828528 containerd[1635]: time="2026-04-20T15:26:58.774608961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:58.866596 containerd[1635]: time="2026-04-20T15:26:58.837961130Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=1, bytes read=15502082" Apr 20 15:26:59.051954 containerd[1635]: time="2026-04-20T15:26:59.048586921Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:59.315940 containerd[1635]: time="2026-04-20T15:26:59.314892656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:26:59.486048 containerd[1635]: time="2026-04-20T15:26:59.483771877Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 33.867986288s" Apr 20 15:26:59.495502 containerd[1635]: time="2026-04-20T15:26:59.486624557Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 20 15:26:59.507805 containerd[1635]: time="2026-04-20T15:26:59.507474288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 20 15:27:04.764524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 20 15:27:04.863931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:27:05.214813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390566356.mount: Deactivated successfully. Apr 20 15:27:06.246482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:27:06.343982 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:27:06.623999 kubelet[2380]: E0420 15:27:06.621496 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:27:06.627296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:27:06.627542 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 15:27:06.628270 systemd[1]: kubelet.service: Consumed 1.138s CPU time, 110.5M memory peak. Apr 20 15:27:12.875992 containerd[1635]: time="2026-04-20T15:27:12.868157642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:27:12.937843 containerd[1635]: time="2026-04-20T15:27:12.926143017Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22379772" Apr 20 15:27:13.405987 containerd[1635]: time="2026-04-20T15:27:13.403491326Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:27:14.640932 containerd[1635]: time="2026-04-20T15:27:14.638776176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:27:15.071799 containerd[1635]: time="2026-04-20T15:27:15.068145127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 15.559734751s" Apr 20 15:27:15.071799 containerd[1635]: time="2026-04-20T15:27:15.070094889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 20 15:27:15.082933 containerd[1635]: time="2026-04-20T15:27:15.081989093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 20 15:27:16.774755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
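The containerd `Pulled image` records carry enough data to sanity-check pull speed: each reports a byte size and a wall-clock duration. A small sketch using the kube-proxy record above (regex keyed to containerd's `size "..." in ...s` phrasing):

```python
import re

# One containerd "Pulled image" record from the log above (kube-proxy),
# trimmed to the fields the regex needs.
record = 'Pulled image "registry.k8s.io/kube-proxy:v1.34.7" ... size "25971973" in 33.867986288s'

m = re.search(r'size "(\d+)" in ([\d.]+)s', record)
size_bytes, seconds = int(m.group(1)), float(m.group(2))
mib_per_s = size_bytes / seconds / (1 << 20)
print(f"{mib_per_s:.2f} MiB/s")  # about 0.73 MiB/s for this pull
```

Under a MiB/s of effective throughput explains why the etcd image later takes over a minute.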
Apr 20 15:27:16.906867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:27:18.205133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:27:18.344083 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:27:19.629501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778291817.mount: Deactivated successfully. Apr 20 15:27:19.814741 kubelet[2437]: E0420 15:27:19.813694 2437 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:27:19.828355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:27:19.896772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:27:19.909959 systemd[1]: kubelet.service: Consumed 2.143s CPU time, 112.4M memory peak. 
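The restart counters above advance on an almost fixed beat, which is the signature of a constant `RestartSec` plus a few seconds of kubelet startup before the config-file check fails. The spacing falls straight out of the journal timestamps; a sketch using the `Scheduled restart job` times for counters 10 through 14:

```python
from datetime import datetime

# "Scheduled restart job" timestamps for restart counters 10..14,
# copied from the journal lines above (all the same day).
stamps = ["15:26:25.716666", "15:26:38.440808", "15:26:51.764074",
          "15:27:04.764524", "15:27:16.774755"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
gaps = [round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])]
print(gaps)  # [12.7, 13.3, 13.0, 12.0] -- a steady ~13 s per attempt
```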
Apr 20 15:27:19.918409 containerd[1635]: time="2026-04-20T15:27:19.917739043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 15:27:19.925512 containerd[1635]: time="2026-04-20T15:27:19.924704194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 15:27:20.038779 containerd[1635]: time="2026-04-20T15:27:20.037947481Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 15:27:20.349792 containerd[1635]: time="2026-04-20T15:27:20.345017527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 15:27:20.580670 containerd[1635]: time="2026-04-20T15:27:20.580202479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 5.49526329s" Apr 20 15:27:20.580670 containerd[1635]: time="2026-04-20T15:27:20.580588624Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 20 15:27:20.653024 containerd[1635]: time="2026-04-20T15:27:20.649338476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 20 15:27:27.838980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154942856.mount: 
Deactivated successfully. Apr 20 15:27:30.095053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Apr 20 15:27:30.298947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:27:32.515520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:27:32.725688 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:27:34.149332 kubelet[2468]: E0420 15:27:34.146598 2468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:27:34.237918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:27:34.238292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:27:34.255168 systemd[1]: kubelet.service: Consumed 2.437s CPU time, 110.5M memory peak. Apr 20 15:27:44.229142 update_engine[1619]: I20260420 15:27:44.222533 1619 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 20 15:27:44.229142 update_engine[1619]: I20260420 15:27:44.229228 1619 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 20 15:27:44.246167 update_engine[1619]: I20260420 15:27:44.233170 1619 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 20 15:27:44.261228 update_engine[1619]: I20260420 15:27:44.260746 1619 omaha_request_params.cc:62] Current group set to alpha Apr 20 15:27:44.299017 update_engine[1619]: I20260420 15:27:44.296443 1619 update_attempter.cc:499] Already updated boot flags. Skipping. 
Apr 20 15:27:44.299017 update_engine[1619]: I20260420 15:27:44.296854 1619 update_attempter.cc:643] Scheduling an action processor start. Apr 20 15:27:44.299017 update_engine[1619]: I20260420 15:27:44.297151 1619 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 15:27:44.307942 update_engine[1619]: I20260420 15:27:44.304148 1619 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 20 15:27:44.307942 update_engine[1619]: I20260420 15:27:44.304291 1619 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 15:27:44.307942 update_engine[1619]: I20260420 15:27:44.304298 1619 omaha_request_action.cc:272] Request: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: Apr 20 15:27:44.307942 update_engine[1619]: I20260420 15:27:44.304304 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 15:27:44.324295 locksmithd[1712]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 20 15:27:44.343497 update_engine[1619]: I20260420 15:27:44.341174 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 15:27:44.344059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 20 15:27:44.438217 update_engine[1619]: I20260420 15:27:44.434312 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 15:27:44.451835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 15:27:44.463210 update_engine[1619]: E20260420 15:27:44.459200 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 15:27:44.497091 update_engine[1619]: I20260420 15:27:44.491531 1619 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 20 15:27:46.546964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:27:46.693359 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:27:47.342342 kubelet[2485]: E0420 15:27:47.340869 2485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:27:47.393354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:27:47.395907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:27:47.404585 systemd[1]: kubelet.service: Consumed 1.616s CPU time, 110.5M memory peak. Apr 20 15:27:55.197109 update_engine[1619]: I20260420 15:27:55.195273 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 15:27:55.197109 update_engine[1619]: I20260420 15:27:55.197485 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 15:27:55.205938 update_engine[1619]: I20260420 15:27:55.201149 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 15:27:55.213573 update_engine[1619]: E20260420 15:27:55.212052 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 15:27:55.213573 update_engine[1619]: I20260420 15:27:55.213027 1619 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 20 15:27:57.431474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 20 15:27:57.511334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:27:59.568001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:27:59.725104 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:28:01.042892 kubelet[2503]: E0420 15:28:01.042500 2503 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:28:01.115590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:28:01.119328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:28:01.124235 systemd[1]: kubelet.service: Consumed 2.262s CPU time, 109.8M memory peak. Apr 20 15:28:05.224253 update_engine[1619]: I20260420 15:28:05.220647 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 15:28:05.224253 update_engine[1619]: I20260420 15:28:05.223215 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 15:28:05.236666 update_engine[1619]: I20260420 15:28:05.236619 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 15:28:05.260861 update_engine[1619]: E20260420 15:28:05.259064 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 15:28:05.271110 update_engine[1619]: I20260420 15:28:05.265684 1619 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 20 15:28:11.232017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Apr 20 15:28:11.317142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:28:12.766619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:28:12.816482 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:28:13.195093 kubelet[2555]: E0420 15:28:13.193353 2555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:28:13.208305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:28:13.214552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:28:13.230708 systemd[1]: kubelet.service: Consumed 1.153s CPU time, 110.2M memory peak. Apr 20 15:28:15.207224 update_engine[1619]: I20260420 15:28:15.199751 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.212245 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.218072 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 15:28:15.261588 update_engine[1619]: E20260420 15:28:15.239001 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.240329 1619 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.240351 1619 omaha_request_action.cc:617] Omaha request response: Apr 20 15:28:15.261588 update_engine[1619]: E20260420 15:28:15.240619 1619 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.243072 1619 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.243148 1619 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.243155 1619 update_attempter.cc:306] Processing Done. Apr 20 15:28:15.261588 update_engine[1619]: E20260420 15:28:15.244328 1619 update_attempter.cc:619] Update failed. Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.244508 1619 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.244513 1619 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.244516 1619 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.247774 1619 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.252208 1619 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 15:28:15.261588 update_engine[1619]: I20260420 15:28:15.252667 1619 omaha_request_action.cc:272] Request: Apr 20 15:28:15.261588 update_engine[1619]: Apr 20 15:28:15.261588 update_engine[1619]: Apr 20 15:28:15.315023 locksmithd[1712]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 20 15:28:15.315023 locksmithd[1712]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 20 15:28:15.330572 update_engine[1619]: Apr 20 15:28:15.330572 update_engine[1619]: Apr 20 15:28:15.330572 update_engine[1619]: Apr 20 15:28:15.330572 update_engine[1619]: Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.252703 1619 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.253010 1619 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.257305 1619 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 15:28:15.330572 update_engine[1619]: E20260420 15:28:15.302217 1619 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.306247 1619 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.308573 1619 omaha_request_action.cc:617] Omaha request response: Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.308740 1619 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.308748 1619 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.308753 1619 update_attempter.cc:306] Processing Done. Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.308876 1619 update_attempter.cc:310] Error event sent. 
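`Could not resolve host: disabled` is not a DNS outage: the Omaha endpoint is literally the string `disabled`, which update_engine hands to curl as a hostname. On Flatcar that is the conventional way to switch off update checks; a sketch of the relevant config (file path and keys assumed from Flatcar's update.conf convention):

```ini
# /etc/flatcar/update.conf -- merged over /usr/share/flatcar/update.conf
GROUP=alpha        # matches "Current group set to alpha" earlier in the log
SERVER=disabled    # not a URL: the Omaha POST goes to host "disabled",
                   # DNS fails, and update checks are effectively off
```

The repeated retry/error cycle and the final "Next update check in 40m29s" are therefore expected, not a fault.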
Apr 20 15:28:15.330572 update_engine[1619]: I20260420 15:28:15.309726 1619 update_check_scheduler.cc:74] Next update check in 40m29s Apr 20 15:28:19.736589 containerd[1635]: time="2026-04-20T15:28:19.668736288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:28:19.784018 containerd[1635]: time="2026-04-20T15:28:19.738868505Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=16582632" Apr 20 15:28:19.860513 containerd[1635]: time="2026-04-20T15:28:19.859431303Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:28:22.185048 containerd[1635]: time="2026-04-20T15:28:22.183209853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 15:28:22.832017 containerd[1635]: time="2026-04-20T15:28:22.826740522Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1m2.165850815s" Apr 20 15:28:22.845716 containerd[1635]: time="2026-04-20T15:28:22.836059852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 20 15:28:23.830824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Apr 20 15:28:24.064582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 15:28:26.666100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:28:26.819646 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:28:28.146529 kubelet[2595]: E0420 15:28:28.145673 2595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:28:28.214300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:28:28.223724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:28:28.244628 systemd[1]: kubelet.service: Consumed 2.475s CPU time, 110.4M memory peak. Apr 20 15:28:38.481901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Apr 20 15:28:38.546889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:28:40.623664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:28:40.748311 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:28:41.297057 kubelet[2635]: E0420 15:28:41.296048 2635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:28:41.316589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:28:41.321873 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
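Restarts 10 through 20 die identically, so there is no point waiting out the loop; the only question is whether `/var/lib/kubelet/config.yaml` exists yet. A minimal check, with the path taken from the error above (the remediation hints assume a kubeadm-managed node):

```python
import os

# Path taken verbatim from the kubelet error above. On a kubeadm-managed
# node it is written by "kubeadm init" / "kubeadm join"; until then every
# restart fails the same way.
CONFIG = "/var/lib/kubelet/config.yaml"

if os.path.exists(CONFIG):
    verdict = "config present; the crash loop has some other cause"
else:
    verdict = (CONFIG + " missing: finish kubeadm init/join, "
               "or stop kubelet until then")
print(verdict)
```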
Apr 20 15:28:41.330201 systemd[1]: kubelet.service: Consumed 1.733s CPU time, 109.6M memory peak. Apr 20 15:28:51.707738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Apr 20 15:28:51.885879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:28:54.121459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:28:54.298849 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 15:28:54.928326 kubelet[2650]: E0420 15:28:54.926753 2650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 15:28:54.966315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 15:28:54.966666 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 15:28:55.018995 systemd[1]: kubelet.service: Consumed 1.798s CPU time, 112.4M memory peak. Apr 20 15:28:58.250585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:28:58.293066 systemd[1]: kubelet.service: Consumed 1.798s CPU time, 112.4M memory peak. Apr 20 15:28:58.678874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:29:00.697796 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-6.scope)... Apr 20 15:29:00.700118 systemd[1]: Reloading... Apr 20 15:29:05.309847 systemd-ssh-generator[2716]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 15:29:05.338976 zram_generator::config[2724]: No configuration found. 
Apr 20 15:29:05.405125 (sd-exec-[2701]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 15:29:13.035056 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 15:29:15.031990 systemd[1]: Reloading finished in 14313 ms. Apr 20 15:29:15.278505 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 20 15:29:15.279160 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 20 15:29:15.280952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:29:15.281158 systemd[1]: kubelet.service: Consumed 1.223s CPU time, 98.5M memory peak. Apr 20 15:29:15.305412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:29:15.923036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:29:15.944807 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 15:29:16.119254 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 15:29:16.119254 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
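From this restart onward the picture changes: kubelet now finds its config file and gets far enough to warn about deprecated flags instead of crashing. As the second warning says, `--volume-plugin-dir` belongs in the config file; a sketch of the equivalent stanza (field name per the `kubelet.config.k8s.io/v1beta1` API, value illustrative):

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
volumePluginDir: /var/lib/kubelet/volumeplugins   # replaces --volume-plugin-dir
```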
Apr 20 15:29:16.119254 kubelet[2770]: I0420 15:29:16.119248 2770 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 20 15:29:17.730946 kubelet[2770]: I0420 15:29:17.730825 2770 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 20 15:29:17.730946 kubelet[2770]: I0420 15:29:17.730880 2770 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 20 15:29:17.732666 kubelet[2770]: I0420 15:29:17.731491 2770 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 20 15:29:17.732666 kubelet[2770]: I0420 15:29:17.731505 2770 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 20 15:29:17.732666 kubelet[2770]: I0420 15:29:17.731774 2770 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 20 15:29:17.771029 kubelet[2770]: E0420 15:29:17.770546 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 15:29:17.771029 kubelet[2770]: I0420 15:29:17.771255 2770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 20 15:29:17.815300 kubelet[2770]: I0420 15:29:17.814946 2770 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 20 15:29:17.829889 kubelet[2770]: I0420 15:29:17.829599 2770 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 20 15:29:17.831156 kubelet[2770]: I0420 15:29:17.831098 2770 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 20 15:29:17.831416 kubelet[2770]: I0420 15:29:17.831142 2770 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 20 15:29:17.831416 kubelet[2770]: I0420 15:29:17.831371 2770 topology_manager.go:138] "Creating topology manager with none policy"
Apr 20 15:29:17.831416 kubelet[2770]: I0420 15:29:17.831413 2770 container_manager_linux.go:306] "Creating device plugin manager"
Apr 20 15:29:17.831687 kubelet[2770]: I0420 15:29:17.831544 2770 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 20 15:29:17.835143 kubelet[2770]: I0420 15:29:17.834898 2770 state_mem.go:36] "Initialized new in-memory state store"
Apr 20 15:29:17.835143 kubelet[2770]: I0420 15:29:17.835167 2770 kubelet.go:475] "Attempting to sync node with API server"
Apr 20 15:29:17.835143 kubelet[2770]: I0420 15:29:17.835239 2770 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 20 15:29:17.835143 kubelet[2770]: I0420 15:29:17.835354 2770 kubelet.go:387] "Adding apiserver pod source"
Apr 20 15:29:17.835143 kubelet[2770]: I0420 15:29:17.835444 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 20 15:29:17.838799 kubelet[2770]: E0420 15:29:17.837192 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 15:29:17.840592 kubelet[2770]: E0420 15:29:17.840352 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 15:29:17.848716 kubelet[2770]: I0420 15:29:17.848482 2770 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1"
Apr 20 15:29:17.850449 kubelet[2770]: I0420 15:29:17.849008 2770 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 20 15:29:17.850449 kubelet[2770]: I0420 15:29:17.849029 2770 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 20 15:29:17.850449 kubelet[2770]: W0420 15:29:17.849104 2770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 20 15:29:17.887949 kubelet[2770]: I0420 15:29:17.886101 2770 server.go:1262] "Started kubelet"
Apr 20 15:29:17.887949 kubelet[2770]: I0420 15:29:17.886854 2770 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 20 15:29:17.887949 kubelet[2770]: I0420 15:29:17.887092 2770 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 20 15:29:17.887949 kubelet[2770]: I0420 15:29:17.887910 2770 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 20 15:29:17.891411 kubelet[2770]: I0420 15:29:17.890946 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 20 15:29:17.891411 kubelet[2770]: E0420 15:29:17.889822 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 15:29:17.892874 kubelet[2770]: I0420 15:29:17.891741 2770 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 20 15:29:17.898425 kubelet[2770]: I0420 15:29:17.896333 2770 server.go:310] "Adding debug handlers to kubelet server"
Apr 20 15:29:17.898425 kubelet[2770]: I0420 15:29:17.896551 2770 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 20 15:29:17.898425 kubelet[2770]: E0420 15:29:17.896944 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 15:29:17.898425 kubelet[2770]: I0420 15:29:17.894600 2770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 20 15:29:17.900880 kubelet[2770]: I0420 15:29:17.900650 2770 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 20 15:29:17.901669 kubelet[2770]: E0420 15:29:17.901457 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 15:29:17.901749 kubelet[2770]: I0420 15:29:17.901715 2770 reconciler.go:29] "Reconciler: start to sync state"
Apr 20 15:29:17.904306 kubelet[2770]: E0420 15:29:17.901801 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms"
Apr 20 15:29:17.919706 kubelet[2770]: I0420 15:29:17.919455 2770 factory.go:223] Registration of the systemd container factory successfully
Apr 20 15:29:17.919706 kubelet[2770]: I0420 15:29:17.919693 2770 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 20 15:29:17.923760 kubelet[2770]: I0420 15:29:17.923677 2770 factory.go:223] Registration of the containerd container factory successfully
Apr 20 15:29:18.029270 kubelet[2770]: E0420 15:29:18.024902 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 15:29:18.030522 kubelet[2770]: I0420 15:29:18.029409 2770 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 20 15:29:18.030522 kubelet[2770]: I0420 15:29:18.029434 2770 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 20 15:29:18.030522 kubelet[2770]: I0420 15:29:18.029455 2770 state_mem.go:36] "Initialized new in-memory state store"
Apr 20 15:29:18.030817 kubelet[2770]: I0420 15:29:18.030800 2770 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 20 15:29:18.034067 kubelet[2770]: I0420 15:29:18.033132 2770 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 20 15:29:18.034067 kubelet[2770]: I0420 15:29:18.034282 2770 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 20 15:29:18.040677 kubelet[2770]: I0420 15:29:18.036693 2770 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 20 15:29:18.040677 kubelet[2770]: I0420 15:29:18.036716 2770 policy_none.go:49] "None policy: Start"
Apr 20 15:29:18.040677 kubelet[2770]: I0420 15:29:18.036770 2770 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 20 15:29:18.040677 kubelet[2770]: I0420 15:29:18.036783 2770 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 20 15:29:18.040677 kubelet[2770]: E0420 15:29:18.036890 2770 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 20 15:29:18.046807 kubelet[2770]: I0420 15:29:18.041199 2770 policy_none.go:47] "Start"
Apr 20 15:29:18.047028 kubelet[2770]: E0420 15:29:18.046974 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 20 15:29:18.067048 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 20 15:29:18.105772 kubelet[2770]: E0420 15:29:18.105644 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms"
Apr 20 15:29:18.110159 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 20 15:29:18.127539 kubelet[2770]: E0420 15:29:18.127116 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 20 15:29:18.140033 kubelet[2770]: E0420 15:29:18.139085 2770 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 20 15:29:18.153951 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 20 15:29:18.158541 kubelet[2770]: E0420 15:29:18.157951 2770 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 20 15:29:18.158541 kubelet[2770]: I0420 15:29:18.158530 2770 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 20 15:29:18.158541 kubelet[2770]: I0420 15:29:18.158558 2770 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 20 15:29:18.160669 kubelet[2770]: I0420 15:29:18.158973 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 20 15:29:18.175281 kubelet[2770]: E0420 15:29:18.174797 2770 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 20 15:29:18.175281 kubelet[2770]: E0420 15:29:18.175119 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 20 15:29:18.360022 kubelet[2770]: I0420 15:29:18.358495 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:29:18.360022 kubelet[2770]: E0420 15:29:18.359771 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Apr 20 15:29:18.435052 kubelet[2770]: I0420 15:29:18.434783 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/087ce51c238cac7808178dd7f5c26d13-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"087ce51c238cac7808178dd7f5c26d13\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 15:29:18.435052 kubelet[2770]: I0420 15:29:18.434997 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:29:18.435052 kubelet[2770]: I0420 15:29:18.435133 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:29:18.435052 kubelet[2770]: I0420 15:29:18.435197 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:29:18.435052 kubelet[2770]: I0420 15:29:18.435219 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 20 15:29:18.448979 kubelet[2770]: I0420 15:29:18.435266 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/087ce51c238cac7808178dd7f5c26d13-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"087ce51c238cac7808178dd7f5c26d13\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 15:29:18.448979 kubelet[2770]: I0420 15:29:18.435318 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/087ce51c238cac7808178dd7f5c26d13-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"087ce51c238cac7808178dd7f5c26d13\") " pod="kube-system/kube-apiserver-localhost"
Apr 20 15:29:18.448979 kubelet[2770]: I0420 15:29:18.435337 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:29:18.448979 kubelet[2770]: I0420 15:29:18.435357 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 20 15:29:18.458044 systemd[1]: Created slice kubepods-burstable-pod087ce51c238cac7808178dd7f5c26d13.slice - libcontainer container kubepods-burstable-pod087ce51c238cac7808178dd7f5c26d13.slice.
Apr 20 15:29:18.490953 kubelet[2770]: E0420 15:29:18.490692 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:18.526216 kubelet[2770]: E0420 15:29:18.525533 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms"
Apr 20 15:29:18.565255 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice.
Apr 20 15:29:18.590966 kubelet[2770]: I0420 15:29:18.590567 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:29:18.590966 kubelet[2770]: E0420 15:29:18.591253 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Apr 20 15:29:18.609811 kubelet[2770]: E0420 15:29:18.609633 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:18.612570 kubelet[2770]: E0420 15:29:18.612462 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:18.613730 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice.
Apr 20 15:29:18.628715 kubelet[2770]: E0420 15:29:18.627527 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:18.633748 containerd[1635]: time="2026-04-20T15:29:18.633698537Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"c6bb8708a026256e82ca4c5631a78b5a\" namespace:\"kube-system\""
Apr 20 15:29:18.643134 kubelet[2770]: E0420 15:29:18.634014 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:18.643722 containerd[1635]: time="2026-04-20T15:29:18.643679247Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"824fd89300514e351ed3b68d82c665c6\" namespace:\"kube-system\""
Apr 20 15:29:18.656719 kubelet[2770]: E0420 15:29:18.656317 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 15:29:18.838102 systemd[1736]: Created slice background.slice - User Background Tasks Slice.
Apr 20 15:29:18.862169 systemd[1736]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Apr 20 15:29:18.866030 kubelet[2770]: E0420 15:29:18.854455 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:18.866555 containerd[1635]: time="2026-04-20T15:29:18.866402937Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"087ce51c238cac7808178dd7f5c26d13\" namespace:\"kube-system\""
Apr 20 15:29:18.913507 containerd[1635]: time="2026-04-20T15:29:18.903134520Z" level=info msg="connecting to shim 36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:29:18.918807 systemd[1736]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Apr 20 15:29:18.924909 containerd[1635]: time="2026-04-20T15:29:18.919003176Z" level=info msg="connecting to shim 3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08" address="unix:///run/containerd/s/909e6cfc558d4fc91d810de80342fc4f6713a6b29dc9d1596b2d8f2a2ab41cb6" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:29:18.947578 kubelet[2770]: E0420 15:29:18.946482 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 15:29:18.958697 kubelet[2770]: E0420 15:29:18.952492 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 15:29:19.037326 kubelet[2770]: I0420 15:29:19.035835 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:29:19.037326 kubelet[2770]: E0420 15:29:19.036825 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Apr 20 15:29:19.154294 containerd[1635]: time="2026-04-20T15:29:19.153826233Z" level=info msg="connecting to shim 1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980" address="unix:///run/containerd/s/7bc90663f25f594c94fa1c5aef28bfcccf7d3c3364629dedc834cc31c286a9b0" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:29:19.173201 systemd[1]: Started cri-containerd-3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08.scope - libcontainer container 3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08.
Apr 20 15:29:19.190593 systemd[1]: Started cri-containerd-36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063.scope - libcontainer container 36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063.
Apr 20 15:29:19.225964 systemd[1]: Started cri-containerd-1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980.scope - libcontainer container 1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980.
Apr 20 15:29:19.331074 kubelet[2770]: E0420 15:29:19.330090 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s"
Apr 20 15:29:19.331074 kubelet[2770]: E0420 15:29:19.330150 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 20 15:29:19.526025 containerd[1635]: time="2026-04-20T15:29:19.492227695Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"c6bb8708a026256e82ca4c5631a78b5a\" namespace:\"kube-system\" returns sandbox id \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\""
Apr 20 15:29:19.564926 containerd[1635]: time="2026-04-20T15:29:19.562973501Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"824fd89300514e351ed3b68d82c665c6\" namespace:\"kube-system\" returns sandbox id \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\""
Apr 20 15:29:19.586272 containerd[1635]: time="2026-04-20T15:29:19.584724229Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"087ce51c238cac7808178dd7f5c26d13\" namespace:\"kube-system\" returns sandbox id \"1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980\""
Apr 20 15:29:19.589640 kubelet[2770]: E0420 15:29:19.589275 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:19.589640 kubelet[2770]: E0420 15:29:19.589553 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:19.601309 kubelet[2770]: E0420 15:29:19.600669 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:19.631740 containerd[1635]: time="2026-04-20T15:29:19.631537602Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\""
Apr 20 15:29:19.635470 containerd[1635]: time="2026-04-20T15:29:19.631580598Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for container name:\"kube-scheduler\""
Apr 20 15:29:19.641173 containerd[1635]: time="2026-04-20T15:29:19.631629427Z" level=info msg="CreateContainer within sandbox \"1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980\" for container name:\"kube-apiserver\""
Apr 20 15:29:19.710038 containerd[1635]: time="2026-04-20T15:29:19.708005212Z" level=info msg="Container 1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:29:19.710038 containerd[1635]: time="2026-04-20T15:29:19.710044163Z" level=info msg="Container 6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:29:19.747604 containerd[1635]: time="2026-04-20T15:29:19.747319653Z" level=info msg="Container b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:29:19.833075 kubelet[2770]: E0420 15:29:19.825927 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 20 15:29:19.856121 containerd[1635]: time="2026-04-20T15:29:19.855043593Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for name:\"kube-scheduler\" returns container id \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\""
Apr 20 15:29:19.858706 kubelet[2770]: I0420 15:29:19.855846 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:29:19.858706 kubelet[2770]: E0420 15:29:19.856183 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Apr 20 15:29:19.864350 containerd[1635]: time="2026-04-20T15:29:19.863045288Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" returns container id \"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\""
Apr 20 15:29:19.872859 containerd[1635]: time="2026-04-20T15:29:19.871888506Z" level=info msg="StartContainer for \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\""
Apr 20 15:29:19.877452 containerd[1635]: time="2026-04-20T15:29:19.874210089Z" level=info msg="StartContainer for \"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\""
Apr 20 15:29:19.914830 containerd[1635]: time="2026-04-20T15:29:19.913492875Z" level=info msg="connecting to shim 6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3" address="unix:///run/containerd/s/909e6cfc558d4fc91d810de80342fc4f6713a6b29dc9d1596b2d8f2a2ab41cb6" protocol=ttrpc version=3
Apr 20 15:29:19.914830 containerd[1635]: time="2026-04-20T15:29:19.913345203Z" level=info msg="connecting to shim 1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3
Apr 20 15:29:19.914830 containerd[1635]: time="2026-04-20T15:29:19.913992545Z" level=info msg="CreateContainer within sandbox \"1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980\" for name:\"kube-apiserver\" returns container id \"b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d\""
Apr 20 15:29:20.022894 containerd[1635]: time="2026-04-20T15:29:20.022714913Z" level=info msg="StartContainer for \"b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d\""
Apr 20 15:29:20.091848 containerd[1635]: time="2026-04-20T15:29:20.090177991Z" level=info msg="connecting to shim b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d" address="unix:///run/containerd/s/7bc90663f25f594c94fa1c5aef28bfcccf7d3c3364629dedc834cc31c286a9b0" protocol=ttrpc version=3
Apr 20 15:29:20.223964 systemd[1]: Started cri-containerd-1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9.scope - libcontainer container 1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9.
Apr 20 15:29:20.296478 systemd[1]: Started cri-containerd-6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3.scope - libcontainer container 6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3.
Apr 20 15:29:20.448965 systemd[1]: Started cri-containerd-b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d.scope - libcontainer container b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d.
Apr 20 15:29:20.640347 kubelet[2770]: E0420 15:29:20.640219 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 15:29:20.946217 kubelet[2770]: E0420 15:29:20.945461 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="3.2s"
Apr 20 15:29:21.102847 kubelet[2770]: E0420 15:29:21.102056 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 15:29:21.169341 kubelet[2770]: E0420 15:29:21.123133 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 20 15:29:21.365134 containerd[1635]: time="2026-04-20T15:29:21.359052566Z" level=info msg="StartContainer for \"b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d\" returns successfully"
Apr 20 15:29:21.498160 kubelet[2770]: E0420 15:29:21.497631 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 15:29:21.542495 kubelet[2770]: I0420 15:29:21.540603 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:29:21.558834 kubelet[2770]: E0420 15:29:21.558419 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Apr 20 15:29:21.691967 containerd[1635]: time="2026-04-20T15:29:21.689925378Z" level=info msg="StartContainer for \"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\" returns successfully"
Apr 20 15:29:21.964044 containerd[1635]: time="2026-04-20T15:29:21.959210302Z" level=info msg="StartContainer for \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" returns successfully"
Apr 20 15:29:23.399744 kubelet[2770]: E0420 15:29:23.398765 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:23.399744 kubelet[2770]: E0420 15:29:23.399056 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:24.019612 kubelet[2770]: E0420 15:29:24.019043 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:24.019612 kubelet[2770]: E0420 15:29:24.019824 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:24.721211 kubelet[2770]: E0420 15:29:24.667871 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:24.748056 kubelet[2770]: E0420 15:29:24.725337 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:24.818900 kubelet[2770]: I0420 15:29:24.817846 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 20 15:29:25.907617 kubelet[2770]: E0420 15:29:25.903734 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:25.907617 kubelet[2770]: E0420 15:29:25.912699 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:26.145739 kubelet[2770]: E0420 15:29:26.109660 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:26.145739 kubelet[2770]: E0420 15:29:26.109874 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:26.145739 kubelet[2770]: E0420 15:29:26.144894 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:26.145739 kubelet[2770]: E0420 15:29:26.145248 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:29:27.128932 kubelet[2770]: E0420 15:29:27.127489 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 20 15:29:27.229751
kubelet[2770]: E0420 15:29:27.132070 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:27.229751 kubelet[2770]: E0420 15:29:27.151869 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:29:27.229751 kubelet[2770]: E0420 15:29:27.154176 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:28.183750 kubelet[2770]: E0420 15:29:28.182021 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:29:30.633083 kubelet[2770]: E0420 15:29:30.628803 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:29:30.716729 kubelet[2770]: E0420 15:29:30.638027 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:32.673256 kubelet[2770]: E0420 15:29:32.668105 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:29:32.673256 kubelet[2770]: E0420 15:29:32.673049 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:32.709998 kubelet[2770]: E0420 15:29:32.709017 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": net/http: TLS handshake 
timeout" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:29:33.653017 kubelet[2770]: E0420 15:29:33.651929 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:29:33.653017 kubelet[2770]: E0420 15:29:33.655921 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:33.950167 kubelet[2770]: E0420 15:29:33.931059 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:29:34.230115 kubelet[2770]: E0420 15:29:34.217879 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Apr 20 15:29:34.230115 kubelet[2770]: E0420 15:29:34.219594 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 
15:29:34.230115 kubelet[2770]: E0420 15:29:34.220148 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:34.833576 kubelet[2770]: E0420 15:29:34.827044 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 15:29:35.632122 kubelet[2770]: E0420 15:29:35.630446 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:29:36.045816 kubelet[2770]: E0420 15:29:36.042968 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:29:36.161205 kubelet[2770]: E0420 15:29:36.147900 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:29:37.823019 kubelet[2770]: E0420 15:29:37.821212 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:29:38.193166 kubelet[2770]: E0420 
15:29:38.190178 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:29:41.435035 kubelet[2770]: I0420 15:29:41.434432 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:29:45.221816 kubelet[2770]: E0420 15:29:45.220697 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:29:45.221816 kubelet[2770]: E0420 15:29:45.227559 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:29:48.205974 kubelet[2770]: E0420 15:29:48.204640 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:29:50.625984 kubelet[2770]: E0420 15:29:50.624466 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 15:29:51.461192 kubelet[2770]: E0420 15:29:51.459135 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 15:29:52.021913 kubelet[2770]: E0420 15:29:52.020623 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:29:52.750264 kubelet[2770]: E0420 15:29:52.742821 2770 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:29:52.858068 kubelet[2770]: E0420 15:29:52.857028 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:29:54.119338 kubelet[2770]: E0420 15:29:54.109624 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:29:54.451231 kubelet[2770]: E0420 15:29:54.448861 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:29:58.139224 kubelet[2770]: E0420 15:29:58.135237 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:29:58.208222 kubelet[2770]: E0420 15:29:58.207703 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:29:58.634274 kubelet[2770]: I0420 15:29:58.629456 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:30:07.648864 kubelet[2770]: E0420 15:30:07.643833 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 15:30:08.237990 kubelet[2770]: E0420 15:30:08.237047 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:30:08.734147 kubelet[2770]: E0420 15:30:08.733616 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 15:30:12.817133 kubelet[2770]: E0420 15:30:12.811896 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC 
m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:30:15.764209 kubelet[2770]: I0420 15:30:15.762280 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:30:18.341237 kubelet[2770]: E0420 15:30:18.279094 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:30:18.776791 kubelet[2770]: E0420 15:30:18.776311 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:30:18.776791 kubelet[2770]: E0420 15:30:18.776504 2770 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:30:20.633156 kubelet[2770]: E0420 15:30:20.631197 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:30:23.095013 kubelet[2770]: E0420 15:30:23.093322 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 15:30:24.705183 kubelet[2770]: E0420 15:30:24.704005 2770 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 15:30:25.829964 kubelet[2770]: E0420 15:30:25.829086 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 15:30:27.343118 kubelet[2770]: E0420 15:30:27.341310 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:30:28.296916 kubelet[2770]: E0420 15:30:28.295099 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:30:32.645460 kubelet[2770]: E0420 15:30:32.644838 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 15:30:32.829025 kubelet[2770]: E0420 15:30:32.824301 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:30:32.992454 kubelet[2770]: I0420 15:30:32.956789 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:30:38.238006 kubelet[2770]: E0420 15:30:38.236749 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:30:38.247138 kubelet[2770]: E0420 15:30:38.241232 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:30:38.299751 kubelet[2770]: E0420 15:30:38.297328 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:30:41.712158 kubelet[2770]: E0420 15:30:41.711364 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 15:30:43.000720 kubelet[2770]: E0420 15:30:42.999104 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 15:30:48.332336 kubelet[2770]: E0420 15:30:48.327179 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:30:50.147257 kubelet[2770]: I0420 
15:30:50.144794 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:30:52.953245 kubelet[2770]: E0420 15:30:52.943191 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:30:58.348097 kubelet[2770]: E0420 15:30:58.344515 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:30:58.732343 kubelet[2770]: E0420 15:30:58.731504 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 15:31:00.236213 kubelet[2770]: E0420 15:31:00.235223 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 15:31:00.461178 kubelet[2770]: E0420 15:31:00.369603 2770 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:31:00.461178 
kubelet[2770]: E0420 15:31:00.383515 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:31:00.864220 kubelet[2770]: E0420 15:31:00.862939 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 15:31:05.295123 kubelet[2770]: E0420 15:31:05.237995 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 15:31:05.530065 kubelet[2770]: E0420 15:31:05.528726 2770 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 15:31:07.415595 kubelet[2770]: I0420 15:31:07.414932 2770 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:31:08.356335 kubelet[2770]: E0420 15:31:08.355006 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:31:09.728880 kubelet[2770]: E0420 15:31:09.728018 2770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 20 15:31:12.181209 kubelet[2770]: E0420 15:31:12.180791 2770 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 15:31:12.181209 kubelet[2770]: E0420 15:31:12.181236 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:31:12.249059 kubelet[2770]: I0420 15:31:12.248095 2770 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 15:31:12.250288 kubelet[2770]: E0420 15:31:12.249672 2770 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 20 15:31:13.026412 kubelet[2770]: E0420 15:31:13.023149 2770 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a81a49f19832be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,LastTimestamp:2026-04-20 15:29:17.885878974 +0000 UTC m=+1.925951829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:31:13.193924 kubelet[2770]: E0420 15:31:13.192646 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:13.330313 kubelet[2770]: E0420 15:31:13.319504 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:13.445073 kubelet[2770]: E0420 15:31:13.443254 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not 
found" Apr 20 15:31:13.554645 kubelet[2770]: E0420 15:31:13.553888 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:13.720585 kubelet[2770]: E0420 15:31:13.667621 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:13.782283 kubelet[2770]: E0420 15:31:13.781682 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:13.951221 kubelet[2770]: E0420 15:31:13.938916 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.111717 kubelet[2770]: E0420 15:31:14.058180 2770 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a81a49f67fe05a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 15:29:17.968171098 +0000 UTC m=+2.008243959,LastTimestamp:2026-04-20 15:29:17.968171098 +0000 UTC m=+2.008243959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 15:31:14.120680 kubelet[2770]: E0420 15:31:14.120180 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.234075 kubelet[2770]: E0420 15:31:14.228462 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.335123 kubelet[2770]: E0420 15:31:14.333619 2770 kubelet_node_status.go:404] "Error getting the current 
node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.445063 kubelet[2770]: E0420 15:31:14.442934 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.564678 kubelet[2770]: E0420 15:31:14.564020 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.709782 kubelet[2770]: E0420 15:31:14.704525 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.850407 kubelet[2770]: E0420 15:31:14.817885 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:14.922932 kubelet[2770]: E0420 15:31:14.921306 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.087827 kubelet[2770]: E0420 15:31:15.059331 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.201747 kubelet[2770]: E0420 15:31:15.199164 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.312860 kubelet[2770]: E0420 15:31:15.309043 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.424954 kubelet[2770]: E0420 15:31:15.415164 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.541867 kubelet[2770]: E0420 15:31:15.540344 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.769115 kubelet[2770]: E0420 15:31:15.756702 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:15.994230 kubelet[2770]: E0420 
15:31:15.993194 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.167075 kubelet[2770]: E0420 15:31:16.159303 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.287362 kubelet[2770]: E0420 15:31:16.278921 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.405877 kubelet[2770]: E0420 15:31:16.405155 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.584145 kubelet[2770]: E0420 15:31:16.542326 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.748141 kubelet[2770]: E0420 15:31:16.747293 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.867940 kubelet[2770]: E0420 15:31:16.850364 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:16.970205 kubelet[2770]: E0420 15:31:16.964698 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:17.190977 kubelet[2770]: E0420 15:31:17.108242 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:17.366717 kubelet[2770]: E0420 15:31:17.363511 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:17.544271 kubelet[2770]: E0420 15:31:17.534721 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:17.647161 kubelet[2770]: E0420 15:31:17.637970 2770 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"localhost\" not found" Apr 20 15:31:17.745901 kubelet[2770]: E0420 15:31:17.744092 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:17.869334 kubelet[2770]: E0420 15:31:17.859126 2770 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 15:31:17.986896 kubelet[2770]: E0420 15:31:17.985841 2770 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 20 15:31:18.407179 kubelet[2770]: E0420 15:31:18.405846 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:31:22.742230 kubelet[2770]: E0420 15:31:22.741246 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:23.257026 kubelet[2770]: E0420 15:31:23.255044 2770 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 20 15:31:28.048488 kubelet[2770]: E0420 15:31:28.039487 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:28.415853 kubelet[2770]: E0420 15:31:28.415057 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 15:31:33.264690 kubelet[2770]: E0420 15:31:33.263834 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:33.893056 kubelet[2770]: E0420 15:31:33.891295 2770 kubelet_node_status.go:486] "Error updating node status, will retry" 
err="error getting node \"localhost\": node \"localhost\" not found" Apr 20 15:31:37.319371 kubelet[2770]: I0420 15:31:37.308112 2770 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:31:37.493583 kubelet[2770]: I0420 15:31:37.447479 2770 apiserver.go:52] "Watching apiserver" Apr 20 15:31:37.704835 kubelet[2770]: I0420 15:31:37.703075 2770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 15:31:38.167099 kubelet[2770]: I0420 15:31:38.133243 2770 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 15:31:38.341316 kubelet[2770]: E0420 15:31:38.340601 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:38.561630 kubelet[2770]: I0420 15:31:38.540285 2770 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 15:31:38.782086 kubelet[2770]: E0420 15:31:38.781482 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:31:38.790535 kubelet[2770]: E0420 15:31:38.789861 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:31:38.959083 kubelet[2770]: E0420 15:31:38.958037 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:31:43.391993 kubelet[2770]: E0420 15:31:43.390147 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" Apr 20 15:31:48.562856 kubelet[2770]: E0420 15:31:48.559826 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:51.916033 kubelet[2770]: I0420 15:31:51.911327 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=13.870177292 podStartE2EDuration="13.870177292s" podCreationTimestamp="2026-04-20 15:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:31:49.402206621 +0000 UTC m=+153.442279483" watchObservedRunningTime="2026-04-20 15:31:51.870177292 +0000 UTC m=+155.910250148" Apr 20 15:31:53.214558 kubelet[2770]: I0420 15:31:53.207776 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=16.207466744 podStartE2EDuration="16.207466744s" podCreationTimestamp="2026-04-20 15:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:31:51.922828738 +0000 UTC m=+155.962901608" watchObservedRunningTime="2026-04-20 15:31:53.207466744 +0000 UTC m=+157.247539610" Apr 20 15:31:53.365933 kubelet[2770]: I0420 15:31:53.215039 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=15.21482785 podStartE2EDuration="15.21482785s" podCreationTimestamp="2026-04-20 15:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:31:53.187056371 +0000 UTC m=+157.227129229" watchObservedRunningTime="2026-04-20 15:31:53.21482785 +0000 UTC m=+157.254900702" Apr 20 
15:31:53.727184 kubelet[2770]: E0420 15:31:53.726104 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:57.516647 kubelet[2770]: E0420 15:31:57.513946 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.467s" Apr 20 15:31:59.109995 kubelet[2770]: E0420 15:31:59.109463 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:31:59.427033 kubelet[2770]: E0420 15:31:59.426216 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.318s" Apr 20 15:32:05.243033 kubelet[2770]: E0420 15:32:05.233879 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.162s" Apr 20 15:32:05.602535 kubelet[2770]: E0420 15:32:05.582597 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:07.950988 kubelet[2770]: E0420 15:32:07.943108 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.705s" Apr 20 15:32:09.391351 kubelet[2770]: E0420 15:32:09.390062 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.376s" Apr 20 15:32:11.046095 kubelet[2770]: E0420 15:32:11.043000 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.64s" Apr 20 15:32:11.543147 kubelet[2770]: E0420 15:32:11.537704 2770 kubelet.go:3012] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:12.599275 kubelet[2770]: E0420 15:32:12.596955 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.548s" Apr 20 15:32:14.169842 kubelet[2770]: E0420 15:32:14.165491 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.562s" Apr 20 15:32:15.337561 kubelet[2770]: E0420 15:32:15.336696 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.167s" Apr 20 15:32:16.500331 systemd[1]: cri-containerd-1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9.scope: Deactivated successfully. Apr 20 15:32:16.607640 systemd[1]: cri-containerd-1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9.scope: Consumed 7.074s CPU time, 20.5M memory peak. Apr 20 15:32:17.294014 kubelet[2770]: E0420 15:32:17.292682 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:17.619245 kubelet[2770]: E0420 15:32:17.608775 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.558s" Apr 20 15:32:17.663174 containerd[1635]: time="2026-04-20T15:32:17.606561375Z" level=info msg="received container exit event container_id:\"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\" id:\"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\" pid:2990 exit_status:1 exited_at:{seconds:1776699136 nanos:878613025}" Apr 20 15:32:20.603755 kubelet[2770]: E0420 15:32:20.601174 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.556s" Apr 20 15:32:22.698872 
kubelet[2770]: E0420 15:32:22.698277 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:22.818496 kubelet[2770]: E0420 15:32:22.717799 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.113s" Apr 20 15:32:24.078826 kubelet[2770]: E0420 15:32:24.072934 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.355s" Apr 20 15:32:25.327028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9-rootfs.mount: Deactivated successfully. Apr 20 15:32:26.183114 kubelet[2770]: E0420 15:32:26.181488 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.108s" Apr 20 15:32:26.956301 kubelet[2770]: I0420 15:32:26.952619 2770 scope.go:117] "RemoveContainer" containerID="1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9" Apr 20 15:32:26.976053 kubelet[2770]: E0420 15:32:26.959846 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:32:27.527921 containerd[1635]: time="2026-04-20T15:32:27.525063649Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:1" Apr 20 15:32:28.562487 kubelet[2770]: E0420 15:32:28.553899 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:29.542927 kubelet[2770]: E0420 15:32:29.532065 2770 kubelet.go:2618] "Housekeeping took longer 
than expected" err="housekeeping took too long" expected="1s" actual="1.473s" Apr 20 15:32:31.412235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110589112.mount: Deactivated successfully. Apr 20 15:32:31.741058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1439445928.mount: Deactivated successfully. Apr 20 15:32:31.752040 containerd[1635]: time="2026-04-20T15:32:31.722965757Z" level=info msg="Container c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:32:33.916048 kubelet[2770]: E0420 15:32:33.898646 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:36.338889 containerd[1635]: time="2026-04-20T15:32:36.321334697Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:1 returns container id \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\"" Apr 20 15:32:36.941026 containerd[1635]: time="2026-04-20T15:32:36.860371200Z" level=info msg="StartContainer for \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\"" Apr 20 15:32:38.255803 kubelet[2770]: E0420 15:32:38.255059 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.197s" Apr 20 15:32:39.398782 kubelet[2770]: E0420 15:32:39.395168 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.139s" Apr 20 15:32:39.592187 kubelet[2770]: E0420 15:32:39.583248 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:39.964336 containerd[1635]: 
time="2026-04-20T15:32:39.949616358Z" level=info msg="connecting to shim c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3 Apr 20 15:32:41.596811 systemd[1]: Started cri-containerd-c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d.scope - libcontainer container c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d. Apr 20 15:32:45.433970 kubelet[2770]: E0420 15:32:45.426736 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:47.342906 containerd[1635]: time="2026-04-20T15:32:47.341478330Z" level=info msg="StartContainer for \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\" returns successfully" Apr 20 15:32:47.665827 kubelet[2770]: E0420 15:32:47.637287 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.565s" Apr 20 15:32:49.625949 kubelet[2770]: E0420 15:32:49.618017 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.558s" Apr 20 15:32:49.759137 kubelet[2770]: E0420 15:32:49.758024 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:32:51.133607 kubelet[2770]: E0420 15:32:51.125077 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:51.366923 kubelet[2770]: E0420 15:32:51.282553 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.228s" Apr 20 
15:32:51.891048 kubelet[2770]: E0420 15:32:51.889887 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:32:53.123995 kubelet[2770]: E0420 15:32:53.121669 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.084s" Apr 20 15:32:53.360903 kubelet[2770]: E0420 15:32:53.240631 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:32:53.637262 kubelet[2770]: E0420 15:32:53.632310 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:32:55.356259 kubelet[2770]: E0420 15:32:55.338999 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:32:56.446076 kubelet[2770]: E0420 15:32:56.434797 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:32:57.190582 kubelet[2770]: E0420 15:32:57.185956 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.143s" Apr 20 15:33:01.665498 kubelet[2770]: E0420 15:33:01.664980 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:05.184218 kubelet[2770]: E0420 15:33:05.172916 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.111s" Apr 
20 15:33:05.549824 kubelet[2770]: E0420 15:33:05.528833 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:33:07.045327 kubelet[2770]: E0420 15:33:07.044283 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:12.388346 kubelet[2770]: E0420 15:33:12.385985 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:17.151701 kubelet[2770]: E0420 15:33:17.146285 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.027s" Apr 20 15:33:18.326035 kubelet[2770]: E0420 15:33:18.322653 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:19.345090 kubelet[2770]: E0420 15:33:19.342058 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.299s" Apr 20 15:33:21.329313 kubelet[2770]: E0420 15:33:21.324963 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.287s" Apr 20 15:33:23.157930 systemd[1]: cri-containerd-c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d.scope: Deactivated successfully. Apr 20 15:33:23.226096 systemd[1]: cri-containerd-c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d.scope: Consumed 6.092s CPU time, 20.3M memory peak. 
Apr 20 15:33:23.516936 containerd[1635]: time="2026-04-20T15:33:23.439525343Z" level=info msg="received container exit event container_id:\"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\" id:\"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\" pid:3097 exit_status:1 exited_at:{seconds:1776699203 nanos:157441843}" Apr 20 15:33:23.664760 kubelet[2770]: E0420 15:33:23.661348 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:25.056327 kubelet[2770]: E0420 15:33:25.055764 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.018s" Apr 20 15:33:26.238709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d-rootfs.mount: Deactivated successfully. Apr 20 15:33:28.266435 kubelet[2770]: I0420 15:33:28.265766 2770 scope.go:117] "RemoveContainer" containerID="1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9" Apr 20 15:33:28.309556 kubelet[2770]: I0420 15:33:28.269758 2770 scope.go:117] "RemoveContainer" containerID="c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" Apr 20 15:33:28.309556 kubelet[2770]: E0420 15:33:28.294065 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:33:28.309556 kubelet[2770]: E0420 15:33:28.300787 2770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" 
podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 20 15:33:29.009008 containerd[1635]: time="2026-04-20T15:33:28.997188983Z" level=info msg="RemoveContainer for \"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\"" Apr 20 15:33:29.149360 kubelet[2770]: E0420 15:33:29.129782 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:31.963655 kubelet[2770]: E0420 15:33:31.961961 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.893s" Apr 20 15:33:34.060073 kubelet[2770]: E0420 15:33:34.058904 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.992s" Apr 20 15:33:34.322802 kubelet[2770]: I0420 15:33:34.248686 2770 scope.go:117] "RemoveContainer" containerID="c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" Apr 20 15:33:34.322802 kubelet[2770]: E0420 15:33:34.249988 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:33:34.536062 containerd[1635]: time="2026-04-20T15:33:34.514201199Z" level=info msg="RemoveContainer for \"1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9\" returns successfully" Apr 20 15:33:34.653824 kubelet[2770]: E0420 15:33:34.650212 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:36.630079 kubelet[2770]: E0420 15:33:36.625991 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.5s" Apr 20 15:33:37.092716 containerd[1635]: time="2026-04-20T15:33:37.086628304Z" level=info 
msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:2" Apr 20 15:33:38.143030 kubelet[2770]: E0420 15:33:38.138115 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.488s" Apr 20 15:33:39.476024 kubelet[2770]: E0420 15:33:39.474083 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.333s" Apr 20 15:33:39.769365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284367202.mount: Deactivated successfully. Apr 20 15:33:39.833091 kubelet[2770]: E0420 15:33:39.826775 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:40.065177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202624736.mount: Deactivated successfully. 
Apr 20 15:33:40.100604 containerd[1635]: time="2026-04-20T15:33:40.066637836Z" level=info msg="Container 3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:33:41.212783 kubelet[2770]: E0420 15:33:41.211778 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.164s" Apr 20 15:33:42.547218 containerd[1635]: time="2026-04-20T15:33:42.545853941Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:2 returns container id \"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\"" Apr 20 15:33:42.860849 containerd[1635]: time="2026-04-20T15:33:42.843899517Z" level=info msg="StartContainer for \"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\"" Apr 20 15:33:44.208936 containerd[1635]: time="2026-04-20T15:33:44.208185759Z" level=info msg="connecting to shim 3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3 Apr 20 15:33:44.992825 kubelet[2770]: E0420 15:33:44.992026 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:45.366161 kubelet[2770]: E0420 15:33:45.356228 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.314s" Apr 20 15:33:46.958061 systemd[1]: Started cri-containerd-3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e.scope - libcontainer container 3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e. 
Apr 20 15:33:48.062738 kubelet[2770]: E0420 15:33:48.058108 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.997s" Apr 20 15:33:50.470155 kubelet[2770]: E0420 15:33:50.466190 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:52.330173 containerd[1635]: time="2026-04-20T15:33:52.322171066Z" level=info msg="StartContainer for \"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\" returns successfully" Apr 20 15:33:55.096988 kubelet[2770]: E0420 15:33:55.093841 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:33:55.646261 kubelet[2770]: E0420 15:33:55.643316 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:33:56.765861 kubelet[2770]: E0420 15:33:56.765407 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:00.869251 kubelet[2770]: E0420 15:34:00.865667 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:01.167179 kubelet[2770]: E0420 15:34:01.163778 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:34:03.015193 kubelet[2770]: E0420 15:34:03.004843 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:03.534979 kubelet[2770]: E0420 15:34:03.524025 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:04.637012 kubelet[2770]: E0420 15:34:04.636037 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:06.224137 kubelet[2770]: E0420 15:34:06.223220 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:34:10.266663 systemd[1]: Reload requested from client PID 3179 ('systemctl') (unit session-6.scope)... Apr 20 15:34:10.281999 systemd[1]: Reloading... Apr 20 15:34:12.211041 kubelet[2770]: E0420 15:34:12.210026 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:34:12.260616 zram_generator::config[3234]: No configuration found. Apr 20 15:34:12.286887 systemd-ssh-generator[3229]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 15:34:12.308994 (sd-exec-[3210]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. 
Apr 20 15:34:17.138137 kubelet[2770]: E0420 15:34:17.135849 2770 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.059s" Apr 20 15:34:17.393877 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 15:34:17.577280 kubelet[2770]: E0420 15:34:17.576684 2770 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:34:19.602764 containerd[1635]: time="2026-04-20T15:34:19.529648882Z" level=info msg="container event discarded" container=36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063 type=CONTAINER_CREATED_EVENT Apr 20 15:34:19.898889 containerd[1635]: time="2026-04-20T15:34:19.786327757Z" level=info msg="container event discarded" container=36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063 type=CONTAINER_STARTED_EVENT Apr 20 15:34:19.905825 containerd[1635]: time="2026-04-20T15:34:19.900047474Z" level=info msg="container event discarded" container=3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08 type=CONTAINER_CREATED_EVENT Apr 20 15:34:19.905825 containerd[1635]: time="2026-04-20T15:34:19.901021382Z" level=info msg="container event discarded" container=3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08 type=CONTAINER_STARTED_EVENT Apr 20 15:34:19.905825 containerd[1635]: time="2026-04-20T15:34:19.901114496Z" level=info msg="container event discarded" container=1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980 type=CONTAINER_CREATED_EVENT Apr 20 15:34:19.905825 containerd[1635]: time="2026-04-20T15:34:19.901123751Z" level=info msg="container event discarded" container=1ed9cbcf97567ee301ca8abeff1d858942b912259ed14d453dc220808018e980 type=CONTAINER_STARTED_EVENT Apr 20 15:34:20.024860 containerd[1635]: 
time="2026-04-20T15:34:20.020820039Z" level=info msg="container event discarded" container=6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3 type=CONTAINER_CREATED_EVENT Apr 20 15:34:20.025903 containerd[1635]: time="2026-04-20T15:34:20.024820079Z" level=info msg="container event discarded" container=1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9 type=CONTAINER_CREATED_EVENT Apr 20 15:34:20.025903 containerd[1635]: time="2026-04-20T15:34:20.025712240Z" level=info msg="container event discarded" container=b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d type=CONTAINER_CREATED_EVENT Apr 20 15:34:20.917262 systemd[1]: Reloading finished in 10631 ms. Apr 20 15:34:21.238021 containerd[1635]: time="2026-04-20T15:34:21.235328377Z" level=info msg="container event discarded" container=1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9 type=CONTAINER_STARTED_EVENT Apr 20 15:34:21.328662 containerd[1635]: time="2026-04-20T15:34:21.250103574Z" level=info msg="container event discarded" container=b28a5680958d2f890a4e601418fc693e80656ff0c2a370aea88226798a4e1c4d type=CONTAINER_STARTED_EVENT Apr 20 15:34:21.633801 containerd[1635]: time="2026-04-20T15:34:21.615087607Z" level=info msg="container event discarded" container=6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3 type=CONTAINER_STARTED_EVENT Apr 20 15:34:22.540024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:34:22.571169 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 15:34:22.573623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 15:34:22.574079 systemd[1]: kubelet.service: Consumed 1min 35.207s CPU time, 138.4M memory peak. Apr 20 15:34:22.890003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 15:34:25.746224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 15:34:25.984860 (kubelet)[3279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 15:34:28.211738 kubelet[3279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 15:34:28.278319 kubelet[3279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 15:34:28.278319 kubelet[3279]: I0420 15:34:28.222115 3279 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 15:34:28.441974 kubelet[3279]: I0420 15:34:28.440964 3279 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 20 15:34:28.441974 kubelet[3279]: I0420 15:34:28.441165 3279 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 15:34:28.441974 kubelet[3279]: I0420 15:34:28.441308 3279 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 20 15:34:28.441974 kubelet[3279]: I0420 15:34:28.441336 3279 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 20 15:34:28.638722 kubelet[3279]: I0420 15:34:28.442912 3279 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 15:34:28.638722 kubelet[3279]: I0420 15:34:28.559837 3279 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 15:34:28.705276 kubelet[3279]: I0420 15:34:28.668465 3279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 15:34:30.420730 kubelet[3279]: I0420 15:34:30.418751 3279 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 15:34:30.916901 kubelet[3279]: I0420 15:34:30.916081 3279 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 20 15:34:30.930994 kubelet[3279]: I0420 15:34:30.927355 3279 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 15:34:30.949373 kubelet[3279]: I0420 15:34:30.930626 3279 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 15:34:30.949373 kubelet[3279]: I0420 15:34:30.936991 3279 topology_manager.go:138] "Creating topology manager with none policy" Apr 20 15:34:30.949373 kubelet[3279]: I0420 15:34:30.937012 3279 container_manager_linux.go:306] "Creating device plugin manager" Apr 20 15:34:30.949373 kubelet[3279]: I0420 15:34:30.937104 3279 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 20 15:34:30.949373 kubelet[3279]: I0420 15:34:30.945556 3279 state_mem.go:36] 
"Initialized new in-memory state store" Apr 20 15:34:31.113221 kubelet[3279]: I0420 15:34:30.955282 3279 kubelet.go:475] "Attempting to sync node with API server" Apr 20 15:34:31.113221 kubelet[3279]: I0420 15:34:30.955831 3279 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 15:34:31.113221 kubelet[3279]: I0420 15:34:30.956038 3279 kubelet.go:387] "Adding apiserver pod source" Apr 20 15:34:31.113221 kubelet[3279]: I0420 15:34:30.956144 3279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 15:34:31.414014 kubelet[3279]: I0420 15:34:31.412877 3279 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 15:34:31.541164 kubelet[3279]: I0420 15:34:31.540532 3279 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 15:34:31.615026 kubelet[3279]: I0420 15:34:31.553068 3279 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 20 15:34:31.848345 kubelet[3279]: I0420 15:34:31.840974 3279 server.go:1262] "Started kubelet" Apr 20 15:34:31.872308 kubelet[3279]: I0420 15:34:31.871545 3279 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 15:34:31.991188 kubelet[3279]: I0420 15:34:31.856804 3279 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 15:34:32.053692 kubelet[3279]: I0420 15:34:31.994296 3279 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 20 15:34:32.053692 kubelet[3279]: I0420 15:34:32.032214 3279 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 15:34:32.713081 kubelet[3279]: I0420 15:34:32.712333 3279 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Apr 20 15:34:33.025651 kubelet[3279]: I0420 15:34:32.718245 3279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 15:34:33.025651 kubelet[3279]: I0420 15:34:32.732907 3279 server.go:310] "Adding debug handlers to kubelet server" Apr 20 15:34:33.025651 kubelet[3279]: I0420 15:34:32.960167 3279 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 20 15:34:33.025651 kubelet[3279]: I0420 15:34:33.003097 3279 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 20 15:34:33.213324 kubelet[3279]: I0420 15:34:33.126279 3279 reconciler.go:29] "Reconciler: start to sync state" Apr 20 15:34:33.213324 kubelet[3279]: I0420 15:34:33.158586 3279 apiserver.go:52] "Watching apiserver" Apr 20 15:34:33.726981 kubelet[3279]: I0420 15:34:33.725684 3279 factory.go:223] Registration of the systemd container factory successfully Apr 20 15:34:34.005133 kubelet[3279]: I0420 15:34:33.805313 3279 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 15:34:34.082306 kubelet[3279]: W0420 15:34:34.060316 3279 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. 
Err: connection error: desc = "error reading server preface: read unix @->/run/containerd/containerd.sock: use of closed network connection" Apr 20 15:34:34.440044 kubelet[3279]: W0420 15:34:34.430355 3279 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix:///run/containerd/containerd.sock: timeout" Apr 20 15:34:35.554967 kubelet[3279]: I0420 15:34:35.554162 3279 factory.go:223] Registration of the containerd container factory successfully Apr 20 15:34:35.865054 kubelet[3279]: E0420 15:34:35.847726 3279 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 15:34:36.428103 kubelet[3279]: I0420 15:34:36.427746 3279 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 20 15:34:36.629808 kubelet[3279]: I0420 15:34:36.628000 3279 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 20 15:34:36.629808 kubelet[3279]: I0420 15:34:36.628154 3279 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 20 15:34:36.629808 kubelet[3279]: I0420 15:34:36.628302 3279 kubelet.go:2428] "Starting kubelet main sync loop" Apr 20 15:34:36.757300 kubelet[3279]: E0420 15:34:36.638976 3279 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 15:34:36.817808 kubelet[3279]: E0420 15:34:36.795702 3279 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 15:34:37.051290 kubelet[3279]: E0420 15:34:37.036174 3279 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 15:34:37.467185 kubelet[3279]: E0420 15:34:37.459285 3279 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 15:34:38.342462 kubelet[3279]: E0420 15:34:38.341265 3279 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 15:34:39.948802 kubelet[3279]: E0420 15:34:39.946831 3279 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 15:34:43.207100 kubelet[3279]: E0420 15:34:43.162683 3279 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 15:34:47.583072 kubelet[3279]: I0420 15:34:47.581728 3279 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 20 15:34:47.583072 kubelet[3279]: I0420 15:34:47.582068 3279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 20 15:34:47.583072 
kubelet[3279]: I0420 15:34:47.582153 3279 state_mem.go:36] "Initialized new in-memory state store" Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.712108 3279 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.716121 3279 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.717884 3279 policy_none.go:49] "None policy: Start" Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.718038 3279 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.718070 3279 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.815326 3279 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 20 15:34:48.058784 kubelet[3279]: I0420 15:34:47.816835 3279 policy_none.go:47] "Start" Apr 20 15:34:48.252990 kubelet[3279]: E0420 15:34:48.219147 3279 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 15:34:49.110358 kubelet[3279]: E0420 15:34:49.108364 3279 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 15:34:49.236250 kubelet[3279]: I0420 15:34:49.158949 3279 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 20 15:34:49.236250 kubelet[3279]: I0420 15:34:49.159304 3279 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 15:34:49.356240 kubelet[3279]: I0420 15:34:49.355933 3279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 20 15:34:50.676231 kubelet[3279]: E0420 15:34:50.675295 3279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 20 15:34:52.826420 kubelet[3279]: I0420 15:34:52.822354 3279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 15:34:53.438948 kubelet[3279]: I0420 15:34:53.438145 3279 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:34:53.499872 kubelet[3279]: I0420 15:34:53.445237 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/087ce51c238cac7808178dd7f5c26d13-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"087ce51c238cac7808178dd7f5c26d13\") " pod="kube-system/kube-apiserver-localhost" Apr 20 15:34:53.499872 kubelet[3279]: I0420 15:34:53.468061 3279 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 15:34:53.539077 kubelet[3279]: I0420 15:34:53.468140 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/087ce51c238cac7808178dd7f5c26d13-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"087ce51c238cac7808178dd7f5c26d13\") " pod="kube-system/kube-apiserver-localhost" Apr 20 15:34:53.598057 kubelet[3279]: I0420 15:34:53.597051 3279 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 20 15:34:53.753211 kubelet[3279]: I0420 15:34:53.679594 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/087ce51c238cac7808178dd7f5c26d13-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"087ce51c238cac7808178dd7f5c26d13\") " pod="kube-system/kube-apiserver-localhost" Apr 20 15:34:53.939230 kubelet[3279]: I0420 15:34:53.919344 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:34:54.362696 kubelet[3279]: I0420 15:34:54.358907 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:34:54.446161 kubelet[3279]: I0420 15:34:54.445553 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 20 15:34:54.765991 kubelet[3279]: I0420 15:34:54.744309 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:34:54.932754 kubelet[3279]: I0420 15:34:54.799736 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:34:54.932754 kubelet[3279]: I0420 15:34:54.822009 3279 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 15:34:55.222767 kubelet[3279]: E0420 15:34:55.221894 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:56.116959 kubelet[3279]: E0420 15:34:56.115285 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.477s" Apr 20 15:34:57.334233 systemd[1]: cri-containerd-3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e.scope: Deactivated successfully. Apr 20 15:34:57.431327 systemd[1]: cri-containerd-3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e.scope: Consumed 13.965s CPU time, 20.7M memory peak. 
Apr 20 15:34:57.570931 containerd[1635]: time="2026-04-20T15:34:57.554187887Z" level=info msg="received container exit event container_id:\"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\" id:\"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\" pid:3155 exit_status:1 exited_at:{seconds:1776699297 nanos:337138101}" Apr 20 15:34:57.669959 kubelet[3279]: E0420 15:34:57.610715 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:34:58.378197 kubelet[3279]: E0420 15:34:58.375730 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.72s" Apr 20 15:34:58.611450 kubelet[3279]: I0420 15:34:58.605338 3279 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 20 15:34:58.611450 kubelet[3279]: I0420 15:34:58.618681 3279 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 15:34:59.160832 kubelet[3279]: E0420 15:34:59.134212 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:00.132062 kubelet[3279]: E0420 15:35:00.121136 3279 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 20 15:35:00.437673 kubelet[3279]: E0420 15:35:00.435084 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:00.907926 kubelet[3279]: E0420 15:35:00.813134 3279 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 20 15:35:00.907926 
kubelet[3279]: E0420 15:35:00.820212 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:01.229220 kubelet[3279]: E0420 15:35:01.216997 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.581s" Apr 20 15:35:03.577627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e-rootfs.mount: Deactivated successfully. Apr 20 15:35:03.628705 kubelet[3279]: E0420 15:35:03.626551 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.163s" Apr 20 15:35:04.688062 kubelet[3279]: E0420 15:35:04.686572 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s" Apr 20 15:35:14.155086 kubelet[3279]: E0420 15:35:14.149275 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.364s" Apr 20 15:35:17.253833 kubelet[3279]: E0420 15:35:17.245796 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.082s" Apr 20 15:35:17.918899 kubelet[3279]: E0420 15:35:17.917991 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:18.924184 kubelet[3279]: E0420 15:35:18.919426 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:19.115358 kubelet[3279]: I0420 15:35:19.091114 3279 scope.go:117] "RemoveContainer" containerID="3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e" Apr 20 
15:35:19.161018 kubelet[3279]: E0420 15:35:19.144347 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:19.394265 kubelet[3279]: E0420 15:35:19.391049 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.746s" Apr 20 15:35:19.592152 kubelet[3279]: I0420 15:35:19.568308 3279 scope.go:117] "RemoveContainer" containerID="c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" Apr 20 15:35:21.671504 kubelet[3279]: E0420 15:35:21.660129 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:22.413095 kubelet[3279]: E0420 15:35:22.408222 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.016s" Apr 20 15:35:26.857892 containerd[1635]: time="2026-04-20T15:35:26.824093330Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:3" Apr 20 15:35:27.828341 containerd[1635]: time="2026-04-20T15:35:27.827559098Z" level=info msg="RemoveContainer for \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\"" Apr 20 15:35:29.280646 kubelet[3279]: E0420 15:35:29.279876 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.786s" Apr 20 15:35:30.214489 kubelet[3279]: E0420 15:35:30.205147 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:34.613702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475034630.mount: Deactivated 
successfully. Apr 20 15:35:35.600425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311670055.mount: Deactivated successfully. Apr 20 15:35:35.963249 containerd[1635]: time="2026-04-20T15:35:35.793016431Z" level=info msg="Container 5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:35:37.870116 systemd[1]: cri-containerd-6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3.scope: Deactivated successfully. Apr 20 15:35:37.919238 systemd[1]: cri-containerd-6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3.scope: Consumed 53.285s CPU time, 25.8M memory peak, 1.6M read from disk. Apr 20 15:35:38.317187 kubelet[3279]: E0420 15:35:38.315032 3279 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice/cri-containerd-6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3.scope\": RecentStats: unable to find data in memory cache]" Apr 20 15:35:39.119128 kubelet[3279]: I0420 15:35:39.115104 3279 scope.go:117] "RemoveContainer" containerID="c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" Apr 20 15:35:39.284272 containerd[1635]: time="2026-04-20T15:35:39.263359139Z" level=info msg="received container exit event container_id:\"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" id:\"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" pid:3000 exit_status:1 exited_at:{seconds:1776699337 nanos:919243257}" Apr 20 15:35:39.558011 kubelet[3279]: E0420 15:35:39.556460 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.827s" Apr 20 15:35:41.343262 containerd[1635]: time="2026-04-20T15:35:41.326252394Z" level=info msg="RemoveContainer for 
\"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\" returns successfully" Apr 20 15:35:41.692882 containerd[1635]: time="2026-04-20T15:35:41.689336232Z" level=error msg="ContainerStatus for \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\": not found" Apr 20 15:35:41.960596 kubelet[3279]: E0420 15:35:41.932975 3279 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\": not found" containerID="c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" Apr 20 15:35:42.116347 kubelet[3279]: E0420 15:35:42.005674 3279 kuberuntime_gc.go:151] "Failed to remove container" err="failed to get container status \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d\": not found" containerID="c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d" Apr 20 15:35:48.503004 kubelet[3279]: E0420 15:35:48.500034 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.885s" Apr 20 15:35:49.598614 containerd[1635]: time="2026-04-20T15:35:49.577910006Z" level=error msg="failed to delete task" error="context deadline exceeded" id=6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3 Apr 20 15:35:49.620362 containerd[1635]: time="2026-04-20T15:35:49.612491782Z" level=error msg="failed to handle container TaskExit event container_id:\"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" id:\"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" pid:3000 
exit_status:1 exited_at:{seconds:1776699337 nanos:919243257}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 15:35:49.633618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3-rootfs.mount: Deactivated successfully. Apr 20 15:35:49.930972 containerd[1635]: time="2026-04-20T15:35:49.702713032Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 20 15:35:51.307139 kubelet[3279]: E0420 15:35:51.299325 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.798s" Apr 20 15:35:51.388281 containerd[1635]: time="2026-04-20T15:35:51.374055650Z" level=info msg="TaskExit event container_id:\"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" id:\"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" pid:3000 exit_status:1 exited_at:{seconds:1776699337 nanos:919243257}" Apr 20 15:35:51.429828 containerd[1635]: time="2026-04-20T15:35:51.429565035Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:3 returns container id \"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\"" Apr 20 15:35:51.449359 kubelet[3279]: E0420 15:35:51.447622 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:51.714955 containerd[1635]: time="2026-04-20T15:35:51.627595236Z" level=info msg="StartContainer for \"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\"" Apr 20 15:35:52.532868 containerd[1635]: time="2026-04-20T15:35:52.523791562Z" level=info msg="connecting to shim 5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c" 
address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3 Apr 20 15:35:53.293132 systemd[1]: Started cri-containerd-5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c.scope - libcontainer container 5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c. Apr 20 15:35:56.919846 kubelet[3279]: E0420 15:35:56.915469 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.281s" Apr 20 15:35:57.375290 kubelet[3279]: I0420 15:35:57.374099 3279 scope.go:117] "RemoveContainer" containerID="6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3" Apr 20 15:35:57.375290 kubelet[3279]: E0420 15:35:57.374431 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:35:57.839034 containerd[1635]: time="2026-04-20T15:35:57.827963423Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for container name:\"kube-scheduler\" attempt:1" Apr 20 15:36:00.245004 kubelet[3279]: E0420 15:36:00.243318 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.598s" Apr 20 15:36:02.724310 containerd[1635]: time="2026-04-20T15:36:02.718192240Z" level=info msg="StartContainer for \"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" returns successfully" Apr 20 15:36:02.753145 sudo[1826]: pam_unix(sudo:session): session closed for user root Apr 20 15:36:02.805362 sshd[1825]: Connection closed by 10.0.0.1 port 45634 Apr 20 15:36:02.812170 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Apr 20 15:36:03.186585 systemd[1]: sshd@4-3-10.0.0.22:22-10.0.0.1:45634.service: Deactivated successfully. 
Apr 20 15:36:03.419522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852410679.mount: Deactivated successfully. Apr 20 15:36:04.368171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027037936.mount: Deactivated successfully. Apr 20 15:36:04.633236 systemd[1]: session-6.scope: Deactivated successfully. Apr 20 15:36:04.716301 systemd[1]: session-6.scope: Consumed 1min 14.175s CPU time, 222.6M memory peak. Apr 20 15:36:04.838804 containerd[1635]: time="2026-04-20T15:36:04.605180224Z" level=info msg="Container 6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:36:04.970316 systemd-logind[1616]: Session 6 logged out. Waiting for processes to exit. Apr 20 15:36:05.647994 systemd-logind[1616]: Removed session 6. Apr 20 15:36:08.512317 kubelet[3279]: E0420 15:36:08.400175 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.731s" Apr 20 15:36:10.625122 kubelet[3279]: E0420 15:36:10.618583 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.107s" Apr 20 15:36:13.334185 kubelet[3279]: E0420 15:36:13.329154 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.693s" Apr 20 15:36:15.941974 kubelet[3279]: E0420 15:36:15.933726 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.592s" Apr 20 15:36:16.310100 containerd[1635]: time="2026-04-20T15:36:16.158328694Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for name:\"kube-scheduler\" attempt:1 returns container id \"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\"" Apr 20 15:36:16.954571 containerd[1635]: time="2026-04-20T15:36:16.950520022Z" level=info msg="StartContainer for 
\"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\"" Apr 20 15:36:18.819140 kubelet[3279]: E0420 15:36:18.774707 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.272s" Apr 20 15:36:19.962208 containerd[1635]: time="2026-04-20T15:36:19.762506769Z" level=info msg="connecting to shim 6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67" address="unix:///run/containerd/s/909e6cfc558d4fc91d810de80342fc4f6713a6b29dc9d1596b2d8f2a2ab41cb6" protocol=ttrpc version=3 Apr 20 15:36:21.229115 kubelet[3279]: E0420 15:36:21.228324 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.41s" Apr 20 15:36:21.928521 kubelet[3279]: E0420 15:36:21.903603 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:22.658562 systemd[1]: Started cri-containerd-6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67.scope - libcontainer container 6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67. 
Apr 20 15:36:23.482726 kubelet[3279]: E0420 15:36:23.459945 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.222s" Apr 20 15:36:25.347242 kubelet[3279]: E0420 15:36:25.341211 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:25.592226 kubelet[3279]: E0420 15:36:25.539498 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.95s" Apr 20 15:36:27.609819 kubelet[3279]: E0420 15:36:27.579137 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.92s" Apr 20 15:36:29.442281 kubelet[3279]: E0420 15:36:29.438028 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.826s" Apr 20 15:36:30.687245 kubelet[3279]: E0420 15:36:30.684792 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.239s" Apr 20 15:36:33.634538 kubelet[3279]: E0420 15:36:33.633967 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.881s" Apr 20 15:36:33.824353 kubelet[3279]: E0420 15:36:33.783039 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:33.880850 kubelet[3279]: E0420 15:36:33.873753 3279 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 20 15:36:34.071313 kubelet[3279]: E0420 15:36:34.070762 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:34.347328 
containerd[1635]: time="2026-04-20T15:36:34.316310961Z" level=info msg="StartContainer for \"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\" returns successfully" Apr 20 15:36:35.169172 kubelet[3279]: E0420 15:36:35.123203 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.475s" Apr 20 15:36:39.921690 kubelet[3279]: E0420 15:36:39.848828 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:36:41.256891 kubelet[3279]: E0420 15:36:41.228089 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:43.125753 kubelet[3279]: E0420 15:36:43.121056 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.726s" Apr 20 15:36:46.169788 kubelet[3279]: E0420 15:36:46.166340 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:36:46.520952 kubelet[3279]: E0420 15:36:46.379360 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.14s" Apr 20 15:36:46.607163 kubelet[3279]: E0420 15:36:46.606102 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:46.982028 kubelet[3279]: E0420 15:36:46.943227 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Apr 20 15:36:48.622757 systemd[1]: cri-containerd-5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c.scope: Deactivated successfully. Apr 20 15:36:48.660710 systemd[1]: cri-containerd-5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c.scope: Consumed 10.277s CPU time, 20.7M memory peak. Apr 20 15:36:49.501562 containerd[1635]: time="2026-04-20T15:36:49.500968139Z" level=info msg="received container exit event container_id:\"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" id:\"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" pid:3391 exit_status:1 exited_at:{seconds:1776699409 nanos:31063043}" Apr 20 15:36:49.744102 kubelet[3279]: E0420 15:36:49.699253 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.129s" Apr 20 15:36:50.112132 kubelet[3279]: E0420 15:36:50.108425 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:36:52.750281 kubelet[3279]: E0420 15:36:52.561363 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:36:53.169298 kubelet[3279]: E0420 15:36:53.167697 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.321s" Apr 20 15:36:56.298430 kubelet[3279]: E0420 15:36:56.295093 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.071s" Apr 20 15:36:59.840947 containerd[1635]: time="2026-04-20T15:36:59.827590556Z" level=error msg="failed to delete task" error="context deadline exceeded" id=5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c Apr 20 15:37:00.194652 containerd[1635]: 
time="2026-04-20T15:36:59.896538054Z" level=error msg="failed to handle container TaskExit event container_id:\"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" id:\"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" pid:3391 exit_status:1 exited_at:{seconds:1776699409 nanos:31063043}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 15:37:00.201136 kubelet[3279]: E0420 15:37:00.198800 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:00.203915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c-rootfs.mount: Deactivated successfully. Apr 20 15:37:00.531003 containerd[1635]: time="2026-04-20T15:37:00.122345227Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 20 15:37:00.672729 kubelet[3279]: E0420 15:37:00.561968 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.236s" Apr 20 15:37:01.449865 kubelet[3279]: E0420 15:37:01.431420 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:01.610232 containerd[1635]: time="2026-04-20T15:37:01.555298289Z" level=info msg="TaskExit event container_id:\"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" id:\"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" pid:3391 exit_status:1 exited_at:{seconds:1776699409 nanos:31063043}" Apr 20 15:37:03.309330 containerd[1635]: time="2026-04-20T15:37:03.248186464Z" level=error msg="failed to delete task" error="rpc error: code = NotFound desc = container not created: not found" 
id=5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c Apr 20 15:37:03.949369 kubelet[3279]: E0420 15:37:03.945836 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.229s" Apr 20 15:37:04.354343 containerd[1635]: time="2026-04-20T15:37:04.239268219Z" level=info msg="Ensure that container 5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c in task-service has been cleanup successfully" Apr 20 15:37:05.805194 kubelet[3279]: E0420 15:37:05.799256 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.831s" Apr 20 15:37:06.116674 kubelet[3279]: E0420 15:37:05.796192 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:12.607294 kubelet[3279]: E0420 15:37:12.605002 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:14.846097 kubelet[3279]: E0420 15:37:14.665153 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 15:37:16.429361 kubelet[3279]: E0420 15:37:16.421039 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.557s" Apr 20 15:37:18.144404 kubelet[3279]: E0420 15:37:18.143894 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:18.237085 kubelet[3279]: E0420 15:37:18.236648 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:18.435568 kubelet[3279]: E0420 15:37:18.414047 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.629s" Apr 20 15:37:18.499252 kubelet[3279]: I0420 15:37:18.439862 3279 scope.go:117] "RemoveContainer" containerID="3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e" Apr 20 15:37:18.535307 kubelet[3279]: I0420 15:37:18.508922 3279 scope.go:117] "RemoveContainer" containerID="5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c" Apr 20 15:37:18.539105 kubelet[3279]: E0420 15:37:18.537195 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:18.758302 containerd[1635]: time="2026-04-20T15:37:18.757745537Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:4" Apr 20 15:37:18.795457 containerd[1635]: time="2026-04-20T15:37:18.783541220Z" level=info msg="RemoveContainer for \"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\"" Apr 20 15:37:19.294681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083133015.mount: Deactivated successfully. 
Apr 20 15:37:19.486131 containerd[1635]: time="2026-04-20T15:37:19.484986757Z" level=info msg="RemoveContainer for \"3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e\" returns successfully" Apr 20 15:37:19.529804 kubelet[3279]: E0420 15:37:19.529702 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:19.529774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522400215.mount: Deactivated successfully. Apr 20 15:37:19.537696 containerd[1635]: time="2026-04-20T15:37:19.535271683Z" level=info msg="Container d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:37:21.382892 containerd[1635]: time="2026-04-20T15:37:21.382162476Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:4 returns container id \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\"" Apr 20 15:37:21.445467 containerd[1635]: time="2026-04-20T15:37:21.444515513Z" level=info msg="StartContainer for \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\"" Apr 20 15:37:21.757145 containerd[1635]: time="2026-04-20T15:37:21.730522505Z" level=info msg="connecting to shim d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3 Apr 20 15:37:23.092368 systemd[1]: Started cri-containerd-d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415.scope - libcontainer container d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415. 
Apr 20 15:37:23.371313 kubelet[3279]: E0420 15:37:23.369095 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:24.045656 containerd[1635]: time="2026-04-20T15:37:24.042982695Z" level=info msg="StartContainer for \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\" returns successfully" Apr 20 15:37:26.241783 kubelet[3279]: E0420 15:37:26.140251 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.496s" Apr 20 15:37:26.759669 containerd[1635]: time="2026-04-20T15:37:26.709356357Z" level=info msg="container event discarded" container=1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9 type=CONTAINER_STOPPED_EVENT Apr 20 15:37:28.535316 kubelet[3279]: E0420 15:37:28.515938 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.832s" Apr 20 15:37:29.807572 kubelet[3279]: E0420 15:37:29.805069 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:31.160228 kubelet[3279]: E0420 15:37:31.155860 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.524s" Apr 20 15:37:32.332504 kubelet[3279]: E0420 15:37:32.331890 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.172s" Apr 20 15:37:32.332504 kubelet[3279]: E0420 15:37:32.332636 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:34.636733 containerd[1635]: time="2026-04-20T15:37:34.568768852Z" level=info 
msg="container event discarded" container=c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d type=CONTAINER_CREATED_EVENT Apr 20 15:37:34.810300 kubelet[3279]: E0420 15:37:34.719874 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:36.151228 kubelet[3279]: E0420 15:37:36.098781 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:37.017219 kubelet[3279]: E0420 15:37:37.015112 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.354s" Apr 20 15:37:39.866005 kubelet[3279]: E0420 15:37:39.850685 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s" Apr 20 15:37:41.948343 kubelet[3279]: E0420 15:37:41.939692 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:43.172983 kubelet[3279]: E0420 15:37:43.171320 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.452s" Apr 20 15:37:45.772868 kubelet[3279]: E0420 15:37:45.754886 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.54s" Apr 20 15:37:46.160018 containerd[1635]: time="2026-04-20T15:37:46.068668681Z" level=info msg="container event discarded" container=c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d type=CONTAINER_STARTED_EVENT Apr 20 15:37:46.810878 kubelet[3279]: E0420 15:37:46.802890 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:47.422341 kubelet[3279]: E0420 15:37:47.419926 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:48.220799 kubelet[3279]: E0420 15:37:48.215575 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:37:49.736846 kubelet[3279]: E0420 15:37:49.666898 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.213s" Apr 20 15:37:53.102324 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 20 15:37:55.330086 systemd-tmpfiles[3514]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 15:37:55.330101 systemd-tmpfiles[3514]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 15:37:55.468573 systemd-tmpfiles[3514]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 15:37:55.623064 systemd-tmpfiles[3514]: ACLs are not supported, ignoring. Apr 20 15:37:55.623901 systemd-tmpfiles[3514]: ACLs are not supported, ignoring. Apr 20 15:37:55.959798 systemd-tmpfiles[3514]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 15:37:55.965248 systemd-tmpfiles[3514]: Skipping /boot Apr 20 15:37:56.370404 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 20 15:37:56.431078 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 20 15:37:56.446036 systemd[1]: systemd-tmpfiles-clean.service: Consumed 1.061s CPU time, 4.5M memory peak. 
Apr 20 15:37:56.932916 kubelet[3279]: E0420 15:37:56.873362 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:37:57.528475 kubelet[3279]: E0420 15:37:57.524063 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:00.273532 kubelet[3279]: E0420 15:38:00.244222 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.344s" Apr 20 15:38:03.705142 kubelet[3279]: E0420 15:38:03.687290 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:03.853014 kubelet[3279]: E0420 15:38:03.852003 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.403s" Apr 20 15:38:06.292035 kubelet[3279]: E0420 15:38:06.267585 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.401s" Apr 20 15:38:09.132082 kubelet[3279]: E0420 15:38:09.069812 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.775s" Apr 20 15:38:09.528785 kubelet[3279]: E0420 15:38:09.501982 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:11.026881 kubelet[3279]: E0420 15:38:11.009085 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.874s" Apr 20 15:38:12.924983 kubelet[3279]: E0420 15:38:12.917458 3279 kubelet.go:2618] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.872s" Apr 20 15:38:14.309117 kubelet[3279]: E0420 15:38:14.308041 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.29s" Apr 20 15:38:15.430539 kubelet[3279]: E0420 15:38:15.354881 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:16.350992 kubelet[3279]: E0420 15:38:16.325193 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.679s" Apr 20 15:38:20.766339 kubelet[3279]: E0420 15:38:20.723866 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.987s" Apr 20 15:38:21.806554 kubelet[3279]: E0420 15:38:21.800240 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:23.498800 kubelet[3279]: E0420 15:38:23.497579 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.629s" Apr 20 15:38:25.613907 kubelet[3279]: E0420 15:38:25.611194 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.104s" Apr 20 15:38:26.832846 kubelet[3279]: E0420 15:38:26.829799 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.199s" Apr 20 15:38:27.155193 containerd[1635]: time="2026-04-20T15:38:26.989252932Z" level=info msg="container event discarded" container=c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d type=CONTAINER_STOPPED_EVENT Apr 20 15:38:27.511852 kubelet[3279]: E0420 
15:38:27.497868 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:29.044865 kubelet[3279]: E0420 15:38:29.040435 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.189s" Apr 20 15:38:30.816990 kubelet[3279]: E0420 15:38:30.816194 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.775s" Apr 20 15:38:32.372186 kubelet[3279]: E0420 15:38:32.356972 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.457s" Apr 20 15:38:32.695157 systemd[1]: cri-containerd-d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415.scope: Deactivated successfully. Apr 20 15:38:32.762724 systemd[1]: cri-containerd-d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415.scope: Consumed 8.570s CPU time, 18.5M memory peak. 
Apr 20 15:38:33.020137 kubelet[3279]: E0420 15:38:32.719966 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:38:33.158953 containerd[1635]: time="2026-04-20T15:38:33.064333823Z" level=info msg="received container exit event container_id:\"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\" id:\"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\" pid:3491 exit_status:1 exited_at:{seconds:1776699512 nanos:858334001}" Apr 20 15:38:33.333085 kubelet[3279]: E0420 15:38:33.267050 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:34.651278 containerd[1635]: time="2026-04-20T15:38:34.556188487Z" level=info msg="container event discarded" container=1bef1508aab3f88c63cd1b634e1d1b348f685299e51ff9d42f8592d04628f0d9 type=CONTAINER_DELETED_EVENT Apr 20 15:38:36.297008 kubelet[3279]: E0420 15:38:36.295615 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.666s" Apr 20 15:38:38.353136 kubelet[3279]: E0420 15:38:38.351059 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.055s" Apr 20 15:38:38.560280 systemd[1]: cri-containerd-6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67.scope: Deactivated successfully. Apr 20 15:38:38.612708 systemd[1]: cri-containerd-6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67.scope: Consumed 43.637s CPU time, 20.4M memory peak. 
Apr 20 15:38:40.463885 containerd[1635]: time="2026-04-20T15:38:40.459002311Z" level=info msg="received container exit event container_id:\"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\" id:\"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\" pid:3437 exit_status:1 exited_at:{seconds:1776699519 nanos:330177853}" Apr 20 15:38:40.934205 kubelet[3279]: E0420 15:38:40.925215 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:38:41.072167 kubelet[3279]: E0420 15:38:41.071171 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.7s" Apr 20 15:38:42.040086 containerd[1635]: time="2026-04-20T15:38:41.981788490Z" level=info msg="container event discarded" container=3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e type=CONTAINER_CREATED_EVENT Apr 20 15:38:42.921703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415-rootfs.mount: Deactivated successfully. 
Apr 20 15:38:44.401795 kubelet[3279]: E0420 15:38:44.399750 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.276s"
Apr 20 15:38:44.721278 kubelet[3279]: E0420 15:38:44.703601 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:38:45.710806 kubelet[3279]: E0420 15:38:45.709985 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.149s"
Apr 20 15:38:46.762085 kubelet[3279]: E0420 15:38:46.758901 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:38:47.137283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67-rootfs.mount: Deactivated successfully.
Apr 20 15:38:47.895197 kubelet[3279]: E0420 15:38:47.893032 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.165s"
Apr 20 15:38:49.222127 kubelet[3279]: E0420 15:38:49.215244 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.264s"
Apr 20 15:38:51.255015 containerd[1635]: time="2026-04-20T15:38:51.140106738Z" level=info msg="container event discarded" container=3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e type=CONTAINER_STARTED_EVENT
Apr 20 15:38:51.792049 kubelet[3279]: E0420 15:38:51.776217 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.532s"
Apr 20 15:38:51.905103 kubelet[3279]: I0420 15:38:51.890340 3279 scope.go:117] "RemoveContainer" containerID="5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c"
Apr 20 15:38:51.905103 kubelet[3279]: I0420 15:38:51.892244 3279 scope.go:117] "RemoveContainer" containerID="d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415"
Apr 20 15:38:51.905103 kubelet[3279]: E0420 15:38:51.903339 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:38:52.070276 kubelet[3279]: E0420 15:38:52.064353 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:38:52.497397 kubelet[3279]: E0420 15:38:52.482102 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:38:53.350973 kubelet[3279]: E0420 15:38:53.350290 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.543s"
Apr 20 15:38:53.579887 containerd[1635]: time="2026-04-20T15:38:53.578692629Z" level=info msg="RemoveContainer for \"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\""
Apr 20 15:38:54.651928 kubelet[3279]: E0420 15:38:54.651130 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.281s"
Apr 20 15:38:56.001535 kubelet[3279]: I0420 15:38:56.000846 3279 scope.go:117] "RemoveContainer" containerID="6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67"
Apr 20 15:38:56.129594 kubelet[3279]: E0420 15:38:56.128961 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:38:57.672002 containerd[1635]: time="2026-04-20T15:38:57.671222090Z" level=info msg="RemoveContainer for \"5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c\" returns successfully"
Apr 20 15:38:57.835072 kubelet[3279]: E0420 15:38:57.817876 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s"
Apr 20 15:38:58.008185 kubelet[3279]: I0420 15:38:57.962224 3279 scope.go:117] "RemoveContainer" containerID="6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3"
Apr 20 15:38:58.008185 kubelet[3279]: E0420 15:38:57.962140 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:38:58.327879 containerd[1635]: time="2026-04-20T15:38:58.313231242Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for container name:\"kube-scheduler\" attempt:2"
Apr 20 15:38:59.290265 kubelet[3279]: I0420 15:38:59.288202 3279 scope.go:117] "RemoveContainer" containerID="6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3"
Apr 20 15:39:00.013044 containerd[1635]: time="2026-04-20T15:38:59.991243338Z" level=info msg="RemoveContainer for \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\""
Apr 20 15:39:02.022893 containerd[1635]: time="2026-04-20T15:39:02.011330815Z" level=info msg="RemoveContainer for \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\""
Apr 20 15:39:02.300900 containerd[1635]: time="2026-04-20T15:39:02.289300204Z" level=error msg="RemoveContainer for \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" failed" error="rpc error: code = Unknown desc = failed to set removing state for container \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\": container is already in removing state"
Apr 20 15:39:02.734085 kubelet[3279]: E0420 15:39:02.728230 3279 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\": container is already in removing state" containerID="6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3"
Apr 20 15:39:02.858253 kubelet[3279]: E0420 15:39:02.728373 3279 kuberuntime_gc.go:151] "Failed to remove container" err="rpc error: code = Unknown desc = failed to set removing state for container \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\": container is already in removing state" containerID="6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3"
Apr 20 15:39:02.854271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726898922.mount: Deactivated successfully.
Apr 20 15:39:04.233902 containerd[1635]: time="2026-04-20T15:39:04.222342280Z" level=info msg="Container 3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:39:04.500366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231497665.mount: Deactivated successfully.
Apr 20 15:39:04.854347 kubelet[3279]: E0420 15:39:04.819248 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:05.440000 kubelet[3279]: E0420 15:39:05.438644 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.787s"
Apr 20 15:39:05.537759 containerd[1635]: time="2026-04-20T15:39:05.536609525Z" level=info msg="RemoveContainer for \"6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3\" returns successfully"
Apr 20 15:39:05.627704 kubelet[3279]: E0420 15:39:05.626713 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:39:05.746852 kubelet[3279]: I0420 15:39:05.671241 3279 scope.go:117] "RemoveContainer" containerID="d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415"
Apr 20 15:39:05.776665 kubelet[3279]: E0420 15:39:05.753279 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:39:08.016254 containerd[1635]: time="2026-04-20T15:39:08.004321391Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:5"
Apr 20 15:39:08.650981 kubelet[3279]: E0420 15:39:08.649326 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.008s"
Apr 20 15:39:10.715026 kubelet[3279]: E0420 15:39:10.714015 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:10.870737 kubelet[3279]: E0420 15:39:10.833289 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.172s"
Apr 20 15:39:11.274238 containerd[1635]: time="2026-04-20T15:39:11.241160774Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for name:\"kube-scheduler\" attempt:2 returns container id \"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\""
Apr 20 15:39:12.024349 containerd[1635]: time="2026-04-20T15:39:11.995123253Z" level=info msg="StartContainer for \"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\""
Apr 20 15:39:12.635569 kubelet[3279]: E0420 15:39:12.628976 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.716s"
Apr 20 15:39:13.351210 containerd[1635]: time="2026-04-20T15:39:13.344353980Z" level=info msg="connecting to shim 3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c" address="unix:///run/containerd/s/909e6cfc558d4fc91d810de80342fc4f6713a6b29dc9d1596b2d8f2a2ab41cb6" protocol=ttrpc version=3
Apr 20 15:39:14.402673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763074312.mount: Deactivated successfully.
Apr 20 15:39:15.821331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800718080.mount: Deactivated successfully.
Apr 20 15:39:16.089701 containerd[1635]: time="2026-04-20T15:39:16.085421368Z" level=info msg="Container 2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:39:16.979692 kubelet[3279]: E0420 15:39:16.978214 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:17.690963 kubelet[3279]: E0420 15:39:17.671286 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.911s"
Apr 20 15:39:18.356212 systemd[1]: Started cri-containerd-3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c.scope - libcontainer container 3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c.
Apr 20 15:39:22.300343 kubelet[3279]: E0420 15:39:22.297124 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.493s"
Apr 20 15:39:22.415092 kubelet[3279]: E0420 15:39:22.407775 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:23.430938 kubelet[3279]: E0420 15:39:23.425933 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.128s"
Apr 20 15:39:24.716975 containerd[1635]: time="2026-04-20T15:39:24.715974661Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:5 returns container id \"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\""
Apr 20 15:39:24.924931 kubelet[3279]: E0420 15:39:24.924184 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.488s"
Apr 20 15:39:25.576751 containerd[1635]: time="2026-04-20T15:39:25.415088911Z" level=info msg="StartContainer for \"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\""
Apr 20 15:39:25.948784 containerd[1635]: time="2026-04-20T15:39:25.865353880Z" level=info msg="StartContainer for \"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\" returns successfully"
Apr 20 15:39:27.304087 kubelet[3279]: E0420 15:39:27.302309 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.367s"
Apr 20 15:39:28.409242 containerd[1635]: time="2026-04-20T15:39:28.390263036Z" level=info msg="connecting to shim 2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3
Apr 20 15:39:29.270966 kubelet[3279]: E0420 15:39:29.269858 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:34.917151 kubelet[3279]: E0420 15:39:34.913950 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.482s"
Apr 20 15:39:35.189959 kubelet[3279]: E0420 15:39:35.065029 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:35.760863 systemd[1]: Started cri-containerd-2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f.scope - libcontainer container 2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f.
Apr 20 15:39:38.112252 kubelet[3279]: E0420 15:39:38.102567 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.134s"
Apr 20 15:39:41.210025 kubelet[3279]: E0420 15:39:41.208736 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:41.414021 kubelet[3279]: E0420 15:39:41.383183 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:39:41.505605 kubelet[3279]: E0420 15:39:41.489496 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.344s"
Apr 20 15:39:43.328040 kubelet[3279]: E0420 15:39:43.185335 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.675s"
Apr 20 15:39:43.328040 kubelet[3279]: E0420 15:39:43.320129 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:39:44.390939 kubelet[3279]: E0420 15:39:44.372856 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.05s"
Apr 20 15:39:46.443676 kubelet[3279]: E0420 15:39:46.443042 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.76s"
Apr 20 15:39:47.489291 kubelet[3279]: E0420 15:39:47.481973 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:48.187905 containerd[1635]: time="2026-04-20T15:39:48.186031268Z" level=info msg="StartContainer for \"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\" returns successfully"
Apr 20 15:39:50.628224 kubelet[3279]: E0420 15:39:50.625134 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.974s"
Apr 20 15:39:53.692519 kubelet[3279]: E0420 15:39:53.652336 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:39:55.073228 kubelet[3279]: E0420 15:39:55.061210 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.422s"
Apr 20 15:39:55.531742 kubelet[3279]: E0420 15:39:55.211054 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:39:56.662148 kubelet[3279]: E0420 15:39:56.659621 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:39:58.750927 kubelet[3279]: E0420 15:39:58.748210 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:39:59.033310 kubelet[3279]: E0420 15:39:58.952022 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:00.226917 kubelet[3279]: E0420 15:40:00.212141 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:03.837104 kubelet[3279]: E0420 15:40:03.822951 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.75s"
Apr 20 15:40:05.629336 kubelet[3279]: E0420 15:40:05.627228 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.795s"
Apr 20 15:40:06.436532 kubelet[3279]: E0420 15:40:06.436081 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:06.522752 kubelet[3279]: E0420 15:40:06.404587 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:06.547917 kubelet[3279]: E0420 15:40:06.452054 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:06.884110 kubelet[3279]: E0420 15:40:06.872013 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:40:07.637117 containerd[1635]: time="2026-04-20T15:40:07.625126075Z" level=info msg="container event discarded" container=3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e type=CONTAINER_STOPPED_EVENT
Apr 20 15:40:09.027339 kubelet[3279]: E0420 15:40:09.007276 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.264s"
Apr 20 15:40:10.827945 kubelet[3279]: E0420 15:40:10.827011 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.39s"
Apr 20 15:40:11.734007 kubelet[3279]: E0420 15:40:11.732313 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:12.757153 kubelet[3279]: E0420 15:40:12.733958 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:12.956545 kubelet[3279]: E0420 15:40:12.765362 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.928s"
Apr 20 15:40:13.792104 kubelet[3279]: E0420 15:40:13.762364 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:14.544321 kubelet[3279]: E0420 15:40:14.514164 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.704s"
Apr 20 15:40:14.750033 kubelet[3279]: E0420 15:40:14.747108 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:16.942188 kubelet[3279]: E0420 15:40:16.938332 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.305s"
Apr 20 15:40:17.347061 kubelet[3279]: E0420 15:40:17.215309 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:40:18.719174 kubelet[3279]: E0420 15:40:18.693339 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:19.746367 kubelet[3279]: E0420 15:40:19.728065 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.778s"
Apr 20 15:40:22.097331 kubelet[3279]: E0420 15:40:22.096756 3279 status_manager.go:1041] "Failed to update status for pod" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a27bd140-74be-4b0d-b6dc-1af15dd13c2a\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"100m\\\"},\\\"containerID\\\":\\\"containerd://3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.34.7\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-20T15:38:39Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-20T15:36:29Z\\\"}},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-20T15:39:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}]}}\" for pod \"kube-system\"/\"kube-scheduler-localhost\": Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-scheduler-localhost"
Apr 20 15:40:22.417421 kubelet[3279]: E0420 15:40:22.353925 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.538s"
Apr 20 15:40:25.150007 kubelet[3279]: E0420 15:40:25.144953 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:25.517916 kubelet[3279]: E0420 15:40:25.337259 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.983s"
Apr 20 15:40:27.088180 kubelet[3279]: E0420 15:40:27.024102 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.679s"
Apr 20 15:40:27.454780 kubelet[3279]: E0420 15:40:27.449702 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 15:40:29.443041 kubelet[3279]: E0420 15:40:29.439307 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.327s"
Apr 20 15:40:29.832847 kubelet[3279]: E0420 15:40:29.823363 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:40:31.182017 kubelet[3279]: E0420 15:40:31.177014 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:31.814367 kubelet[3279]: E0420 15:40:31.794253 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.16s"
Apr 20 15:40:33.108165 kubelet[3279]: E0420 15:40:33.106699 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.183s"
Apr 20 15:40:34.869974 kubelet[3279]: E0420 15:40:34.864259 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.756s"
Apr 20 15:40:35.550742 kubelet[3279]: E0420 15:40:35.358237 3279 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 15:40:37.094361 kubelet[3279]: E0420 15:40:37.080309 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:37.904148 kubelet[3279]: E0420 15:40:37.902824 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.541s"
Apr 20 15:40:38.448284 kubelet[3279]: I0420 15:40:38.329317 3279 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 20 15:40:40.246128 kubelet[3279]: E0420 15:40:40.231922 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.195s"
Apr 20 15:40:41.775100 containerd[1635]: time="2026-04-20T15:40:41.708281417Z" level=info msg="container event discarded" container=c72ba01c5503ae58e130769497dca7aefb70047d9f34a28d0b436b6439c4c24d type=CONTAINER_DELETED_EVENT
Apr 20 15:40:43.612882 kubelet[3279]: E0420 15:40:43.552266 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:44.194789 kubelet[3279]: E0420 15:40:44.146350 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.805s"
Apr 20 15:40:46.085068 kubelet[3279]: E0420 15:40:46.066101 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.839s"
Apr 20 15:40:47.911367 kubelet[3279]: E0420 15:40:47.904734 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.807s"
Apr 20 15:40:49.372686 kubelet[3279]: E0420 15:40:49.343372 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:50.487933 kubelet[3279]: E0420 15:40:50.487062 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.582s"
Apr 20 15:40:51.362313 containerd[1635]: time="2026-04-20T15:40:51.329119073Z" level=info msg="container event discarded" container=5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c type=CONTAINER_CREATED_EVENT
Apr 20 15:40:51.962243 kubelet[3279]: E0420 15:40:51.960578 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.425s"
Apr 20 15:40:53.423193 kubelet[3279]: E0420 15:40:53.410307 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.325s"
Apr 20 15:40:54.163344 containerd[1635]: time="2026-04-20T15:40:54.019268930Z" level=info msg="container event discarded" container=6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3 type=CONTAINER_STOPPED_EVENT
Apr 20 15:40:55.219264 kubelet[3279]: E0420 15:40:55.217331 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:40:56.312007 kubelet[3279]: E0420 15:40:56.309001 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.884s"
Apr 20 15:40:58.710808 kubelet[3279]: E0420 15:40:58.710147 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.394s"
Apr 20 15:40:59.310949 containerd[1635]: time="2026-04-20T15:40:59.267237829Z" level=info msg="container event discarded" container=5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c type=CONTAINER_STARTED_EVENT
Apr 20 15:41:00.139308 systemd[1]: cri-containerd-3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c.scope: Deactivated successfully.
Apr 20 15:41:00.180861 systemd[1]: cri-containerd-3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c.scope: Consumed 31.031s CPU time, 22M memory peak.
Apr 20 15:41:00.824829 containerd[1635]: time="2026-04-20T15:41:00.758008885Z" level=info msg="received container exit event container_id:\"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\" id:\"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\" pid:3571 exit_status:1 exited_at:{seconds:1776699660 nanos:222050708}"
Apr 20 15:41:01.410158 kubelet[3279]: E0420 15:41:01.409817 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.668s"
Apr 20 15:41:01.573766 kubelet[3279]: E0420 15:41:01.485217 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:41:02.716586 kubelet[3279]: E0420 15:41:02.667373 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.238s"
Apr 20 15:41:06.418941 kubelet[3279]: E0420 15:41:06.329198 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.595s"
Apr 20 15:41:08.272314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c-rootfs.mount: Deactivated successfully.
Apr 20 15:41:09.948200 kubelet[3279]: E0420 15:41:09.923877 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:13.301433 kubelet[3279]: E0420 15:41:13.299949 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.789s" Apr 20 15:41:13.742047 kubelet[3279]: E0420 15:41:13.740909 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:41:14.926793 containerd[1635]: time="2026-04-20T15:41:14.871211534Z" level=info msg="container event discarded" container=6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67 type=CONTAINER_CREATED_EVENT Apr 20 15:41:15.333061 kubelet[3279]: E0420 15:41:15.320604 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2s" Apr 20 15:41:15.858005 kubelet[3279]: E0420 15:41:15.816065 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:17.429211 kubelet[3279]: E0420 15:41:17.424180 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.036s" Apr 20 15:41:17.550313 kubelet[3279]: E0420 15:41:17.512044 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:41:19.234072 kubelet[3279]: E0420 15:41:19.233100 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.79s" Apr 20 15:41:20.149349 kubelet[3279]: I0420 15:41:20.145257 3279 
scope.go:117] "RemoveContainer" containerID="6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67" Apr 20 15:41:20.474759 kubelet[3279]: I0420 15:41:20.448323 3279 scope.go:117] "RemoveContainer" containerID="3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c" Apr 20 15:41:20.474759 kubelet[3279]: E0420 15:41:20.498542 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:41:21.432108 kubelet[3279]: E0420 15:41:21.427845 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:41:22.211174 kubelet[3279]: E0420 15:41:22.202216 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:22.971059 kubelet[3279]: E0420 15:41:22.953359 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.219s" Apr 20 15:41:23.995610 containerd[1635]: time="2026-04-20T15:41:23.956862106Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for container name:\"kube-scheduler\" attempt:3" Apr 20 15:41:24.632970 containerd[1635]: time="2026-04-20T15:41:24.630595655Z" level=info msg="RemoveContainer for \"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\"" Apr 20 15:41:24.798742 kubelet[3279]: E0420 15:41:24.792681 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.718s" Apr 20 15:41:26.107504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218542305.mount: Deactivated successfully. 
Apr 20 15:41:26.630581 containerd[1635]: time="2026-04-20T15:41:26.625763967Z" level=info msg="Container 4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:41:26.642534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount377077376.mount: Deactivated successfully. Apr 20 15:41:26.941260 containerd[1635]: time="2026-04-20T15:41:26.934101968Z" level=info msg="RemoveContainer for \"6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67\" returns successfully" Apr 20 15:41:27.724856 kubelet[3279]: E0420 15:41:27.723530 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:28.295140 kubelet[3279]: E0420 15:41:28.293937 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.592s" Apr 20 15:41:30.896192 containerd[1635]: time="2026-04-20T15:41:30.802870447Z" level=info msg="container event discarded" container=6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67 type=CONTAINER_STARTED_EVENT Apr 20 15:41:31.810352 kubelet[3279]: E0420 15:41:31.798176 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.036s" Apr 20 15:41:34.017139 kubelet[3279]: E0420 15:41:33.941565 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:34.017139 kubelet[3279]: E0420 15:41:34.001443 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.193s" Apr 20 15:41:34.314200 containerd[1635]: time="2026-04-20T15:41:34.028156446Z" level=info msg="CreateContainer within sandbox 
\"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for name:\"kube-scheduler\" attempt:3 returns container id \"4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a\"" Apr 20 15:41:35.040885 containerd[1635]: time="2026-04-20T15:41:34.910274968Z" level=info msg="StartContainer for \"4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a\"" Apr 20 15:41:36.253162 kubelet[3279]: E0420 15:41:36.248813 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.247s" Apr 20 15:41:37.643953 containerd[1635]: time="2026-04-20T15:41:37.561228990Z" level=info msg="connecting to shim 4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a" address="unix:///run/containerd/s/909e6cfc558d4fc91d810de80342fc4f6713a6b29dc9d1596b2d8f2a2ab41cb6" protocol=ttrpc version=3 Apr 20 15:41:38.885597 kubelet[3279]: E0420 15:41:38.885054 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.573s" Apr 20 15:41:40.743001 kubelet[3279]: E0420 15:41:40.738064 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:43.734247 systemd[1]: Started cri-containerd-4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a.scope - libcontainer container 4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a. 
Apr 20 15:41:43.867007 kubelet[3279]: E0420 15:41:43.839021 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.925s" Apr 20 15:41:46.535361 kubelet[3279]: E0420 15:41:46.527344 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.372s" Apr 20 15:41:47.112199 kubelet[3279]: E0420 15:41:47.106998 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:48.310922 kubelet[3279]: E0420 15:41:48.308715 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.78s" Apr 20 15:41:49.037222 containerd[1635]: time="2026-04-20T15:41:48.942215168Z" level=error msg="get state for 4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a" error="context deadline exceeded" Apr 20 15:41:49.215795 containerd[1635]: time="2026-04-20T15:41:49.052056491Z" level=warning msg="unknown status" status=0 Apr 20 15:41:49.435096 containerd[1635]: time="2026-04-20T15:41:48.943361194Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 15:41:53.780866 kubelet[3279]: E0420 15:41:53.776099 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:41:54.139972 systemd[1]: cri-containerd-2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f.scope: Deactivated successfully. Apr 20 15:41:54.193795 systemd[1]: cri-containerd-2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f.scope: Consumed 24.073s CPU time, 22M memory peak. 
Apr 20 15:41:55.465070 containerd[1635]: time="2026-04-20T15:41:55.412101248Z" level=info msg="received container exit event container_id:\"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\" id:\"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\" pid:3608 exit_status:1 exited_at:{seconds:1776699714 nanos:222707007}" Apr 20 15:41:55.953655 kubelet[3279]: E0420 15:41:55.952609 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.504s" Apr 20 15:41:58.331669 kubelet[3279]: E0420 15:41:58.331240 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.369s" Apr 20 15:42:00.099488 kubelet[3279]: E0420 15:42:00.098101 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.613s" Apr 20 15:42:00.453062 kubelet[3279]: E0420 15:42:00.402348 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:02.111618 kubelet[3279]: E0420 15:42:02.110876 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.013s" Apr 20 15:42:03.222815 containerd[1635]: time="2026-04-20T15:42:03.219112528Z" level=info msg="StartContainer for \"4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a\" returns successfully" Apr 20 15:42:03.810876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f-rootfs.mount: Deactivated successfully. 
Apr 20 15:42:05.039767 kubelet[3279]: E0420 15:42:05.032270 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.896s" Apr 20 15:42:07.274182 kubelet[3279]: E0420 15:42:07.260869 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:07.896222 kubelet[3279]: E0420 15:42:07.891366 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.799s" Apr 20 15:42:08.902495 containerd[1635]: time="2026-04-20T15:42:08.716996045Z" level=info msg="container event discarded" container=5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c type=CONTAINER_STOPPED_EVENT Apr 20 15:42:10.743860 kubelet[3279]: E0420 15:42:10.743268 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.809s" Apr 20 15:42:12.955299 kubelet[3279]: E0420 15:42:12.943372 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:13.544179 kubelet[3279]: E0420 15:42:13.543082 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.543s" Apr 20 15:42:13.836705 kubelet[3279]: I0420 15:42:13.827289 3279 scope.go:117] "RemoveContainer" containerID="d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415" Apr 20 15:42:15.325090 kubelet[3279]: E0420 15:42:15.321354 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:15.423537 containerd[1635]: time="2026-04-20T15:42:15.422734843Z" level=info msg="RemoveContainer for 
\"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\"" Apr 20 15:42:15.889829 kubelet[3279]: E0420 15:42:15.886397 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.24s" Apr 20 15:42:16.014229 kubelet[3279]: I0420 15:42:16.013258 3279 scope.go:117] "RemoveContainer" containerID="d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415" Apr 20 15:42:16.047874 kubelet[3279]: I0420 15:42:16.030458 3279 scope.go:117] "RemoveContainer" containerID="2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f" Apr 20 15:42:16.139667 kubelet[3279]: E0420 15:42:16.139109 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:16.266455 kubelet[3279]: E0420 15:42:16.190835 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 20 15:42:17.054522 kubelet[3279]: E0420 15:42:17.053362 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:18.814036 containerd[1635]: time="2026-04-20T15:42:18.725603530Z" level=error msg="ContainerStatus for \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\": not found" Apr 20 15:42:18.830489 kubelet[3279]: E0420 15:42:18.828745 3279 kubelet.go:2618] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="2.153s" Apr 20 15:42:19.023792 kubelet[3279]: E0420 15:42:19.008861 3279 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\": not found" containerID="d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415" Apr 20 15:42:19.199092 kubelet[3279]: I0420 15:42:19.128947 3279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415"} err="failed to get container status \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\": not found" Apr 20 15:42:19.507899 containerd[1635]: time="2026-04-20T15:42:19.425165701Z" level=info msg="RemoveContainer for \"d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415\" returns successfully" Apr 20 15:42:19.606272 containerd[1635]: time="2026-04-20T15:42:19.542818292Z" level=info msg="container event discarded" container=3b2352d00f5f73944af3efbf0280250a0b9faf45bb8c73547ba715bfe2066d6e type=CONTAINER_DELETED_EVENT Apr 20 15:42:19.718455 kubelet[3279]: E0420 15:42:19.698966 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:20.134827 kubelet[3279]: E0420 15:42:20.134479 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.305s" Apr 20 15:42:21.214602 containerd[1635]: time="2026-04-20T15:42:21.204888418Z" level=info msg="container event discarded" 
container=d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415 type=CONTAINER_CREATED_EVENT Apr 20 15:42:22.111156 kubelet[3279]: E0420 15:42:22.108921 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.454s" Apr 20 15:42:22.303283 kubelet[3279]: I0420 15:42:22.294253 3279 scope.go:117] "RemoveContainer" containerID="2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f" Apr 20 15:42:22.403659 kubelet[3279]: E0420 15:42:22.321352 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:22.409423 kubelet[3279]: E0420 15:42:22.402291 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a" Apr 20 15:42:23.047906 kubelet[3279]: E0420 15:42:23.017657 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:24.105056 containerd[1635]: time="2026-04-20T15:42:24.080748084Z" level=info msg="container event discarded" container=d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415 type=CONTAINER_STARTED_EVENT Apr 20 15:42:24.224049 kubelet[3279]: E0420 15:42:24.105987 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.472s" Apr 20 15:42:25.125901 kubelet[3279]: E0420 15:42:25.124824 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:25.651688 kubelet[3279]: E0420 15:42:25.636924 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:26.899941 kubelet[3279]: E0420 15:42:26.897947 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:30.325529 kubelet[3279]: E0420 15:42:30.324243 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.561s" Apr 20 15:42:31.670254 kubelet[3279]: E0420 15:42:31.615910 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:35.924978 kubelet[3279]: E0420 15:42:35.922308 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.57s" Apr 20 15:42:36.473926 kubelet[3279]: E0420 15:42:36.287726 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:36.588969 kubelet[3279]: E0420 15:42:36.580123 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 15:42:38.197280 kubelet[3279]: E0420 15:42:38.196716 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:38.914256 kubelet[3279]: E0420 15:42:38.904756 3279 kubelet.go:2618] "Housekeeping 
took longer than expected" err="housekeeping took too long" expected="1s" actual="2.941s" Apr 20 15:42:40.599327 kubelet[3279]: E0420 15:42:40.597276 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:41.625200 kubelet[3279]: E0420 15:42:41.620637 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.66s" Apr 20 15:42:44.790295 kubelet[3279]: E0420 15:42:44.789223 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:46.641311 kubelet[3279]: E0420 15:42:46.641127 3279 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 15:42:49.423551 kubelet[3279]: E0420 15:42:49.417133 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.796s" Apr 20 15:42:52.335772 kubelet[3279]: E0420 15:42:52.331213 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:55.109172 kubelet[3279]: E0420 15:42:55.106863 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.657s" Apr 20 15:42:55.631772 kubelet[3279]: I0420 15:42:55.627019 3279 scope.go:117] "RemoveContainer" containerID="2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f" Apr 20 15:42:55.730186 kubelet[3279]: E0420 15:42:55.634878 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:42:57.200294 containerd[1635]: time="2026-04-20T15:42:57.192434389Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:6" Apr 20 15:42:58.361022 kubelet[3279]: E0420 15:42:58.353878 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:42:59.433025 kubelet[3279]: E0420 15:42:59.432706 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.797s" Apr 20 15:43:01.655175 kubelet[3279]: E0420 15:43:01.653823 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.221s" Apr 20 15:43:04.842560 kubelet[3279]: E0420 15:43:04.756126 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:06.830582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351228533.mount: Deactivated successfully. Apr 20 15:43:08.551698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971389229.mount: Deactivated successfully. 
Apr 20 15:43:09.521743 containerd[1635]: time="2026-04-20T15:43:09.329245805Z" level=info msg="Container 4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a: CDI devices from CRI Config.CDIDevices: []" Apr 20 15:43:10.599148 kubelet[3279]: E0420 15:43:10.588175 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:11.446013 kubelet[3279]: E0420 15:43:11.436336 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.792s" Apr 20 15:43:13.834678 kubelet[3279]: E0420 15:43:13.755264 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.28s" Apr 20 15:43:16.088260 kubelet[3279]: E0420 15:43:16.085792 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.256s" Apr 20 15:43:16.549564 kubelet[3279]: E0420 15:43:16.547209 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:17.685016 kubelet[3279]: E0420 15:43:17.682608 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.594s" Apr 20 15:43:19.942094 containerd[1635]: time="2026-04-20T15:43:19.937143864Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:6 returns container id \"4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a\"" Apr 20 15:43:21.338838 kubelet[3279]: I0420 15:43:21.336829 3279 scope.go:117] "RemoveContainer" containerID="2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f" Apr 20 15:43:21.519281 
containerd[1635]: time="2026-04-20T15:43:21.462121186Z" level=info msg="StartContainer for \"4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a\"" Apr 20 15:43:22.236237 kubelet[3279]: E0420 15:43:22.234147 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:22.357858 containerd[1635]: time="2026-04-20T15:43:22.357410245Z" level=info msg="connecting to shim 4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3 Apr 20 15:43:23.218544 kubelet[3279]: E0420 15:43:23.217699 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.513s" Apr 20 15:43:26.335765 systemd[1]: Started cri-containerd-4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a.scope - libcontainer container 4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a. 
Apr 20 15:43:26.392219 containerd[1635]: time="2026-04-20T15:43:26.391692610Z" level=info msg="RemoveContainer for \"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\"" Apr 20 15:43:27.826356 kubelet[3279]: E0420 15:43:27.825071 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:28.871294 kubelet[3279]: E0420 15:43:28.870147 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.586s" Apr 20 15:43:29.198287 containerd[1635]: time="2026-04-20T15:43:29.037837368Z" level=error msg="get state for 36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063" error="context deadline exceeded" Apr 20 15:43:29.471320 containerd[1635]: time="2026-04-20T15:43:29.450171354Z" level=warning msg="unknown status" status=0 Apr 20 15:43:29.521082 containerd[1635]: time="2026-04-20T15:43:29.363832331Z" level=error msg="ttrpc: received message on inactive stream" stream=93 Apr 20 15:43:30.458204 kubelet[3279]: E0420 15:43:30.432639 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.538s" Apr 20 15:43:32.896299 kubelet[3279]: E0420 15:43:32.895776 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.199s" Apr 20 15:43:34.498370 kubelet[3279]: E0420 15:43:34.497977 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:35.055040 kubelet[3279]: E0420 15:43:35.053563 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.153s" Apr 20 15:43:36.974825 kubelet[3279]: E0420 15:43:36.972695 3279 kubelet.go:2618] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.919s" Apr 20 15:43:37.453005 containerd[1635]: time="2026-04-20T15:43:37.438575232Z" level=info msg="RemoveContainer for \"2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f\" returns successfully" Apr 20 15:43:38.357551 kubelet[3279]: E0420 15:43:38.353949 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.38s" Apr 20 15:43:40.099208 kubelet[3279]: E0420 15:43:40.097188 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.465s" Apr 20 15:43:40.623330 kubelet[3279]: E0420 15:43:40.563371 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 15:43:40.733608 containerd[1635]: time="2026-04-20T15:43:40.726780424Z" level=info msg="StartContainer for \"4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a\" returns successfully" Apr 20 15:43:43.050974 kubelet[3279]: E0420 15:43:43.049993 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.333s" Apr 20 15:43:44.198215 kubelet[3279]: E0420 15:43:44.197920 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 15:43:45.100237 containerd[1635]: time="2026-04-20T15:43:45.097137155Z" level=info msg="container event discarded" container=d5993b6cf6ac2414e9d15ee09a9a5452f85f4814c3a4086a477c7880355a0415 type=CONTAINER_STOPPED_EVENT Apr 20 15:43:45.913038 kubelet[3279]: E0420 15:43:45.909123 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized"
Apr 20 15:43:50.904341 kubelet[3279]: E0420 15:43:50.903148 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.177s"
Apr 20 15:43:51.378990 kubelet[3279]: E0420 15:43:51.376525 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:43:51.378990 kubelet[3279]: E0420 15:43:51.376955 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:43:51.581069 containerd[1635]: time="2026-04-20T15:43:51.579746985Z" level=info msg="container event discarded" container=6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67 type=CONTAINER_STOPPED_EVENT
Apr 20 15:43:52.039557 kubelet[3279]: E0420 15:43:52.039116 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:43:56.466139 kubelet[3279]: E0420 15:43:56.465206 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:43:57.634526 kubelet[3279]: E0420 15:43:57.634035 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:43:57.701350 containerd[1635]: time="2026-04-20T15:43:57.697161984Z" level=info msg="container event discarded" container=5dd02dc9c0aea497c0bde84747e6de4de59b9e2f5ff60d183e9aac204f58211c type=CONTAINER_DELETED_EVENT
Apr 20 15:44:01.732237 kubelet[3279]: E0420 15:44:01.730070 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:03.733095 kubelet[3279]: E0420 15:44:03.730353 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.097s"
Apr 20 15:44:05.560603 containerd[1635]: time="2026-04-20T15:44:05.557307687Z" level=info msg="container event discarded" container=6fd58b8c1c91dba1d785b781e0a4ac6425b1aed8f47de4fda07e8f33ce5536e3 type=CONTAINER_DELETED_EVENT
Apr 20 15:44:06.903104 kubelet[3279]: E0420 15:44:06.897899 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:07.825216 containerd[1635]: time="2026-04-20T15:44:07.821954050Z" level=info msg="container event discarded" container=3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c type=CONTAINER_CREATED_EVENT
Apr 20 15:44:09.852093 kubelet[3279]: E0420 15:44:09.844434 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.193s"
Apr 20 15:44:11.321192 kubelet[3279]: E0420 15:44:11.320722 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:12.504970 kubelet[3279]: E0420 15:44:12.499812 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:12.932312 kubelet[3279]: E0420 15:44:12.925649 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.201s"
Apr 20 15:44:14.319256 kubelet[3279]: E0420 15:44:14.317767 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.392s"
Apr 20 15:44:16.218373 kubelet[3279]: E0420 15:44:16.201282 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.57s"
Apr 20 15:44:18.483334 kubelet[3279]: E0420 15:44:18.479772 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:18.483334 kubelet[3279]: E0420 15:44:18.482058 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.844s"
Apr 20 15:44:20.146007 kubelet[3279]: E0420 15:44:20.141731 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.469s"
Apr 20 15:44:21.839916 kubelet[3279]: E0420 15:44:21.836326 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s"
Apr 20 15:44:23.516297 containerd[1635]: time="2026-04-20T15:44:23.450289970Z" level=info msg="container event discarded" container=2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f type=CONTAINER_CREATED_EVENT
Apr 20 15:44:24.655603 kubelet[3279]: E0420 15:44:24.640803 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:24.669164 systemd[1]: cri-containerd-4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a.scope: Deactivated successfully.
Apr 20 15:44:24.725163 systemd[1]: cri-containerd-4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a.scope: Consumed 52.159s CPU time, 23.8M memory peak.
Apr 20 15:44:25.171339 systemd[1]: cri-containerd-4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a.scope: Deactivated successfully.
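The recurring `dns.go:154 "Nameserver limits exceeded"` entries come from kubelet's resolv.conf validation: Linux/glibc resolvers honor at most three `nameserver` lines, so kubelet applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and warns that the rest were omitted. A minimal sketch of that truncation (the three-server cap is the real documented limit; the parsing helper below is illustrative, not kubelet's actual code):

```python
# Sketch: why kubelet logs "Nameserver limits exceeded".
# glibc's resolver uses at most 3 "nameserver" entries (MAXNS), so
# kubelet keeps the first 3 and logs the applied line plus a warning.
MAX_DNS_NAMESERVERS = 3  # mirrors glibc MAXNS / kubelet's limit

def apply_nameserver_limit(resolv_conf: str):
    """Return (kept, dropped) nameservers from resolv.conf text."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_DNS_NAMESERVERS], servers[MAX_DNS_NAMESERVERS:]

conf = (
    "nameserver 1.1.1.1\n"
    "nameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\n"
    "nameserver 9.9.9.9\n"  # hypothetical 4th server that would be dropped
)
kept, dropped = apply_nameserver_limit(conf)
print("applied nameserver line is:", " ".join(kept))
# applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8
```

A node with four or more upstream resolvers configured will emit this warning on every sync until the host's resolv.conf is trimmed.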
Apr 20 15:44:25.207983 systemd[1]: cri-containerd-4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a.scope: Consumed 11.653s CPU time, 22.6M memory peak.
Apr 20 15:44:25.362286 containerd[1635]: time="2026-04-20T15:44:25.255583102Z" level=info msg="container event discarded" container=3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c type=CONTAINER_STARTED_EVENT
Apr 20 15:44:25.923899 containerd[1635]: time="2026-04-20T15:44:25.918445794Z" level=info msg="received container exit event container_id:\"4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a\" id:\"4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a\" pid:3716 exit_status:1 exited_at:{seconds:1776699865 nanos:309659664}"
Apr 20 15:44:26.252250 containerd[1635]: time="2026-04-20T15:44:26.234096183Z" level=info msg="received container exit event container_id:\"4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a\" id:\"4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a\" pid:3665 exit_status:1 exited_at:{seconds:1776699865 nanos:18366796}"
Apr 20 15:44:26.444351 kubelet[3279]: E0420 15:44:26.440427 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.804s"
Apr 20 15:44:28.229140 kubelet[3279]: E0420 15:44:28.228353 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s"
Apr 20 15:44:28.842738 kubelet[3279]: E0420 15:44:28.841921 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:29.308534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a-rootfs.mount: Deactivated successfully.
Apr 20 15:44:29.367222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a-rootfs.mount: Deactivated successfully.
Apr 20 15:44:29.902992 kubelet[3279]: E0420 15:44:29.893306 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:30.394600 kubelet[3279]: I0420 15:44:30.372056 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:44:30.435337 kubelet[3279]: E0420 15:44:30.423269 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:30.535668 kubelet[3279]: E0420 15:44:30.534636 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:44:31.232950 kubelet[3279]: I0420 15:44:31.232198 3279 scope.go:117] "RemoveContainer" containerID="3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c"
Apr 20 15:44:31.286300 kubelet[3279]: I0420 15:44:31.241672 3279 scope.go:117] "RemoveContainer" containerID="4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a"
Apr 20 15:44:31.286300 kubelet[3279]: E0420 15:44:31.242077 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:31.286300 kubelet[3279]: E0420 15:44:31.242234 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6"
Apr 20 15:44:31.560995 containerd[1635]: time="2026-04-20T15:44:31.555839985Z" level=info msg="RemoveContainer for \"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\""
Apr 20 15:44:32.078729 containerd[1635]: time="2026-04-20T15:44:32.077727302Z" level=info msg="RemoveContainer for \"3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c\" returns successfully"
Apr 20 15:44:32.787636 kubelet[3279]: I0420 15:44:32.784609 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:44:32.787636 kubelet[3279]: E0420 15:44:32.786022 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:32.787636 kubelet[3279]: E0420 15:44:32.787911 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:44:34.985565 kubelet[3279]: E0420 15:44:34.985220 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:35.232478 kubelet[3279]: I0420 15:44:35.232126 3279 scope.go:117] "RemoveContainer" containerID="4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a"
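The "back-off 40s" (kube-scheduler) and "back-off 1m20s" (kube-controller-manager) figures in the CrashLoopBackOff errors above are steps in kubelet's documented restart back-off: a 10s base delay that doubles after each crash, capped at 5 minutes. A sketch of that sequence:

```python
# Sketch: kubelet's CrashLoopBackOff delay sequence (10s base,
# doubled per restart, capped at 5 minutes). 40s and 80s (= "1m20s")
# are the delays after the 3rd and 4th failed restarts.
def crashloop_delays(restarts: int, base: int = 10, cap: int = 300):
    """Seconds kubelet waits before each of the first `restarts` restarts."""
    return [min(base * 2**i, cap) for i in range(restarts)]

print(crashloop_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

Once a container stays up long enough, kubelet resets the back-off; otherwise the pod keeps cycling at the 5-minute cap, which is why these two static pods reappear in the log at widening intervals.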
Apr 20 15:44:35.232478 kubelet[3279]: E0420 15:44:35.232369 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:35.232478 kubelet[3279]: E0420 15:44:35.232509 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6"
Apr 20 15:44:40.030737 kubelet[3279]: E0420 15:44:40.029110 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:44.641919 kubelet[3279]: I0420 15:44:44.640281 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:44:44.641919 kubelet[3279]: E0420 15:44:44.645441 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:44.678545 kubelet[3279]: E0420 15:44:44.646283 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:44:45.051080 containerd[1635]: time="2026-04-20T15:44:44.973301513Z" level=info msg="container event discarded" container=2a873b9ebaec04a65c4656c3c26eefca306b34e5a15d751e81c9df2c6355265f type=CONTAINER_STARTED_EVENT
Apr 20 15:44:45.320250 kubelet[3279]: E0420 15:44:45.308617 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:47.759552 kubelet[3279]: I0420 15:44:47.758321 3279 scope.go:117] "RemoveContainer" containerID="4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a"
Apr 20 15:44:47.942783 kubelet[3279]: E0420 15:44:47.844998 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:48.033862 kubelet[3279]: E0420 15:44:48.011290 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6"
Apr 20 15:44:50.617510 kubelet[3279]: E0420 15:44:50.575327 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:44:55.649854 kubelet[3279]: I0420 15:44:55.648719 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:44:55.827103 kubelet[3279]: E0420 15:44:55.825241 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:44:55.891774 kubelet[3279]: E0420 15:44:55.886052 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:44:56.036057 kubelet[3279]: E0420 15:44:56.022566 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:00.649307 kubelet[3279]: I0420 15:45:00.648766 3279 scope.go:117] "RemoveContainer" containerID="4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a"
Apr 20 15:45:00.649307 kubelet[3279]: E0420 15:45:00.649116 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:00.649307 kubelet[3279]: E0420 15:45:00.649255 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(824fd89300514e351ed3b68d82c665c6)\"" pod="kube-system/kube-scheduler-localhost" podUID="824fd89300514e351ed3b68d82c665c6"
Apr 20 15:45:01.216971 kubelet[3279]: E0420 15:45:01.208335 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:06.231720 kubelet[3279]: E0420 15:45:06.231026 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:07.633506 kubelet[3279]: I0420 15:45:07.632707 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:45:07.633506 kubelet[3279]: E0420 15:45:07.632851 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:07.633506 kubelet[3279]: E0420 15:45:07.632920 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:07.633506 kubelet[3279]: E0420 15:45:07.633057 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:45:11.239899 kubelet[3279]: E0420 15:45:11.239563 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:14.634069 kubelet[3279]: I0420 15:45:14.633631 3279 scope.go:117] "RemoveContainer" containerID="4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a"
Apr 20 15:45:14.634069 kubelet[3279]: E0420 15:45:14.633935 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:14.698057 containerd[1635]: time="2026-04-20T15:45:14.696994010Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for container name:\"kube-scheduler\" attempt:4"
Apr 20 15:45:14.734858 containerd[1635]: time="2026-04-20T15:45:14.734712455Z" level=info msg="Container 0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:45:14.740749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493581353.mount: Deactivated successfully.
Apr 20 15:45:14.911271 containerd[1635]: time="2026-04-20T15:45:14.910747695Z" level=info msg="CreateContainer within sandbox \"3ae2bc198d206d496b05bf8cedf59cf493595b084c968bee9f872f384d6cca08\" for name:\"kube-scheduler\" attempt:4 returns container id \"0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478\""
Apr 20 15:45:14.921665 containerd[1635]: time="2026-04-20T15:45:14.921332158Z" level=info msg="StartContainer for \"0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478\""
Apr 20 15:45:14.924441 containerd[1635]: time="2026-04-20T15:45:14.924175713Z" level=info msg="connecting to shim 0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478" address="unix:///run/containerd/s/909e6cfc558d4fc91d810de80342fc4f6713a6b29dc9d1596b2d8f2a2ab41cb6" protocol=ttrpc version=3
Apr 20 15:45:15.097003 systemd[1]: Started cri-containerd-0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478.scope - libcontainer container 0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478.
Apr 20 15:45:15.372647 containerd[1635]: time="2026-04-20T15:45:15.372112387Z" level=info msg="StartContainer for \"0cf87016025b28ab58e3321f07d934c6b58ce6663c0b3d2293b35dc8dfb51478\" returns successfully"
Apr 20 15:45:16.313086 kubelet[3279]: E0420 15:45:16.311501 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:16.644981 kubelet[3279]: E0420 15:45:16.635914 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:17.781282 kubelet[3279]: E0420 15:45:17.780119 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:20.635414 kubelet[3279]: I0420 15:45:20.634888 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:45:20.635414 kubelet[3279]: E0420 15:45:20.635431 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:20.665417 kubelet[3279]: E0420 15:45:20.635897 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:45:21.325263 kubelet[3279]: E0420 15:45:21.324884 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:25.234353 kubelet[3279]: E0420 15:45:25.233891 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:26.462077 kubelet[3279]: E0420 15:45:26.460971 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:26.537287 kubelet[3279]: E0420 15:45:26.535604 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:27.396432 kubelet[3279]: E0420 15:45:27.396089 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:31.557598 kubelet[3279]: E0420 15:45:31.557258 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:33.631483 kubelet[3279]: I0420 15:45:33.631108 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:45:33.631483 kubelet[3279]: E0420 15:45:33.631317 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:33.631483 kubelet[3279]: E0420 15:45:33.631576 3279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(c6bb8708a026256e82ca4c5631a78b5a)\"" pod="kube-system/kube-controller-manager-localhost" podUID="c6bb8708a026256e82ca4c5631a78b5a"
Apr 20 15:45:36.563808 kubelet[3279]: E0420 15:45:36.563511 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:41.586685 kubelet[3279]: E0420 15:45:41.585007 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:46.592487 kubelet[3279]: E0420 15:45:46.592034 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:46.630571 kubelet[3279]: I0420 15:45:46.630194 3279 scope.go:117] "RemoveContainer" containerID="4e37aa768325b0611fdac9043cc7815788528bfa601acabf0b0ecabfa79a966a"
Apr 20 15:45:46.630571 kubelet[3279]: E0420 15:45:46.630408 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:46.634996 containerd[1635]: time="2026-04-20T15:45:46.634930916Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for container name:\"kube-controller-manager\" attempt:7"
Apr 20 15:45:46.692742 containerd[1635]: time="2026-04-20T15:45:46.687554706Z" level=info msg="Container 55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:45:46.688981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301426628.mount: Deactivated successfully.
Apr 20 15:45:46.695057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596980618.mount: Deactivated successfully.
Apr 20 15:45:46.740101 containerd[1635]: time="2026-04-20T15:45:46.739549019Z" level=info msg="CreateContainer within sandbox \"36cc9c62516c4192c61ab0e37efdfcd65bebb612af9dcfb4a1ca8370c4716063\" for name:\"kube-controller-manager\" attempt:7 returns container id \"55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035\""
Apr 20 15:45:46.762433 containerd[1635]: time="2026-04-20T15:45:46.761835900Z" level=info msg="StartContainer for \"55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035\""
Apr 20 15:45:46.765483 containerd[1635]: time="2026-04-20T15:45:46.765431669Z" level=info msg="connecting to shim 55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035" address="unix:///run/containerd/s/e8b570697ccae5c37ee8c203dd47b2636b9b3b7d7a1b21460cbc42ccc93af9d3" protocol=ttrpc version=3
Apr 20 15:45:46.816675 systemd[1]: Started cri-containerd-55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035.scope - libcontainer container 55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035.
Apr 20 15:45:46.905429 containerd[1635]: time="2026-04-20T15:45:46.905175641Z" level=info msg="StartContainer for \"55d080fed6d44280e39f49bc888de29043b36e8e616b2f27fa5c91f12f48c035\" returns successfully"
Apr 20 15:45:47.261659 kubelet[3279]: E0420 15:45:47.259588 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:50.832778 kubelet[3279]: E0420 15:45:50.832260 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:45:51.602990 kubelet[3279]: E0420 15:45:51.602626 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:45:56.623868 kubelet[3279]: E0420 15:45:56.623441 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:00.841229 kubelet[3279]: E0420 15:46:00.840893 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:01.649986 kubelet[3279]: E0420 15:46:01.649518 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:06.772335 kubelet[3279]: E0420 15:46:06.770991 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:12.257895 kubelet[3279]: E0420 15:46:12.254298 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:13.149048 containerd[1635]: time="2026-04-20T15:46:13.147594168Z" level=info msg="container event discarded" container=3bf8cd9e0e669f180abedb3b2f900b0d1914afc67359322c6b58e298951c7f1c type=CONTAINER_STOPPED_EVENT
Apr 20 15:46:13.715236 kubelet[3279]: E0420 15:46:13.713523 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:17.317340 kubelet[3279]: E0420 15:46:17.310369 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:17.899837 kubelet[3279]: E0420 15:46:17.899761 3279 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.267s"
Apr 20 15:46:22.327189 kubelet[3279]: E0420 15:46:22.326954 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:23.123479 systemd[1]: Created slice kubepods-burstable-poda3e2dbc1_b592_4542_803e_2c0b5b2f7a8f.slice - libcontainer container kubepods-burstable-poda3e2dbc1_b592_4542_803e_2c0b5b2f7a8f.slice.
Apr 20 15:46:23.137462 systemd[1]: Created slice kubepods-besteffort-pod940dc7aa_196a_43f7_98f6_c3eec3286736.slice - libcontainer container kubepods-besteffort-pod940dc7aa_196a_43f7_98f6_c3eec3286736.slice.
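The long run of "cni plugin not initialized" errors above reflects the runtime's network-readiness condition: kubelet keeps reporting NetworkReady=false until the container runtime finds a CNI network configuration, which the kube-flannel DaemonSet pod being scheduled here is responsible for installing once it runs. A hedged sketch of that check; the `/etc/cni/net.d` path is the conventional default config directory, an assumption not shown in this log:

```python
# Sketch: the condition behind "Container runtime network not ready".
# The runtime's CNI plugin stays uninitialized while its network conf
# directory (conventionally /etc/cni/net.d -- an assumed default here)
# contains no .conf/.conflist file; flannel writes one when it starts.
import os

def cni_initialized(conf_dir: str = "/etc/cni/net.d") -> bool:
    """True once at least one CNI network config file is present."""
    try:
        return any(
            name.endswith((".conf", ".conflist", ".json"))
            for name in os.listdir(conf_dir)
        )
    except FileNotFoundError:
        return False  # directory absent => plugin cannot initialize
```

On a node in this state, the error clears on the runtime's next status poll after a file such as a flannel conflist appears in that directory.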
Apr 20 15:46:23.185639 kubelet[3279]: I0420 15:46:23.185164 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f-cni-plugin\") pod \"kube-flannel-ds-l6z4d\" (UID: \"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\") " pod="kube-flannel/kube-flannel-ds-l6z4d" Apr 20 15:46:23.185639 kubelet[3279]: I0420 15:46:23.185334 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f-flannel-cfg\") pod \"kube-flannel-ds-l6z4d\" (UID: \"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\") " pod="kube-flannel/kube-flannel-ds-l6z4d" Apr 20 15:46:23.187027 kubelet[3279]: I0420 15:46:23.186998 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f-cni\") pod \"kube-flannel-ds-l6z4d\" (UID: \"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\") " pod="kube-flannel/kube-flannel-ds-l6z4d" Apr 20 15:46:23.187027 kubelet[3279]: I0420 15:46:23.187027 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f-xtables-lock\") pod \"kube-flannel-ds-l6z4d\" (UID: \"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\") " pod="kube-flannel/kube-flannel-ds-l6z4d" Apr 20 15:46:23.187162 kubelet[3279]: I0420 15:46:23.187040 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccxj7\" (UniqueName: \"kubernetes.io/projected/a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f-kube-api-access-ccxj7\") pod \"kube-flannel-ds-l6z4d\" (UID: \"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\") " pod="kube-flannel/kube-flannel-ds-l6z4d" Apr 20 15:46:23.187162 kubelet[3279]: I0420 
15:46:23.187054 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/940dc7aa-196a-43f7-98f6-c3eec3286736-kube-proxy\") pod \"kube-proxy-z642c\" (UID: \"940dc7aa-196a-43f7-98f6-c3eec3286736\") " pod="kube-system/kube-proxy-z642c"
Apr 20 15:46:23.187162 kubelet[3279]: I0420 15:46:23.187084 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4z7f\" (UniqueName: \"kubernetes.io/projected/940dc7aa-196a-43f7-98f6-c3eec3286736-kube-api-access-p4z7f\") pod \"kube-proxy-z642c\" (UID: \"940dc7aa-196a-43f7-98f6-c3eec3286736\") " pod="kube-system/kube-proxy-z642c"
Apr 20 15:46:23.187162 kubelet[3279]: I0420 15:46:23.187094 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f-run\") pod \"kube-flannel-ds-l6z4d\" (UID: \"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\") " pod="kube-flannel/kube-flannel-ds-l6z4d"
Apr 20 15:46:23.187162 kubelet[3279]: I0420 15:46:23.187107 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/940dc7aa-196a-43f7-98f6-c3eec3286736-xtables-lock\") pod \"kube-proxy-z642c\" (UID: \"940dc7aa-196a-43f7-98f6-c3eec3286736\") " pod="kube-system/kube-proxy-z642c"
Apr 20 15:46:23.187295 kubelet[3279]: I0420 15:46:23.187119 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/940dc7aa-196a-43f7-98f6-c3eec3286736-lib-modules\") pod \"kube-proxy-z642c\" (UID: \"940dc7aa-196a-43f7-98f6-c3eec3286736\") " pod="kube-system/kube-proxy-z642c"
Apr 20 15:46:23.447431 kubelet[3279]: E0420 15:46:23.445141 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:23.453003 kubelet[3279]: E0420 15:46:23.452698 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:23.456630 containerd[1635]: time="2026-04-20T15:46:23.456555056Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-l6z4d\" uid:\"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\" namespace:\"kube-flannel\""
Apr 20 15:46:23.457231 containerd[1635]: time="2026-04-20T15:46:23.456580172Z" level=info msg="RunPodSandbox for name:\"kube-proxy-z642c\" uid:\"940dc7aa-196a-43f7-98f6-c3eec3286736\" namespace:\"kube-system\""
Apr 20 15:46:23.579460 containerd[1635]: time="2026-04-20T15:46:23.578489425Z" level=info msg="connecting to shim 82d28a4becd25e2f32ee017aa21c87cbe846c35af266488823fabbec15a6eea8" address="unix:///run/containerd/s/7a1f95b2a4fcdaf13ed7101a43ea685e110bb038d1d6edc86c15eb015004b75e" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:46:23.590444 containerd[1635]: time="2026-04-20T15:46:23.589544666Z" level=info msg="connecting to shim 5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b" address="unix:///run/containerd/s/ca6b182ffa2c1f8f3678d82f745b0665f30a725fa057a293a2a005dbae434654" namespace=k8s.io protocol=ttrpc version=3
Apr 20 15:46:23.618937 systemd[1]: Started cri-containerd-82d28a4becd25e2f32ee017aa21c87cbe846c35af266488823fabbec15a6eea8.scope - libcontainer container 82d28a4becd25e2f32ee017aa21c87cbe846c35af266488823fabbec15a6eea8.
Apr 20 15:46:23.629299 systemd[1]: Started cri-containerd-5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b.scope - libcontainer container 5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b.
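The repeated kubelet dns.go "Nameserver limits exceeded" errors above arise because the glibc resolver honours at most three `nameserver` entries (MAXNS), so kubelet drops the rest and logs the line it actually applied. A minimal sketch of that trimming, assuming the behaviour described here and not kubelet's actual code:

```python
# Illustrative sketch: mirror kubelet's nameserver trimming (assumption:
# this reproduces the observable behaviour, not kubelet's source).
MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS)

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameservers that the resolver will actually use."""
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS]

conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(conf))  # the fourth server is omitted
```

With the four-entry example above, only the first three survive, matching the "applied nameserver line" in the log.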
Apr 20 15:46:23.663391 containerd[1635]: time="2026-04-20T15:46:23.663168920Z" level=info msg="RunPodSandbox for name:\"kube-proxy-z642c\" uid:\"940dc7aa-196a-43f7-98f6-c3eec3286736\" namespace:\"kube-system\" returns sandbox id \"82d28a4becd25e2f32ee017aa21c87cbe846c35af266488823fabbec15a6eea8\""
Apr 20 15:46:23.665181 kubelet[3279]: E0420 15:46:23.665150 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:23.710336 containerd[1635]: time="2026-04-20T15:46:23.709205877Z" level=info msg="CreateContainer within sandbox \"82d28a4becd25e2f32ee017aa21c87cbe846c35af266488823fabbec15a6eea8\" for container name:\"kube-proxy\""
Apr 20 15:46:23.737945 containerd[1635]: time="2026-04-20T15:46:23.737625502Z" level=info msg="RunPodSandbox for name:\"kube-flannel-ds-l6z4d\" uid:\"a3e2dbc1-b592-4542-803e-2c0b5b2f7a8f\" namespace:\"kube-flannel\" returns sandbox id \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\""
Apr 20 15:46:23.739418 kubelet[3279]: E0420 15:46:23.739324 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:23.740427 containerd[1635]: time="2026-04-20T15:46:23.740325109Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Apr 20 15:46:23.743458 containerd[1635]: time="2026-04-20T15:46:23.743270921Z" level=info msg="Container 8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:46:23.755610 containerd[1635]: time="2026-04-20T15:46:23.755539259Z" level=info msg="CreateContainer within sandbox \"82d28a4becd25e2f32ee017aa21c87cbe846c35af266488823fabbec15a6eea8\" for name:\"kube-proxy\" returns container id \"8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f\""
Apr 20 15:46:23.756695 containerd[1635]: time="2026-04-20T15:46:23.756637121Z" level=info msg="StartContainer for \"8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f\""
Apr 20 15:46:23.773406 containerd[1635]: time="2026-04-20T15:46:23.772007806Z" level=info msg="connecting to shim 8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f" address="unix:///run/containerd/s/7a1f95b2a4fcdaf13ed7101a43ea685e110bb038d1d6edc86c15eb015004b75e" protocol=ttrpc version=3
Apr 20 15:46:23.811775 systemd[1]: Started cri-containerd-8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f.scope - libcontainer container 8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f.
Apr 20 15:46:23.891330 containerd[1635]: time="2026-04-20T15:46:23.891057181Z" level=info msg="StartContainer for \"8c4f93bc699e51f77d219b49a57c2d162a6eebd50762eb2147969feb5333514f\" returns successfully"
Apr 20 15:46:24.573928 kubelet[3279]: E0420 15:46:24.573472 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:25.862870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555413348.mount: Deactivated successfully.
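The timestamps above let us read off the kube-proxy cold-start latency: the RunPodSandbox request is logged at 15:46:23.456580172 and "StartContainer ... returns successfully" at 15:46:23.891057181. A quick check of that interval (values copied from the log):

```python
from datetime import datetime

# Latency of the kube-proxy start path, read from the log timestamps above
# (RunPodSandbox request -> StartContainer returns successfully).
# Slice to 26 chars to truncate the 9-digit fraction to microseconds,
# which datetime.fromisoformat accepts on all supported Python versions.
t_sandbox_req = datetime.fromisoformat("2026-04-20T15:46:23.456580172"[:26])
t_running = datetime.fromisoformat("2026-04-20T15:46:23.891057181"[:26])
latency_ms = (t_running - t_sandbox_req).total_seconds() * 1000
print(f"{latency_ms:.1f} ms")  # roughly 434 ms from sandbox request to running
```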
Apr 20 15:46:26.020999 containerd[1635]: time="2026-04-20T15:46:26.020568817Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=0"
Apr 20 15:46:26.020999 containerd[1635]: time="2026-04-20T15:46:26.020639422Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:46:26.025814 containerd[1635]: time="2026-04-20T15:46:26.025582708Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:46:26.032924 containerd[1635]: time="2026-04-20T15:46:26.032775732Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:46:26.034306 containerd[1635]: time="2026-04-20T15:46:26.034089474Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 2.293719262s"
Apr 20 15:46:26.034306 containerd[1635]: time="2026-04-20T15:46:26.034210849Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Apr 20 15:46:26.096939 containerd[1635]: time="2026-04-20T15:46:26.096308115Z" level=info msg="CreateContainer within sandbox \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\" for container name:\"install-cni-plugin\""
Apr 20 15:46:26.127203 containerd[1635]: time="2026-04-20T15:46:26.127049493Z" level=info msg="Container a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:46:26.140759 containerd[1635]: time="2026-04-20T15:46:26.140418425Z" level=info msg="CreateContainer within sandbox \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\" for name:\"install-cni-plugin\" returns container id \"a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0\""
Apr 20 15:46:26.146846 containerd[1635]: time="2026-04-20T15:46:26.144168101Z" level=info msg="StartContainer for \"a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0\""
Apr 20 15:46:26.149081 containerd[1635]: time="2026-04-20T15:46:26.149018485Z" level=info msg="connecting to shim a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0" address="unix:///run/containerd/s/ca6b182ffa2c1f8f3678d82f745b0665f30a725fa057a293a2a005dbae434654" protocol=ttrpc version=3
Apr 20 15:46:26.185871 systemd[1]: Started cri-containerd-a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0.scope - libcontainer container a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0.
Apr 20 15:46:26.228211 systemd[1]: cri-containerd-a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0.scope: Deactivated successfully.
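The "Pulled image" entry above reports the flannel-cni-plugin image at 4856838 bytes fetched in 2.293719262s, which gives a back-of-envelope pull throughput (values taken directly from that log line):

```python
# Back-of-envelope throughput for the flannel-cni-plugin pull, using the
# size and duration from the "Pulled image" log line above.
size_bytes = 4_856_838
duration_s = 2.293719262
throughput_mib_s = size_bytes / duration_s / (1024 * 1024)
print(f"{throughput_mib_s:.2f} MiB/s")  # roughly 2 MiB/s
```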
Apr 20 15:46:26.231261 containerd[1635]: time="2026-04-20T15:46:26.230800106Z" level=info msg="StartContainer for \"a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0\" returns successfully"
Apr 20 15:46:26.231558 containerd[1635]: time="2026-04-20T15:46:26.231535750Z" level=info msg="received container exit event container_id:\"a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0\" id:\"a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0\" pid:4124 exited_at:{seconds:1776699986 nanos:231121232}"
Apr 20 15:46:26.607331 kubelet[3279]: E0420 15:46:26.607027 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:26.609896 containerd[1635]: time="2026-04-20T15:46:26.607815538Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Apr 20 15:46:26.637148 kubelet[3279]: I0420 15:46:26.637041 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z642c" podStartSLOduration=3.636874488 podStartE2EDuration="3.636874488s" podCreationTimestamp="2026-04-20 15:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 15:46:24.629815076 +0000 UTC m=+718.395757079" watchObservedRunningTime="2026-04-20 15:46:26.636874488 +0000 UTC m=+720.402816489"
Apr 20 15:46:26.666004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a16e0d7c25311ef9cc624f97ea472f00028306af143b780bfebc8c1e4a6513b0-rootfs.mount: Deactivated successfully.
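The pod_startup_latency_tracker line above reports podStartSLOduration=3.636874488 for kube-proxy-z642c; since both pulling timestamps are the zero value (no pull time to exclude), this equals the gap between podCreationTimestamp and watchObservedRunningTime. A quick arithmetic check with the logged values:

```python
from datetime import datetime, timezone

# Check: watchObservedRunningTime - podCreationTimestamp from the log above.
# datetime only holds microseconds, so 3.636874488s truncates to 3.636874s.
created = datetime(2026, 4, 20, 15, 46, 23, tzinfo=timezone.utc)
watch_running = datetime(2026, 4, 20, 15, 46, 26, 636874, tzinfo=timezone.utc)
slo_s = (watch_running - created).total_seconds()
print(slo_s)  # 3.636874, matching podStartSLOduration up to nanoseconds
```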
Apr 20 15:46:26.960647 containerd[1635]: time="2026-04-20T15:46:26.959306599Z" level=info msg="container event discarded" container=6303059002e4ee5e543356374da9d7d604693648ecb0d255f56fb9788408da67 type=CONTAINER_DELETED_EVENT
Apr 20 15:46:27.370676 kubelet[3279]: E0420 15:46:27.369572 3279 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 15:46:31.128842 containerd[1635]: time="2026-04-20T15:46:31.127889344Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:46:31.132272 containerd[1635]: time="2026-04-20T15:46:31.130823332Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=1, bytes read=9437184"
Apr 20 15:46:31.141469 containerd[1635]: time="2026-04-20T15:46:31.141253633Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:46:31.148785 containerd[1635]: time="2026-04-20T15:46:31.147600444Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 15:46:31.150941 containerd[1635]: time="2026-04-20T15:46:31.150892827Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 4.543039281s"
Apr 20 15:46:31.151072 containerd[1635]: time="2026-04-20T15:46:31.151018157Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 20 15:46:31.167237 containerd[1635]: time="2026-04-20T15:46:31.166982248Z" level=info msg="CreateContainer within sandbox \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\" for container name:\"install-cni\""
Apr 20 15:46:31.218914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906701841.mount: Deactivated successfully.
Apr 20 15:46:31.222448 containerd[1635]: time="2026-04-20T15:46:31.221791620Z" level=info msg="Container c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:46:31.285860 containerd[1635]: time="2026-04-20T15:46:31.285329548Z" level=info msg="CreateContainer within sandbox \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\" for name:\"install-cni\" returns container id \"c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4\""
Apr 20 15:46:31.290511 containerd[1635]: time="2026-04-20T15:46:31.290433158Z" level=info msg="StartContainer for \"c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4\""
Apr 20 15:46:31.309051 containerd[1635]: time="2026-04-20T15:46:31.308568457Z" level=info msg="connecting to shim c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4" address="unix:///run/containerd/s/ca6b182ffa2c1f8f3678d82f745b0665f30a725fa057a293a2a005dbae434654" protocol=ttrpc version=3
Apr 20 15:46:31.346542 systemd[1]: Started cri-containerd-c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4.scope - libcontainer container c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4.
Apr 20 15:46:31.444051 systemd[1]: cri-containerd-c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4.scope: Deactivated successfully.
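The install-cni container started above is the flannel init step that drops a CNI network config into /etc/cni/net.d, which is what clears the earlier "cni plugin not initialized" condition. A hedged sketch of the kind of conflist it writes (the exact contents and filename here are illustrative assumptions based on flannel's stock manifest, and the target path is redirected to /tmp):

```python
import json
from pathlib import Path

# Illustrative sketch (assumption: mirrors flannel's stock conflist; the
# real file lands at /etc/cni/net.d/10-flannel.conflist, written by the
# install-cni init container).
conflist = {
    "name": "cbr0",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "type": "flannel",
            "delegate": {"hairpinMode": True, "isDefaultGateway": True},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
target = Path("/tmp/10-flannel.conflist")  # illustrative path, not /etc/cni/net.d
target.write_text(json.dumps(conflist, indent=2))
print(target)
```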
Apr 20 15:46:31.494155 containerd[1635]: time="2026-04-20T15:46:31.493477984Z" level=info msg="received container exit event container_id:\"c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4\" id:\"c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4\" pid:4198 exited_at:{seconds:1776699991 nanos:458896634}"
Apr 20 15:46:31.589055 containerd[1635]: time="2026-04-20T15:46:31.588858611Z" level=info msg="StartContainer for \"c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4\" returns successfully"
Apr 20 15:46:31.627763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f036b75f59c9637c33c94586c48704dd7b9b6603e9e359968266acbaa571d4-rootfs.mount: Deactivated successfully.
Apr 20 15:46:31.679333 kubelet[3279]: E0420 15:46:31.678855 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:31.698784 containerd[1635]: time="2026-04-20T15:46:31.695581927Z" level=info msg="CreateContainer within sandbox \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\" for container name:\"kube-flannel\""
Apr 20 15:46:31.726315 containerd[1635]: time="2026-04-20T15:46:31.725944215Z" level=info msg="Container 7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680: CDI devices from CRI Config.CDIDevices: []"
Apr 20 15:46:31.759603 containerd[1635]: time="2026-04-20T15:46:31.759174600Z" level=info msg="CreateContainer within sandbox \"5d48d9505405286117eaa2fe5107f95c26a1def8be38ed1512ff16c4e8a6a12b\" for name:\"kube-flannel\" returns container id \"7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680\""
Apr 20 15:46:31.762298 containerd[1635]: time="2026-04-20T15:46:31.762242536Z" level=info msg="StartContainer for \"7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680\""
Apr 20 15:46:31.764523 containerd[1635]: time="2026-04-20T15:46:31.764467117Z" level=info msg="connecting to shim 7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680" address="unix:///run/containerd/s/ca6b182ffa2c1f8f3678d82f745b0665f30a725fa057a293a2a005dbae434654" protocol=ttrpc version=3
Apr 20 15:46:31.772504 kubelet[3279]: I0420 15:46:31.771922 3279 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 20 15:46:31.774794 containerd[1635]: time="2026-04-20T15:46:31.774647215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 20 15:46:31.775793 kubelet[3279]: I0420 15:46:31.775133 3279 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 20 15:46:31.818061 systemd[1]: Started cri-containerd-7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680.scope - libcontainer container 7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680.
Apr 20 15:46:32.071171 containerd[1635]: time="2026-04-20T15:46:32.070598328Z" level=info msg="StartContainer for \"7838b4188f106ec542064fbd1b68a87af95800cc3af13b8b92f3a816c8e12680\" returns successfully"
Apr 20 15:46:32.450064 containerd[1635]: time="2026-04-20T15:46:32.449136509Z" level=info msg="container event discarded" container=4e08dc43ac90f25562c59449b4869228c986ee25bb27e1161c87341514179b9a type=CONTAINER_CREATED_EVENT
Apr 20 15:46:32.824075 kubelet[3279]: E0420 15:46:32.822755 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 15:46:33.495313 systemd-networkd[1435]: flannel.1: Link UP
Apr 20 15:46:33.495851 systemd-networkd[1435]: flannel.1: Gained carrier
Apr 20 15:46:33.829695 kubelet[3279]: E0420 15:46:33.828516 3279 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
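The "Updating Pod CIDR" entries above show the node being assigned PodCIDR 192.168.0.0/24, after which flannel brings up the flannel.1 VXLAN device for cross-node pod traffic. A small check of what that allocation gives this node (values from the log; the usable-host count assumes the conventional network/broadcast exclusion):

```python
import ipaddress

# PodCIDR assigned to this node in the kubelet log above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")
print(pod_cidr.num_addresses)          # 256 addresses in the /24
print(sum(1 for _ in pod_cidr.hosts()))  # 254 usable host addresses
```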