Apr 22 23:48:18.909845 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 22 21:57:11 -00 2026
Apr 22 23:48:18.909871 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec
Apr 22 23:48:18.909882 kernel: BIOS-provided physical RAM map:
Apr 22 23:48:18.909896 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 22 23:48:18.909903 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 22 23:48:18.909910 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 22 23:48:18.909918 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 22 23:48:18.909925 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 22 23:48:18.909996 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 22 23:48:18.910004 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 22 23:48:18.910012 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 22 23:48:18.910019 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 22 23:48:18.910030 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 22 23:48:18.910037 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 22 23:48:18.910046 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 22 23:48:18.910054 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 22 23:48:18.910088 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 22 23:48:18.910098 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 22 23:48:18.910107 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 22 23:48:18.910115 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 22 23:48:18.910123 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 22 23:48:18.910131 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 22 23:48:18.910140 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 22 23:48:18.910148 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 22 23:48:18.910157 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 22 23:48:18.910165 kernel: NX (Execute Disable) protection: active
Apr 22 23:48:18.910172 kernel: APIC: Static calls initialized
Apr 22 23:48:18.910183 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 22 23:48:18.910191 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 22 23:48:18.910198 kernel: extended physical RAM map:
Apr 22 23:48:18.910206 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 22 23:48:18.910214 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 22 23:48:18.910221 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 22 23:48:18.910229 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 22 23:48:18.910236 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 22 23:48:18.910244 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 22 23:48:18.910251 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 22 23:48:18.910754 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 22 23:48:18.910793 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 22 23:48:18.910804 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 22 23:48:18.910830 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 22 23:48:18.910839 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 22 23:48:18.910850 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 22 23:48:18.910858 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 22 23:48:18.910866 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 22 23:48:18.910874 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 22 23:48:18.910882 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 22 23:48:18.910890 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 22 23:48:18.910898 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 22 23:48:18.910907 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 22 23:48:18.910916 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 22 23:48:18.910929 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 22 23:48:18.910939 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 22 23:48:18.910949 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 22 23:48:18.910958 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 22 23:48:18.910992 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 22 23:48:18.911001 kernel: efi: EFI v2.7 by EDK II
Apr 22 23:48:18.911009 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 22 23:48:18.911061 kernel: random: crng init done
Apr 22 23:48:18.911071 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 22 23:48:18.911080 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 22 23:48:18.911126 kernel: secureboot: Secure boot disabled
Apr 22 23:48:18.911157 kernel: SMBIOS 2.8 present.
Apr 22 23:48:18.911166 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 22 23:48:18.911175 kernel: DMI: Memory slots populated: 1/1
Apr 22 23:48:18.911183 kernel: Hypervisor detected: KVM
Apr 22 23:48:18.911191 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 22 23:48:18.911199 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 22 23:48:18.911208 kernel: kvm-clock: using sched offset of 28677970869 cycles
Apr 22 23:48:18.911217 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 22 23:48:18.911226 kernel: tsc: Detected 2793.438 MHz processor
Apr 22 23:48:18.911241 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 22 23:48:18.911251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 22 23:48:18.911315 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 22 23:48:18.911322 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 22 23:48:18.911328 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 22 23:48:18.911334 kernel: Using GB pages for direct mapping
Apr 22 23:48:18.911340 kernel: ACPI: Early table checksum verification disabled
Apr 22 23:48:18.911366 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 22 23:48:18.911375 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 22 23:48:18.911382 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:48:18.911388 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:48:18.911394 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 22 23:48:18.911400 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:48:18.911406 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:48:18.911412 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:48:18.911420 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 22 23:48:18.911426 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 22 23:48:18.911432 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 22 23:48:18.911438 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 22 23:48:18.911444 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 22 23:48:18.911450 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 22 23:48:18.911455 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 22 23:48:18.911463 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 22 23:48:18.911469 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 22 23:48:18.911477 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 22 23:48:18.911487 kernel: No NUMA configuration found
Apr 22 23:48:18.911496 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 22 23:48:18.911504 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 22 23:48:18.911512 kernel: Zone ranges:
Apr 22 23:48:18.911523 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 22 23:48:18.911532 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 22 23:48:18.911540 kernel: Normal empty
Apr 22 23:48:18.911548 kernel: Device empty
Apr 22 23:48:18.911556 kernel: Movable zone start for each node
Apr 22 23:48:18.911565 kernel: Early memory node ranges
Apr 22 23:48:18.911573 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 22 23:48:18.911582 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 22 23:48:18.911593 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 22 23:48:18.911763 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 22 23:48:18.911773 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 22 23:48:18.911781 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 22 23:48:18.911790 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 22 23:48:18.911799 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 22 23:48:18.911807 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 22 23:48:18.911819 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 22 23:48:18.911828 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 22 23:48:18.911990 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 22 23:48:18.912010 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 22 23:48:18.912022 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 22 23:48:18.912031 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 22 23:48:18.912040 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 22 23:48:18.912048 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 22 23:48:18.912057 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 22 23:48:18.912069 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 22 23:48:18.912078 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 22 23:48:18.912087 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 22 23:48:18.912096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 22 23:48:18.912108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 22 23:48:18.912117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 22 23:48:18.912126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 22 23:48:18.912135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 22 23:48:18.912144 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 22 23:48:18.912153 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 22 23:48:18.912162 kernel: TSC deadline timer available
Apr 22 23:48:18.912174 kernel: CPU topo: Max. logical packages: 1
Apr 22 23:48:18.912185 kernel: CPU topo: Max. logical dies: 1
Apr 22 23:48:18.912196 kernel: CPU topo: Max. dies per package: 1
Apr 22 23:48:18.912205 kernel: CPU topo: Max. threads per core: 1
Apr 22 23:48:18.912217 kernel: CPU topo: Num. cores per package: 4
Apr 22 23:48:18.912226 kernel: CPU topo: Num. threads per package: 4
Apr 22 23:48:18.912237 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 22 23:48:18.912246 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 22 23:48:18.913504 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 22 23:48:18.913651 kernel: kvm-guest: setup PV sched yield
Apr 22 23:48:18.913696 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 22 23:48:18.913707 kernel: Booting paravirtualized kernel on KVM
Apr 22 23:48:18.913718 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 22 23:48:18.913729 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 22 23:48:18.913739 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 22 23:48:18.913767 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 22 23:48:18.913777 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 22 23:48:18.913787 kernel: kvm-guest: PV spinlocks enabled
Apr 22 23:48:18.913797 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 22 23:48:18.913811 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec
Apr 22 23:48:18.913975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 22 23:48:18.913991 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 22 23:48:18.914001 kernel: Fallback order for Node 0: 0
Apr 22 23:48:18.914013 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 22 23:48:18.914023 kernel: Policy zone: DMA32
Apr 22 23:48:18.914034 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 22 23:48:18.914045 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 22 23:48:18.914056 kernel: ftrace: allocating 40157 entries in 157 pages
Apr 22 23:48:18.914066 kernel: ftrace: allocated 157 pages with 5 groups
Apr 22 23:48:18.914079 kernel: Dynamic Preempt: voluntary
Apr 22 23:48:18.914090 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 22 23:48:18.914101 kernel: rcu: RCU event tracing is enabled.
Apr 22 23:48:18.914112 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 22 23:48:18.914122 kernel: Trampoline variant of Tasks RCU enabled.
Apr 22 23:48:18.914133 kernel: Rude variant of Tasks RCU enabled.
Apr 22 23:48:18.914143 kernel: Tracing variant of Tasks RCU enabled.
Apr 22 23:48:18.914155 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 22 23:48:18.914165 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 22 23:48:18.914175 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:48:18.914185 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:48:18.914217 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 22 23:48:18.914227 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 22 23:48:18.914238 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 22 23:48:18.914250 kernel: Console: colour dummy device 80x25
Apr 22 23:48:18.914308 kernel: printk: legacy console [ttyS0] enabled
Apr 22 23:48:18.914318 kernel: ACPI: Core revision 20240827
Apr 22 23:48:18.914328 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 22 23:48:18.914337 kernel: APIC: Switch to symmetric I/O mode setup
Apr 22 23:48:18.914374 kernel: x2apic enabled
Apr 22 23:48:18.914385 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 22 23:48:18.914396 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 22 23:48:18.914409 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 22 23:48:18.914418 kernel: kvm-guest: setup PV IPIs
Apr 22 23:48:18.914427 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 22 23:48:18.914436 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 22 23:48:18.914445 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 22 23:48:18.914455 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 22 23:48:18.914464 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 22 23:48:18.915036 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 22 23:48:18.915047 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 22 23:48:18.915057 kernel: Spectre V2 : Mitigation: Retpolines
Apr 22 23:48:18.915066 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 22 23:48:18.915076 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 22 23:48:18.915085 kernel: RETBleed: Vulnerable
Apr 22 23:48:18.915104 kernel: Speculative Store Bypass: Vulnerable
Apr 22 23:48:18.915115 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 22 23:48:18.915125 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 22 23:48:18.915162 kernel: active return thunk: its_return_thunk
Apr 22 23:48:18.915173 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 22 23:48:18.915183 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 22 23:48:18.915193 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 22 23:48:18.915204 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 22 23:48:18.915217 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 22 23:48:18.915228 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 22 23:48:18.915238 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 22 23:48:18.915249 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 22 23:48:18.915309 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 22 23:48:18.915319 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 22 23:48:18.915328 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 22 23:48:18.915342 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 22 23:48:18.915651 kernel: Freeing SMP alternatives memory: 32K
Apr 22 23:48:18.915694 kernel: pid_max: default: 32768 minimum: 301
Apr 22 23:48:18.915704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 22 23:48:18.915713 kernel: landlock: Up and running.
Apr 22 23:48:18.915722 kernel: SELinux: Initializing.
Apr 22 23:48:18.915731 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 22 23:48:18.915756 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 22 23:48:18.915766 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 22 23:48:18.915777 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 22 23:48:18.915789 kernel: signal: max sigframe size: 3632
Apr 22 23:48:18.915799 kernel: rcu: Hierarchical SRCU implementation.
Apr 22 23:48:18.915810 kernel: rcu: Max phase no-delay instances is 400.
Apr 22 23:48:18.915822 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 22 23:48:18.915835 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 22 23:48:18.915846 kernel: smp: Bringing up secondary CPUs ...
Apr 22 23:48:18.915855 kernel: smpboot: x86: Booting SMP configuration:
Apr 22 23:48:18.915864 kernel: .... node #0, CPUs: #1 #2 #3
Apr 22 23:48:18.915873 kernel: smp: Brought up 1 node, 4 CPUs
Apr 22 23:48:18.915881 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 22 23:48:18.915891 kernel: Memory: 2399272K/2565800K available (14336K kernel code, 2453K rwdata, 31656K rodata, 15552K init, 2472K bss, 160636K reserved, 0K cma-reserved)
Apr 22 23:48:18.915903 kernel: devtmpfs: initialized
Apr 22 23:48:18.915912 kernel: x86/mm: Memory block size: 128MB
Apr 22 23:48:18.915921 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 22 23:48:18.915931 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 22 23:48:18.915940 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 22 23:48:18.915949 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 22 23:48:18.915960 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 22 23:48:18.915974 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 22 23:48:18.915986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 22 23:48:18.915998 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 22 23:48:18.916009 kernel: pinctrl core: initialized pinctrl subsystem
Apr 22 23:48:18.916019 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 22 23:48:18.916030 kernel: audit: initializing netlink subsys (disabled)
Apr 22 23:48:18.916040 kernel: audit: type=2000 audit(1776901687.985:1): state=initialized audit_enabled=0 res=1
Apr 22 23:48:18.916054 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 22 23:48:18.916064 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 22 23:48:18.916075 kernel: cpuidle: using governor menu
Apr 22 23:48:18.916086 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 22 23:48:18.916096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 22 23:48:18.916107 kernel: dca service started, version 1.12.1
Apr 22 23:48:18.916117 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 22 23:48:18.916128 kernel: PCI: Using configuration type 1 for base access
Apr 22 23:48:18.916139 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 22 23:48:18.916149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 22 23:48:18.916159 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 22 23:48:18.916169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 22 23:48:18.916179 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 22 23:48:18.916188 kernel: ACPI: Added _OSI(Module Device)
Apr 22 23:48:18.916201 kernel: ACPI: Added _OSI(Processor Device)
Apr 22 23:48:18.916211 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 22 23:48:18.916221 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 22 23:48:18.916232 kernel: ACPI: Interpreter enabled
Apr 22 23:48:18.916241 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 22 23:48:18.916251 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 22 23:48:18.916314 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 22 23:48:18.916327 kernel: PCI: Using E820 reservations for host bridge windows
Apr 22 23:48:18.916337 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 22 23:48:18.916372 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 22 23:48:18.916946 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 22 23:48:18.917128 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 22 23:48:18.917250 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 22 23:48:18.917327 kernel: PCI host bridge to bus 0000:00
Apr 22 23:48:18.917495 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 22 23:48:18.917609 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 22 23:48:18.917718 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 22 23:48:18.917826 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 22 23:48:18.917967 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 22 23:48:18.918078 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 22 23:48:18.918231 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 22 23:48:18.918460 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 22 23:48:18.918595 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 22 23:48:18.918719 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 22 23:48:18.918852 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 22 23:48:18.918978 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 22 23:48:18.919088 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 22 23:48:18.919623 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 22 23:48:18.919930 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 22 23:48:18.920053 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 22 23:48:18.921318 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 22 23:48:18.921499 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 22 23:48:18.921620 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 22 23:48:18.921738 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 22 23:48:18.921856 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 22 23:48:18.921982 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 22 23:48:18.922112 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 22 23:48:18.922227 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 22 23:48:18.922488 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 22 23:48:18.922610 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 22 23:48:18.922779 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 22 23:48:18.922912 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 22 23:48:18.923101 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 22 23:48:18.923217 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 22 23:48:18.923428 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 22 23:48:18.923563 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 22 23:48:18.923686 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 22 23:48:18.923704 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 22 23:48:18.923713 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 22 23:48:18.923724 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 22 23:48:18.923734 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 22 23:48:18.923743 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 22 23:48:18.923752 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 22 23:48:18.923764 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 22 23:48:18.923774 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 22 23:48:18.923783 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 22 23:48:18.923792 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 22 23:48:18.923801 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 22 23:48:18.923811 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 22 23:48:18.923820 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 22 23:48:18.923833 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 22 23:48:18.923844 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 22 23:48:18.923854 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 22 23:48:18.923865 kernel: iommu: Default domain type: Translated
Apr 22 23:48:18.923875 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 22 23:48:18.923885 kernel: efivars: Registered efivars operations
Apr 22 23:48:18.923895 kernel: PCI: Using ACPI for IRQ routing
Apr 22 23:48:18.923906 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 22 23:48:18.923919 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 22 23:48:18.923928 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 22 23:48:18.923938 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 22 23:48:18.923948 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 22 23:48:18.923957 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 22 23:48:18.923967 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 22 23:48:18.923977 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 22 23:48:18.923989 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 22 23:48:18.924130 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 22 23:48:18.924823 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 22 23:48:18.924960 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 22 23:48:18.924973 kernel: vgaarb: loaded
Apr 22 23:48:18.924983 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 22 23:48:18.924999 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 22 23:48:18.925008 kernel: clocksource: Switched to clocksource kvm-clock
Apr 22 23:48:18.925018 kernel: VFS: Disk quotas dquot_6.6.0
Apr 22 23:48:18.925027 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 22 23:48:18.925038 kernel: pnp: PnP ACPI init
Apr 22 23:48:18.925217 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 22 23:48:18.925229 kernel: pnp: PnP ACPI: found 6 devices
Apr 22 23:48:18.925240 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 22 23:48:18.925324 kernel: NET: Registered PF_INET protocol family
Apr 22 23:48:18.925337 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 22 23:48:18.925374 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 22 23:48:18.925387 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 22 23:48:18.925397 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 22 23:48:18.925406 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 22 23:48:18.925420 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 22 23:48:18.925429 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 22 23:48:18.925439 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 22 23:48:18.925448 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 22 23:48:18.925458 kernel: NET: Registered PF_XDP protocol family
Apr 22 23:48:18.925595 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 22 23:48:18.925725 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 22 23:48:18.925845 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 22 23:48:18.925960 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 22 23:48:18.926043 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 22 23:48:18.926117 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 22 23:48:18.926190 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 22 23:48:18.926497 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 22 23:48:18.926516 kernel: PCI: CLS 0 bytes, default 64
Apr 22 23:48:18.926527 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 22 23:48:18.926538 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 22 23:48:18.926548 kernel: Initialise system trusted keyrings
Apr 22 23:48:18.926563 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 22 23:48:18.926572 kernel: Key type asymmetric registered
Apr 22 23:48:18.926582 kernel: Asymmetric key parser 'x509' registered
Apr 22 23:48:18.926591 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 22 23:48:18.926600 kernel: io scheduler mq-deadline registered
Apr 22 23:48:18.926610 kernel: io scheduler kyber registered
Apr 22 23:48:18.926619 kernel: io scheduler bfq registered
Apr 22 23:48:18.926632 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 22 23:48:18.926642 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 22 23:48:18.926652 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 22 23:48:18.926956 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 22 23:48:18.926965 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 22 23:48:18.926972 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 22 23:48:18.926979 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 22 23:48:18.926993 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 22 23:48:18.927000 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 22 23:48:18.927142 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 22 23:48:18.927152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 22 23:48:18.927229 kernel: rtc_cmos 00:04: registered as rtc0
Apr 22 23:48:18.927707 kernel: rtc_cmos 00:04: setting system clock to 2026-04-22T23:48:15 UTC (1776901695)
Apr 22 23:48:18.927787 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 22 23:48:18.927803 kernel: intel_pstate: CPU model not supported
Apr 22 23:48:18.927810 kernel: efifb: probing for efifb
Apr 22 23:48:18.927817 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 22 23:48:18.927845 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 22 23:48:18.927852 kernel: efifb: scrolling: redraw
Apr 22 23:48:18.927859 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 22 23:48:18.927866 kernel: Console: switching to colour frame buffer device 160x50
Apr 22 23:48:18.927875 kernel: fb0: EFI VGA frame buffer device
Apr 22 23:48:18.927881 kernel: pstore: Using crash dump compression: deflate
Apr 22 23:48:18.927889 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 22 23:48:18.927896 kernel: NET: Registered PF_INET6 protocol family
Apr 22 23:48:18.927902 kernel: Segment Routing with IPv6
Apr 22 23:48:18.927909 kernel: In-situ OAM (IOAM) with IPv6
Apr 22 23:48:18.927916 kernel: NET: Registered PF_PACKET protocol family
Apr 22 23:48:18.927926 kernel: Key type dns_resolver registered
Apr 22 23:48:18.927933 kernel: IPI shorthand broadcast: enabled
Apr 22 23:48:18.927941 kernel: sched_clock: Marking stable (7774016881, 3901596613)->(13308596413, -1632982919)
Apr 22 23:48:18.927947 kernel: registered taskstats version 1
Apr 22 23:48:18.927954 kernel: Loading compiled-in X.509 certificates
Apr 22 23:48:18.927961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 0793482f0b1477a4dee00a55cce942e30dec635a'
Apr 22 23:48:18.927968 kernel: Demotion targets for Node 0: null Apr 22 23:48:18.927976 kernel: Key type .fscrypt registered Apr 22 23:48:18.927983 kernel: Key type fscrypt-provisioning registered Apr 22 23:48:18.927989 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 22 23:48:18.927996 kernel: ima: Allocated hash algorithm: sha1 Apr 22 23:48:18.928004 kernel: ima: No architecture policies found Apr 22 23:48:18.928010 kernel: clk: Disabling unused clocks Apr 22 23:48:18.928017 kernel: Freeing unused kernel image (initmem) memory: 15552K Apr 22 23:48:18.928026 kernel: Write protecting the kernel read-only data: 47104k Apr 22 23:48:18.928033 kernel: Freeing unused kernel image (rodata/data gap) memory: 1112K Apr 22 23:48:18.928040 kernel: Run /init as init process Apr 22 23:48:18.928047 kernel: with arguments: Apr 22 23:48:18.928054 kernel: /init Apr 22 23:48:18.928061 kernel: with environment: Apr 22 23:48:18.928067 kernel: HOME=/ Apr 22 23:48:18.928074 kernel: TERM=linux Apr 22 23:48:18.928083 kernel: hrtimer: interrupt took 7555153 ns Apr 22 23:48:18.928090 kernel: SCSI subsystem initialized Apr 22 23:48:18.928097 kernel: libata version 3.00 loaded. 
Apr 22 23:48:18.928690 kernel: ahci 0000:00:1f.2: version 3.0 Apr 22 23:48:18.928755 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 22 23:48:18.928880 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 22 23:48:18.928968 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 22 23:48:18.929047 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 22 23:48:18.929142 kernel: scsi host0: ahci Apr 22 23:48:18.929228 kernel: scsi host1: ahci Apr 22 23:48:18.929753 kernel: scsi host2: ahci Apr 22 23:48:18.931226 kernel: scsi host3: ahci Apr 22 23:48:18.931757 kernel: scsi host4: ahci Apr 22 23:48:18.931858 kernel: scsi host5: ahci Apr 22 23:48:18.931870 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 22 23:48:18.931878 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 22 23:48:18.931887 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 22 23:48:18.931895 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 22 23:48:18.931911 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 22 23:48:18.931920 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 22 23:48:18.931928 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 22 23:48:18.931937 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 22 23:48:18.931945 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 22 23:48:18.931975 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 22 23:48:18.931983 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 22 23:48:18.931994 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 22 23:48:18.932002 kernel: ata3.00: LPM support broken, forcing max_power Apr 22 23:48:18.932011 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 22 23:48:18.932019 
kernel: ata3.00: applying bridge limits Apr 22 23:48:18.932027 kernel: ata3.00: LPM support broken, forcing max_power Apr 22 23:48:18.932035 kernel: ata3.00: configured for UDMA/100 Apr 22 23:48:18.936804 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 22 23:48:18.937095 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 22 23:48:18.937333 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 22 23:48:18.937927 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 22 23:48:18.937978 kernel: GPT:16515071 != 27000831 Apr 22 23:48:18.937990 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 22 23:48:18.938001 kernel: GPT:16515071 != 27000831 Apr 22 23:48:18.938023 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 22 23:48:18.938033 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 22 23:48:18.939016 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 22 23:48:18.939072 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 22 23:48:18.939213 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 22 23:48:18.939228 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 22 23:48:18.939881 kernel: device-mapper: uevent: version 1.0.3 Apr 22 23:48:18.939902 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 22 23:48:18.939945 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 22 23:48:18.939957 kernel: raid6: avx512x4 gen() 33292 MB/s Apr 22 23:48:18.939968 kernel: raid6: avx512x2 gen() 20144 MB/s Apr 22 23:48:18.939978 kernel: raid6: avx512x1 gen() 33036 MB/s Apr 22 23:48:18.939988 kernel: raid6: avx2x4 gen() 22483 MB/s Apr 22 23:48:18.939998 kernel: raid6: avx2x2 gen() 18968 MB/s Apr 22 23:48:18.940012 kernel: raid6: avx2x1 gen() 13681 MB/s Apr 22 23:48:18.940021 kernel: raid6: using algorithm avx512x4 gen() 33292 MB/s Apr 22 23:48:18.940031 kernel: raid6: .... xor() 8392 MB/s, rmw enabled Apr 22 23:48:18.940041 kernel: raid6: using avx512x2 recovery algorithm Apr 22 23:48:18.940051 kernel: xor: automatically using best checksumming function avx Apr 22 23:48:18.940062 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 22 23:48:18.940072 kernel: BTRFS: device fsid 3ae7ba34-f7bd-4b4e-97e5-7ce72707b9fd devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (182) Apr 22 23:48:18.940085 kernel: BTRFS info (device dm-0): first mount of filesystem 3ae7ba34-f7bd-4b4e-97e5-7ce72707b9fd Apr 22 23:48:18.940095 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:48:18.940105 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 22 23:48:18.940115 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 22 23:48:18.940124 kernel: loop: module loaded Apr 22 23:48:18.940134 kernel: loop0: detected capacity change from 0 to 100560 Apr 22 23:48:18.940144 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 22 23:48:18.940160 systemd[1]: Successfully made /usr/ read-only. 
Apr 22 23:48:18.940176 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 22 23:48:18.940187 systemd[1]: Detected virtualization kvm. Apr 22 23:48:18.940199 systemd[1]: Detected architecture x86-64. Apr 22 23:48:18.940211 systemd[1]: Running in initrd. Apr 22 23:48:18.940222 systemd[1]: No hostname configured, using default hostname. Apr 22 23:48:18.940236 systemd[1]: Hostname set to . Apr 22 23:48:18.940248 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 22 23:48:18.940561 systemd[1]: Queued start job for default target initrd.target. Apr 22 23:48:18.940671 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 22 23:48:18.940683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 23:48:18.940694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 23:48:18.940718 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 22 23:48:18.940758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 22 23:48:18.940769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 22 23:48:18.940780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 22 23:48:18.940792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 23:48:18.940808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Apr 22 23:48:18.940819 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 22 23:48:18.940831 systemd[1]: Reached target paths.target - Path Units. Apr 22 23:48:18.940842 systemd[1]: Reached target slices.target - Slice Units. Apr 22 23:48:18.940853 systemd[1]: Reached target swap.target - Swaps. Apr 22 23:48:18.940865 systemd[1]: Reached target timers.target - Timer Units. Apr 22 23:48:18.940876 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 23:48:18.940890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 23:48:18.940901 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:48:18.940913 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 22 23:48:18.940925 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 22 23:48:18.940937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 22 23:48:18.940948 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 22 23:48:18.940960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 23:48:18.940974 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 23:48:18.940987 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 22 23:48:18.941000 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 22 23:48:18.941011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 22 23:48:18.941022 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 22 23:48:18.941034 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). 
Apr 22 23:48:18.941045 systemd[1]: Starting systemd-fsck-usr.service... Apr 22 23:48:18.941059 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 22 23:48:18.941070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 22 23:48:18.941081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:48:18.941095 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 22 23:48:18.941106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 23:48:18.941333 systemd-journald[320]: Collecting audit messages is enabled. Apr 22 23:48:18.941395 systemd[1]: Finished systemd-fsck-usr.service. Apr 22 23:48:18.941408 kernel: audit: type=1130 audit(1776901698.911:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.941420 kernel: audit: type=1130 audit(1776901698.923:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.941431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 22 23:48:18.941443 systemd-journald[320]: Journal started Apr 22 23:48:18.941467 systemd-journald[320]: Runtime Journal (/run/log/journal/b9d2167ce9f54ae7af4f75cf0501e9aa) is 6M, max 48M, 42M free. Apr 22 23:48:18.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:18.945343 systemd[1]: Started systemd-journald.service - Journal Service. Apr 22 23:48:18.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.954344 kernel: audit: type=1130 audit(1776901698.948:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.956474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:18.968836 kernel: audit: type=1130 audit(1776901698.958:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:18.983689 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 22 23:48:18.973463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 22 23:48:18.990112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 23:48:19.001738 kernel: Bridge firewalling registered Apr 22 23:48:18.993745 systemd-modules-load[321]: Inserted module 'br_netfilter' Apr 22 23:48:19.006961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 22 23:48:19.021446 kernel: audit: type=1130 audit(1776901699.011:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 22 23:48:19.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.021504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 22 23:48:19.037952 kernel: audit: type=1130 audit(1776901699.025:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.042204 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 22 23:48:19.046867 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 22 23:48:19.048727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 23:48:19.078728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 23:48:19.103880 kernel: audit: type=1130 audit(1776901699.087:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.104045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 22 23:48:19.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.121490 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 22 23:48:19.131615 kernel: audit: type=1130 audit(1776901699.113:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.133614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 22 23:48:19.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.158597 kernel: audit: type=1130 audit(1776901699.146:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.159557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 23:48:19.178693 kernel: audit: type=1130 audit(1776901699.164:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.174000 audit: BPF prog-id=6 op=LOAD Apr 22 23:48:19.178748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 22 23:48:19.202132 dracut-cmdline[353]: dracut-109 Apr 22 23:48:19.245195 dracut-cmdline[353]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1111a64faf79e22c6b231a95ce03ff7308375557d63046382fb274ec481eaec Apr 22 23:48:19.320692 systemd-resolved[360]: Positive Trust Anchors: Apr 22 23:48:19.320726 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 23:48:19.320730 systemd-resolved[360]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 22 23:48:19.320763 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 23:48:19.386023 systemd-resolved[360]: Defaulting to hostname 'linux'. Apr 22 23:48:19.389031 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 22 23:48:19.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.395744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:48:19.594609 kernel: Loading iSCSI transport class v2.0-870. 
Apr 22 23:48:19.625381 kernel: iscsi: registered transport (tcp) Apr 22 23:48:19.675434 kernel: iscsi: registered transport (qla4xxx) Apr 22 23:48:19.675593 kernel: QLogic iSCSI HBA Driver Apr 22 23:48:19.725596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 22 23:48:19.768654 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 22 23:48:19.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.776711 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 22 23:48:19.928060 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 22 23:48:19.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:19.938527 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 22 23:48:19.943029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 22 23:48:19.994879 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 22 23:48:20.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.005000 audit: BPF prog-id=7 op=LOAD Apr 22 23:48:20.005000 audit: BPF prog-id=8 op=LOAD Apr 22 23:48:20.006579 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 23:48:20.059741 systemd-udevd[587]: Using default interface naming scheme 'v257'. 
Apr 22 23:48:20.075627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 23:48:20.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.084820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 22 23:48:20.141066 dracut-pre-trigger[643]: rd.md=0: removing MD RAID activation Apr 22 23:48:20.204836 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 22 23:48:20.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.207000 audit: BPF prog-id=9 op=LOAD Apr 22 23:48:20.209052 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 22 23:48:20.221969 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 23:48:20.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.259325 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 22 23:48:20.315692 systemd-networkd[722]: lo: Link UP Apr 22 23:48:20.316079 systemd-networkd[722]: lo: Gained carrier Apr 22 23:48:20.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.317627 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 22 23:48:20.318333 systemd[1]: Reached target network.target - Network. 
Apr 22 23:48:20.349250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:48:20.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.359735 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 22 23:48:20.438150 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 22 23:48:20.457952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 22 23:48:20.476780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 22 23:48:20.492855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 23:48:20.506462 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 22 23:48:20.516330 kernel: cryptd: max_cpu_qlen set to 1000 Apr 22 23:48:20.519994 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:48:20.520002 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 22 23:48:20.523811 systemd-networkd[722]: eth0: Link UP Apr 22 23:48:20.525159 systemd-networkd[722]: eth0: Gained carrier Apr 22 23:48:20.525171 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:48:20.548011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 22 23:48:20.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:20.548191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:20.553407 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 23:48:20.555962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:48:20.579235 disk-uuid[770]: Primary Header is updated. Apr 22 23:48:20.579235 disk-uuid[770]: Secondary Entries is updated. Apr 22 23:48:20.579235 disk-uuid[770]: Secondary Header is updated. Apr 22 23:48:20.563435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:48:20.578871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 22 23:48:20.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.579011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:20.587257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:48:20.662696 kernel: AES CTR mode by8 optimization enabled Apr 22 23:48:20.669650 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 22 23:48:20.703613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:20.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.901833 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Apr 22 23:48:20.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:20.906916 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 23:48:20.910112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 23:48:20.921096 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 22 23:48:20.938811 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 22 23:48:20.979788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 22 23:48:20.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:21.778930 disk-uuid[772]: Warning: The kernel is still using the old partition table. Apr 22 23:48:21.778930 disk-uuid[772]: The new table will be used at the next reboot or after you Apr 22 23:48:21.778930 disk-uuid[772]: run partprobe(8) or kpartx(8) Apr 22 23:48:21.778930 disk-uuid[772]: The operation has completed successfully. Apr 22 23:48:21.801885 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 22 23:48:21.802019 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 22 23:48:21.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:21.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:21.806124 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Apr 22 23:48:21.865411 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Apr 22 23:48:21.870528 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:21.870714 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:48:21.890801 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:48:21.890989 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:48:21.903327 kernel: BTRFS info (device vda6): last unmount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:21.904391 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 22 23:48:21.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:21.912561 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 22 23:48:22.103188 ignition[903]: Ignition 2.24.0 Apr 22 23:48:22.104045 ignition[903]: Stage: fetch-offline Apr 22 23:48:22.104220 ignition[903]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:22.104234 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:22.104420 ignition[903]: parsed url from cmdline: "" Apr 22 23:48:22.104425 ignition[903]: no config URL provided Apr 22 23:48:22.104518 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Apr 22 23:48:22.104530 ignition[903]: no config at "/usr/lib/ignition/user.ign" Apr 22 23:48:22.104578 ignition[903]: op(1): [started] loading QEMU firmware config module Apr 22 23:48:22.104583 ignition[903]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 22 23:48:22.206058 ignition[903]: op(1): [finished] loading QEMU firmware config module Apr 22 23:48:22.302461 ignition[903]: parsing config with SHA512: 55d891d1b767df210b76b235d380a95cbfdec19b45a8a750f3921e93c34e5686469be596d906a8169a2dacc6c80f5a663608b6c1ee413f3a7027216f287be7c7 Apr 22 23:48:22.313119 unknown[903]: fetched base config from "system" Apr 22 23:48:22.313154 unknown[903]: fetched user config from "qemu" Apr 22 23:48:22.321801 ignition[903]: fetch-offline: fetch-offline passed Apr 22 23:48:22.325125 ignition[903]: Ignition finished successfully Apr 22 23:48:22.334010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 23:48:22.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:22.335500 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 22 23:48:22.336881 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 22 23:48:22.382195 ignition[913]: Ignition 2.24.0 Apr 22 23:48:22.382683 ignition[913]: Stage: kargs Apr 22 23:48:22.382943 ignition[913]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:22.382954 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:22.385332 ignition[913]: kargs: kargs passed Apr 22 23:48:22.385548 ignition[913]: Ignition finished successfully Apr 22 23:48:22.398807 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 22 23:48:22.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:22.405615 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 22 23:48:22.433258 systemd-networkd[722]: eth0: Gained IPv6LL Apr 22 23:48:22.463801 ignition[921]: Ignition 2.24.0 Apr 22 23:48:22.463838 ignition[921]: Stage: disks Apr 22 23:48:22.463974 ignition[921]: no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:22.463980 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:22.476677 ignition[921]: disks: disks passed Apr 22 23:48:22.480154 ignition[921]: Ignition finished successfully Apr 22 23:48:22.488805 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 22 23:48:22.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:22.502247 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 22 23:48:22.509598 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 22 23:48:22.554027 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 22 23:48:22.555871 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 22 23:48:22.560713 systemd[1]: Reached target basic.target - Basic System. Apr 22 23:48:22.569517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 22 23:48:22.647125 systemd-fsck[930]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 22 23:48:22.656852 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 22 23:48:22.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:22.670805 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 22 23:48:22.936402 kernel: EXT4-fs (vda9): mounted filesystem acb26ad1-a3c4-45b5-95a2-dde9b0966d3b r/w with ordered data mode. Quota mode: none. Apr 22 23:48:22.937409 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 22 23:48:22.939949 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 22 23:48:22.947195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 23:48:22.950603 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 22 23:48:22.955216 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 22 23:48:22.955256 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 22 23:48:22.955339 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 23:48:22.984981 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (938) Apr 22 23:48:22.967020 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 22 23:48:22.992592 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:22.992616 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:48:22.976147 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 22 23:48:23.005823 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:48:23.005991 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:48:23.007313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 22 23:48:23.416197 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 22 23:48:23.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:23.434688 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 22 23:48:23.441792 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 22 23:48:23.465158 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 22 23:48:23.470920 kernel: BTRFS info (device vda6): last unmount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:23.514782 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 22 23:48:23.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:23.542104 ignition[1037]: INFO : Ignition 2.24.0 Apr 22 23:48:23.542104 ignition[1037]: INFO : Stage: mount Apr 22 23:48:23.550404 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:23.550404 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:23.550404 ignition[1037]: INFO : mount: mount passed Apr 22 23:48:23.550404 ignition[1037]: INFO : Ignition finished successfully Apr 22 23:48:23.564200 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 22 23:48:23.566465 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 22 23:48:23.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:23.939185 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 23:48:23.970601 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1047) Apr 22 23:48:23.978980 kernel: BTRFS info (device vda6): first mount of filesystem 2d4e8828-6ba6-458d-87f8-40fb1ce4470a Apr 22 23:48:23.979189 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 22 23:48:23.994139 kernel: BTRFS info (device vda6): turning on async discard Apr 22 23:48:23.994492 kernel: BTRFS info (device vda6): enabling free space tree Apr 22 23:48:23.995960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 22 23:48:24.039845 ignition[1064]: INFO : Ignition 2.24.0 Apr 22 23:48:24.039845 ignition[1064]: INFO : Stage: files Apr 22 23:48:24.045394 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:24.045394 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:24.045394 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Apr 22 23:48:24.055776 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 22 23:48:24.055776 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 22 23:48:24.068316 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 22 23:48:24.076097 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 22 23:48:24.082239 unknown[1064]: wrote ssh authorized keys file for user: core Apr 22 23:48:24.086626 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 22 23:48:24.086626 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 22 23:48:24.086626 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 22 23:48:24.160149 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 22 23:48:24.293603 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 22 23:48:24.293603 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:48:24.303468 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 22 23:48:24.434891 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 22 23:48:24.795782 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 22 23:48:24.795782 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 22 23:48:24.809018 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 22 23:48:24.814184 ignition[1064]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 22 23:48:24.879025 ignition[1064]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 23:48:24.898583 ignition[1064]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 23:48:24.904769 ignition[1064]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 22 23:48:24.904769 ignition[1064]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 22 23:48:24.904769 ignition[1064]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 22 23:48:24.904769 ignition[1064]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 22 23:48:24.904769 ignition[1064]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 22 23:48:24.904769 ignition[1064]: INFO : files: files passed Apr 22 23:48:24.904769 ignition[1064]: INFO : Ignition finished successfully Apr 22 23:48:24.953684 kernel: kauditd_printk_skb: 29 callbacks suppressed Apr 22 23:48:24.953720 kernel: audit: type=1130 audit(1776901704.945:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:24.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:24.939131 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 22 23:48:24.956633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 22 23:48:24.962109 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 22 23:48:24.983973 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 22 23:48:24.984108 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 22 23:48:24.987771 initrd-setup-root-after-ignition[1094]: grep: /sysroot/oem/oem-release: No such file or directory Apr 22 23:48:24.996461 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:48:24.996461 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:48:25.018041 kernel: audit: type=1130 audit(1776901704.996:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.018089 kernel: audit: type=1131 audit(1776901704.996:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:24.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:24.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.018189 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 23:48:25.022460 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 23:48:25.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.037365 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Apr 22 23:48:25.042881 kernel: audit: type=1130 audit(1776901705.031:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.049677 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 22 23:48:25.198114 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 22 23:48:25.202679 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 22 23:48:25.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.210806 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 22 23:48:25.230724 kernel: audit: type=1130 audit(1776901705.209:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.230802 kernel: audit: type=1131 audit(1776901705.209:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.235778 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 22 23:48:25.239251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 22 23:48:25.247058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Apr 22 23:48:25.378124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 23:48:25.396547 kernel: audit: type=1130 audit(1776901705.383:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.396718 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 22 23:48:25.460713 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Apr 22 23:48:25.462026 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:48:25.473763 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 23:48:25.478019 systemd[1]: Stopped target timers.target - Timer Units. Apr 22 23:48:25.487087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 22 23:48:25.511112 kernel: audit: type=1131 audit(1776901705.496:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.487332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 23:48:25.513916 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 22 23:48:25.519096 systemd[1]: Stopped target basic.target - Basic System. Apr 22 23:48:25.530218 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Apr 22 23:48:25.535770 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 23:48:25.545857 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 22 23:48:25.552830 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 22 23:48:25.562777 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 22 23:48:25.574201 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 23:48:25.578039 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 22 23:48:25.589669 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 22 23:48:25.590856 systemd[1]: Stopped target swap.target - Swaps. Apr 22 23:48:25.597561 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 22 23:48:25.597823 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 22 23:48:25.622586 kernel: audit: type=1131 audit(1776901705.605:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.608132 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 22 23:48:25.616668 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 23:48:25.642005 kernel: audit: type=1131 audit(1776901705.635:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:25.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.616824 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 22 23:48:25.622652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 23:48:25.624154 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 22 23:48:25.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.624359 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 22 23:48:25.645806 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 22 23:48:25.648970 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 23:48:25.657948 systemd[1]: Stopped target paths.target - Path Units. Apr 22 23:48:25.664049 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 22 23:48:25.671942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 23:48:25.675066 systemd[1]: Stopped target slices.target - Slice Units. Apr 22 23:48:25.687722 systemd[1]: Stopped target sockets.target - Socket Units. Apr 22 23:48:25.693851 systemd[1]: iscsid.socket: Deactivated successfully. Apr 22 23:48:25.695044 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 23:48:25.703850 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 22 23:48:25.704037 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Apr 22 23:48:25.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.707768 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Apr 22 23:48:25.707914 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:48:25.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.710067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 22 23:48:25.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.710846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 23:48:25.711142 systemd[1]: ignition-files.service: Deactivated successfully. Apr 22 23:48:25.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.711241 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 22 23:48:25.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.756758 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 22 23:48:25.757982 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Apr 22 23:48:25.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.758107 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 23:48:25.763062 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 22 23:48:25.769988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 22 23:48:25.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.770875 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 23:48:25.775423 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 22 23:48:25.775556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:48:25.821668 ignition[1121]: INFO : Ignition 2.24.0 Apr 22 23:48:25.821668 ignition[1121]: INFO : Stage: umount Apr 22 23:48:25.821668 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 23:48:25.821668 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 23:48:25.821668 ignition[1121]: INFO : umount: umount passed Apr 22 23:48:25.821668 ignition[1121]: INFO : Ignition finished successfully Apr 22 23:48:25.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:25.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.781944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 22 23:48:25.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.782037 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 23:48:25.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.801193 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 22 23:48:25.801400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 22 23:48:25.821934 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 22 23:48:25.822048 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 22 23:48:25.826689 systemd[1]: Stopped target network.target - Network. Apr 22 23:48:25.829174 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 22 23:48:25.829237 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 22 23:48:25.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.841183 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Apr 22 23:48:25.841327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 22 23:48:25.849400 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 22 23:48:25.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.849468 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 22 23:48:25.912000 audit: BPF prog-id=9 op=UNLOAD Apr 22 23:48:25.855583 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 22 23:48:25.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.855657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 22 23:48:25.862718 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 22 23:48:25.919000 audit: BPF prog-id=6 op=UNLOAD Apr 22 23:48:25.869651 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 22 23:48:25.884800 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 22 23:48:25.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.890728 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 22 23:48:25.890869 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 22 23:48:25.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.903120 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 22 23:48:25.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.904048 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 22 23:48:25.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.912614 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 22 23:48:25.912717 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 22 23:48:25.918577 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 22 23:48:25.922237 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 22 23:48:25.923750 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 22 23:48:25.931671 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 22 23:48:25.932788 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 22 23:48:26.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.947514 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 22 23:48:25.955423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 22 23:48:26.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:26.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:25.955520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 22 23:48:25.962093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 22 23:48:26.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.962163 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 22 23:48:25.965223 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 22 23:48:25.965336 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 22 23:48:26.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:25.983195 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 23:48:26.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:26.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:26.063215 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 22 23:48:26.063487 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 23:48:26.068235 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 22 23:48:26.068341 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 22 23:48:26.077167 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 22 23:48:26.077245 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 23:48:26.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:26.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:26.077939 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 22 23:48:26.078029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 22 23:48:26.087070 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 22 23:48:26.087132 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 22 23:48:26.093883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 22 23:48:26.093942 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 22 23:48:26.106240 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 22 23:48:26.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:26.113705 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 22 23:48:26.113982 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 22 23:48:26.118578 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 22 23:48:26.118632 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 23:48:26.123176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 22 23:48:26.123237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:26.154810 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 22 23:48:26.154951 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 22 23:48:26.181543 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 22 23:48:26.181710 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 22 23:48:26.186762 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 22 23:48:26.194918 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 22 23:48:26.261755 systemd[1]: Switching root. Apr 22 23:48:26.300643 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Apr 22 23:48:26.300803 systemd-journald[320]: Journal stopped Apr 22 23:48:28.407119 kernel: SELinux: policy capability network_peer_controls=1 Apr 22 23:48:28.407200 kernel: SELinux: policy capability open_perms=1 Apr 22 23:48:28.407220 kernel: SELinux: policy capability extended_socket_class=1 Apr 22 23:48:28.407345 kernel: SELinux: policy capability always_check_network=0 Apr 22 23:48:28.407720 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 22 23:48:28.407811 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 22 23:48:28.407843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 22 23:48:28.407922 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 22 23:48:28.407936 kernel: SELinux: policy capability userspace_initial_context=0 Apr 22 23:48:28.407954 systemd[1]: Successfully loaded SELinux policy in 74.217ms. Apr 22 23:48:28.407979 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.898ms. 
Apr 22 23:48:28.407994 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 22 23:48:28.408007 systemd[1]: Detected virtualization kvm. Apr 22 23:48:28.408021 systemd[1]: Detected architecture x86-64. Apr 22 23:48:28.408037 systemd[1]: Detected first boot. Apr 22 23:48:28.408051 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Apr 22 23:48:28.408066 zram_generator::config[1165]: No configuration found. Apr 22 23:48:28.408091 kernel: Guest personality initialized and is inactive Apr 22 23:48:28.408103 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 22 23:48:28.408115 kernel: Initialized host personality Apr 22 23:48:28.408130 kernel: NET: Registered PF_VSOCK protocol family Apr 22 23:48:28.408493 systemd[1]: Populated /etc with preset unit settings. Apr 22 23:48:28.408576 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 22 23:48:28.408589 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 22 23:48:28.408636 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 22 23:48:28.408658 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 22 23:48:28.408672 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 22 23:48:28.408689 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 22 23:48:28.408707 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 22 23:48:28.408719 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Apr 22 23:48:28.408733 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 22 23:48:28.408746 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 22 23:48:28.408758 systemd[1]: Created slice user.slice - User and Session Slice. Apr 22 23:48:28.408771 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 23:48:28.408790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 23:48:28.408813 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 22 23:48:28.408829 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 22 23:48:28.408843 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 22 23:48:28.408861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 22 23:48:28.408876 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 22 23:48:28.408890 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 23:48:28.418038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 22 23:48:28.418188 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 22 23:48:28.418206 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 22 23:48:28.418905 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 22 23:48:28.418957 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 22 23:48:28.418971 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 23:48:28.418984 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Apr 22 23:48:28.418998 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 22 23:48:28.419012 systemd[1]: Reached target slices.target - Slice Units. Apr 22 23:48:28.419024 systemd[1]: Reached target swap.target - Swaps. Apr 22 23:48:28.419042 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 22 23:48:28.419057 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 22 23:48:28.419069 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 22 23:48:28.419082 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 22 23:48:28.419094 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 22 23:48:28.419107 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 22 23:48:28.419119 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 22 23:48:28.419135 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 22 23:48:28.419151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 22 23:48:28.419166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 23:48:28.419181 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 22 23:48:28.419196 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 22 23:48:28.419210 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 22 23:48:28.419222 systemd[1]: Mounting media.mount - External Media Directory... Apr 22 23:48:28.419347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 22 23:48:28.419363 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Apr 22 23:48:28.419378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 22 23:48:28.419420 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 22 23:48:28.419435 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 22 23:48:28.419449 systemd[1]: Reached target machines.target - Containers. Apr 22 23:48:28.419461 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 22 23:48:28.419478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 23:48:28.419491 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 22 23:48:28.419506 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 22 23:48:28.419520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 23:48:28.419536 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 22 23:48:28.419550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 22 23:48:28.419564 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 22 23:48:28.419582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 22 23:48:28.419598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 22 23:48:28.419612 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 22 23:48:28.419627 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 22 23:48:28.419641 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Apr 22 23:48:28.419655 kernel: ACPI: bus type drm_connector registered Apr 22 23:48:28.419669 systemd[1]: Stopped systemd-fsck-usr.service. Apr 22 23:48:28.419686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 23:48:28.419701 kernel: fuse: init (API version 7.41) Apr 22 23:48:28.419716 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 22 23:48:28.419731 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 22 23:48:28.419746 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 22 23:48:28.419760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 22 23:48:28.419776 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 22 23:48:28.419791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 22 23:48:28.419807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 22 23:48:28.419854 systemd-journald[1241]: Collecting audit messages is enabled. Apr 22 23:48:28.419886 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 22 23:48:28.419900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 22 23:48:28.419914 systemd-journald[1241]: Journal started Apr 22 23:48:28.419938 systemd-journald[1241]: Runtime Journal (/run/log/journal/b9d2167ce9f54ae7af4f75cf0501e9aa) is 6M, max 48M, 42M free. 
Apr 22 23:48:27.917000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Apr 22 23:48:28.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.265000 audit: BPF prog-id=14 op=UNLOAD Apr 22 23:48:28.265000 audit: BPF prog-id=13 op=UNLOAD Apr 22 23:48:28.266000 audit: BPF prog-id=15 op=LOAD Apr 22 23:48:28.268000 audit: BPF prog-id=16 op=LOAD Apr 22 23:48:28.268000 audit: BPF prog-id=17 op=LOAD Apr 22 23:48:28.405000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 22 23:48:28.405000 audit[1241]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffff7657350 a2=4000 a3=0 items=0 ppid=1 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 22 23:48:28.405000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 22 23:48:27.564972 systemd[1]: Queued start job for default target multi-user.target. Apr 22 23:48:27.580913 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 22 23:48:27.583653 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 22 23:48:27.586095 systemd[1]: systemd-journald.service: Consumed 1.479s CPU time. Apr 22 23:48:28.436117 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 22 23:48:28.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.445842 systemd[1]: Mounted media.mount - External Media Directory. Apr 22 23:48:28.451789 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 22 23:48:28.457636 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 22 23:48:28.461722 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 22 23:48:28.465580 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 22 23:48:28.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.470739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 23:48:28.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.476641 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 22 23:48:28.476836 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 22 23:48:28.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:28.481786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 23:48:28.482834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 22 23:48:28.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.487957 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 22 23:48:28.488828 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 22 23:48:28.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.493724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 23:48:28.493886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 23:48:28.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:28.497659 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 22 23:48:28.497830 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 22 23:48:28.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.503103 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 23:48:28.504073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 22 23:48:28.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.510048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 22 23:48:28.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.514483 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 22 23:48:28.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 22 23:48:28.522202 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 22 23:48:28.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.530923 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 22 23:48:28.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.546957 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 23:48:28.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.565560 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 22 23:48:28.575460 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 22 23:48:28.581765 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 22 23:48:28.587751 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 22 23:48:28.593577 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 22 23:48:28.593658 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 22 23:48:28.598709 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Apr 22 23:48:28.606205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 23:48:28.606453 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Apr 22 23:48:28.608880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 22 23:48:28.615460 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 22 23:48:28.619203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 22 23:48:28.622256 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 22 23:48:28.626042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 22 23:48:28.628699 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 22 23:48:28.640026 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 22 23:48:28.646039 systemd-journald[1241]: Time spent on flushing to /var/log/journal/b9d2167ce9f54ae7af4f75cf0501e9aa is 92.934ms for 1194 entries. Apr 22 23:48:28.646039 systemd-journald[1241]: System Journal (/var/log/journal/b9d2167ce9f54ae7af4f75cf0501e9aa) is 8M, max 163.5M, 155.5M free. Apr 22 23:48:28.778126 systemd-journald[1241]: Received client request to flush runtime journal. Apr 22 23:48:28.778244 kernel: loop1: detected capacity change from 0 to 111560 Apr 22 23:48:28.778792 kernel: loop2: detected capacity change from 0 to 50784 Apr 22 23:48:28.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:28.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.657018 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 22 23:48:28.662629 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 22 23:48:28.666842 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 22 23:48:28.685703 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 22 23:48:28.690671 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 22 23:48:28.700682 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 22 23:48:28.739532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 22 23:48:28.788927 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 22 23:48:28.790046 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 22 23:48:28.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.800706 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 22 23:48:28.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.808544 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Apr 22 23:48:28.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.847000 audit: BPF prog-id=18 op=LOAD Apr 22 23:48:28.847000 audit: BPF prog-id=19 op=LOAD Apr 22 23:48:28.847000 audit: BPF prog-id=20 op=LOAD Apr 22 23:48:28.849000 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 22 23:48:28.854000 audit: BPF prog-id=21 op=LOAD Apr 22 23:48:28.856172 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 22 23:48:28.863379 kernel: loop3: detected capacity change from 0 to 228704 Apr 22 23:48:28.866542 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 23:48:28.881000 audit: BPF prog-id=22 op=LOAD Apr 22 23:48:28.882000 audit: BPF prog-id=23 op=LOAD Apr 22 23:48:28.884000 audit: BPF prog-id=24 op=LOAD Apr 22 23:48:28.886549 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 22 23:48:28.893000 audit: BPF prog-id=25 op=LOAD Apr 22 23:48:28.894000 audit: BPF prog-id=26 op=LOAD Apr 22 23:48:28.894000 audit: BPF prog-id=27 op=LOAD Apr 22 23:48:28.906044 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 22 23:48:28.922524 kernel: loop4: detected capacity change from 0 to 111560 Apr 22 23:48:28.931716 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Apr 22 23:48:28.937043 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Apr 22 23:48:28.948066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 23:48:28.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:28.954919 systemd-nsresourced[1309]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 22 23:48:28.957017 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Apr 22 23:48:28.962903 kernel: loop5: detected capacity change from 0 to 50784 Apr 22 23:48:28.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:28.986951 kernel: loop6: detected capacity change from 0 to 228704 Apr 22 23:48:28.999481 (sd-merge)[1313]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Apr 22 23:48:29.003669 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 22 23:48:29.004217 (sd-merge)[1313]: Merged extensions into '/usr'. Apr 22 23:48:29.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:29.012447 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Apr 22 23:48:29.012567 systemd[1]: Reloading... Apr 22 23:48:29.185326 zram_generator::config[1357]: No configuration found. Apr 22 23:48:29.183738 systemd-oomd[1305]: No swap; memory pressure usage will be degraded Apr 22 23:48:29.222616 systemd-resolved[1306]: Positive Trust Anchors: Apr 22 23:48:29.222658 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 23:48:29.222662 systemd-resolved[1306]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 22 23:48:29.222693 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 23:48:29.227079 systemd-resolved[1306]: Defaulting to hostname 'linux'. Apr 22 23:48:29.655543 systemd[1]: Reloading finished in 642 ms. Apr 22 23:48:29.705194 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 22 23:48:29.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:29.711192 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 22 23:48:29.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:29.716240 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 22 23:48:29.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:29.722427 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Apr 22 23:48:29.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:29.746842 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 22 23:48:29.779921 systemd[1]: Starting ensure-sysext.service... Apr 22 23:48:29.789095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 23:48:29.791000 audit: BPF prog-id=8 op=UNLOAD Apr 22 23:48:29.792000 audit: BPF prog-id=7 op=UNLOAD Apr 22 23:48:29.792000 audit: BPF prog-id=28 op=LOAD Apr 22 23:48:29.804000 audit: BPF prog-id=29 op=LOAD Apr 22 23:48:29.806559 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 23:48:29.821000 audit: BPF prog-id=30 op=LOAD Apr 22 23:48:29.821000 audit: BPF prog-id=22 op=UNLOAD Apr 22 23:48:29.825000 audit: BPF prog-id=31 op=LOAD Apr 22 23:48:29.825000 audit: BPF prog-id=32 op=LOAD Apr 22 23:48:29.825000 audit: BPF prog-id=23 op=UNLOAD Apr 22 23:48:29.825000 audit: BPF prog-id=24 op=UNLOAD Apr 22 23:48:29.854000 audit: BPF prog-id=33 op=LOAD Apr 22 23:48:29.854000 audit: BPF prog-id=18 op=UNLOAD Apr 22 23:48:29.854000 audit: BPF prog-id=34 op=LOAD Apr 22 23:48:29.854000 audit: BPF prog-id=35 op=LOAD Apr 22 23:48:29.854000 audit: BPF prog-id=19 op=UNLOAD Apr 22 23:48:29.854000 audit: BPF prog-id=20 op=UNLOAD Apr 22 23:48:29.858000 audit: BPF prog-id=36 op=LOAD Apr 22 23:48:29.858000 audit: BPF prog-id=15 op=UNLOAD Apr 22 23:48:29.858000 audit: BPF prog-id=37 op=LOAD Apr 22 23:48:29.858000 audit: BPF prog-id=38 op=LOAD Apr 22 23:48:29.859000 audit: BPF prog-id=16 op=UNLOAD Apr 22 23:48:29.859000 audit: BPF prog-id=17 op=UNLOAD Apr 22 23:48:29.863000 audit: BPF prog-id=39 op=LOAD Apr 22 23:48:29.870155 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path 
"/var/lib/nfs/sm", ignoring. Apr 22 23:48:29.870211 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 22 23:48:29.870482 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 22 23:48:29.871205 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Apr 22 23:48:29.871317 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Apr 22 23:48:29.871000 audit: BPF prog-id=25 op=UNLOAD Apr 22 23:48:29.871000 audit: BPF prog-id=40 op=LOAD Apr 22 23:48:29.871000 audit: BPF prog-id=41 op=LOAD Apr 22 23:48:29.871000 audit: BPF prog-id=26 op=UNLOAD Apr 22 23:48:29.871000 audit: BPF prog-id=27 op=UNLOAD Apr 22 23:48:29.874000 audit: BPF prog-id=42 op=LOAD Apr 22 23:48:29.874000 audit: BPF prog-id=21 op=UNLOAD Apr 22 23:48:29.878919 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Apr 22 23:48:29.879174 systemd-tmpfiles[1394]: Skipping /boot Apr 22 23:48:29.884058 systemd[1]: Reload requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)... Apr 22 23:48:29.885143 systemd[1]: Reloading... Apr 22 23:48:29.892049 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Apr 22 23:48:29.892087 systemd-tmpfiles[1394]: Skipping /boot Apr 22 23:48:29.919970 systemd-udevd[1395]: Using default interface naming scheme 'v257'. Apr 22 23:48:30.001376 zram_generator::config[1435]: No configuration found. 
Apr 22 23:48:30.176595 kernel: mousedev: PS/2 mouse device common for all mice Apr 22 23:48:30.201531 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 22 23:48:30.209332 kernel: ACPI: button: Power Button [PWRF] Apr 22 23:48:30.223101 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 22 23:48:30.223549 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 22 23:48:30.223734 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 22 23:48:30.456008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 23:48:30.459954 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 22 23:48:30.460149 systemd[1]: Reloading finished in 573 ms. Apr 22 23:48:30.475751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 23:48:30.497839 kernel: kauditd_printk_skb: 132 callbacks suppressed Apr 22 23:48:30.497972 kernel: audit: type=1130 audit(1776901710.483:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.498073 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 23:48:30.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:30.518152 kernel: audit: type=1130 audit(1776901710.504:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.525000 audit: BPF prog-id=43 op=LOAD Apr 22 23:48:30.531804 kernel: audit: type=1334 audit(1776901710.525:183): prog-id=43 op=LOAD Apr 22 23:48:30.537788 kernel: audit: type=1334 audit(1776901710.525:184): prog-id=42 op=UNLOAD Apr 22 23:48:30.525000 audit: BPF prog-id=42 op=UNLOAD Apr 22 23:48:30.536000 audit: BPF prog-id=44 op=LOAD Apr 22 23:48:30.544538 kernel: audit: type=1334 audit(1776901710.536:185): prog-id=44 op=LOAD Apr 22 23:48:30.549677 kernel: audit: type=1334 audit(1776901710.536:186): prog-id=33 op=UNLOAD Apr 22 23:48:30.536000 audit: BPF prog-id=33 op=UNLOAD Apr 22 23:48:30.536000 audit: BPF prog-id=45 op=LOAD Apr 22 23:48:30.555387 kernel: audit: type=1334 audit(1776901710.536:187): prog-id=45 op=LOAD Apr 22 23:48:30.537000 audit: BPF prog-id=46 op=LOAD Apr 22 23:48:30.559369 kernel: audit: type=1334 audit(1776901710.537:188): prog-id=46 op=LOAD Apr 22 23:48:30.537000 audit: BPF prog-id=34 op=UNLOAD Apr 22 23:48:30.563328 kernel: audit: type=1334 audit(1776901710.537:189): prog-id=34 op=UNLOAD Apr 22 23:48:30.537000 audit: BPF prog-id=35 op=UNLOAD Apr 22 23:48:30.568691 kernel: audit: type=1334 audit(1776901710.537:190): prog-id=35 op=UNLOAD Apr 22 23:48:30.543000 audit: BPF prog-id=47 op=LOAD Apr 22 23:48:30.543000 audit: BPF prog-id=39 op=UNLOAD Apr 22 23:48:30.544000 audit: BPF prog-id=48 op=LOAD Apr 22 23:48:30.544000 audit: BPF prog-id=49 op=LOAD Apr 22 23:48:30.544000 audit: BPF prog-id=40 op=UNLOAD Apr 22 23:48:30.544000 audit: BPF prog-id=41 op=UNLOAD Apr 22 23:48:30.546000 audit: BPF prog-id=50 op=LOAD Apr 22 23:48:30.552000 audit: BPF prog-id=36 op=UNLOAD Apr 22 23:48:30.552000 audit: BPF prog-id=51 op=LOAD Apr 22 23:48:30.552000 audit: BPF 
prog-id=52 op=LOAD Apr 22 23:48:30.552000 audit: BPF prog-id=37 op=UNLOAD Apr 22 23:48:30.552000 audit: BPF prog-id=38 op=UNLOAD Apr 22 23:48:30.553000 audit: BPF prog-id=53 op=LOAD Apr 22 23:48:30.553000 audit: BPF prog-id=54 op=LOAD Apr 22 23:48:30.554000 audit: BPF prog-id=28 op=UNLOAD Apr 22 23:48:30.554000 audit: BPF prog-id=29 op=UNLOAD Apr 22 23:48:30.554000 audit: BPF prog-id=55 op=LOAD Apr 22 23:48:30.554000 audit: BPF prog-id=30 op=UNLOAD Apr 22 23:48:30.555000 audit: BPF prog-id=56 op=LOAD Apr 22 23:48:30.555000 audit: BPF prog-id=57 op=LOAD Apr 22 23:48:30.555000 audit: BPF prog-id=31 op=UNLOAD Apr 22 23:48:30.555000 audit: BPF prog-id=32 op=UNLOAD Apr 22 23:48:30.737189 systemd[1]: Finished ensure-sysext.service. Apr 22 23:48:30.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.751434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 22 23:48:30.753041 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 22 23:48:30.757630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 22 23:48:30.760930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 23:48:30.780081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 23:48:30.788500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 22 23:48:30.792649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 22 23:48:30.803896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 22 23:48:30.809653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 23:48:30.810771 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Apr 22 23:48:30.850009 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 22 23:48:30.857931 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 22 23:48:30.862378 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 23:48:30.870690 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 22 23:48:30.875000 audit: BPF prog-id=58 op=LOAD Apr 22 23:48:30.881213 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 22 23:48:30.892000 audit: BPF prog-id=59 op=LOAD Apr 22 23:48:30.895662 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 22 23:48:30.905151 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 22 23:48:30.914542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 23:48:30.920216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 22 23:48:30.922637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 23:48:30.923617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 22 23:48:30.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.931669 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 22 23:48:30.931936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 22 23:48:30.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.938313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 23:48:30.939047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 23:48:30.943000 audit[1537]: SYSTEM_BOOT pid=1537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 22 23:48:30.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.945645 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 23:48:30.946020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 22 23:48:30.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.952056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 22 23:48:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.956978 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 22 23:48:30.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 22 23:48:30.977534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 22 23:48:30.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 22 23:48:30.995000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 22 23:48:30.995000 audit[1550]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9c87e5d0 a2=420 a3=0 items=0 ppid=1509 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 22 23:48:30.995000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 22 23:48:30.997820 augenrules[1550]: No rules Apr 22 23:48:31.001247 systemd[1]: audit-rules.service: Deactivated successfully. Apr 22 23:48:31.009909 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 22 23:48:31.017129 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 22 23:48:31.018041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 22 23:48:31.018146 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 22 23:48:31.024142 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 22 23:48:31.065686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 23:48:31.162894 systemd-networkd[1531]: lo: Link UP Apr 22 23:48:31.162921 systemd-networkd[1531]: lo: Gained carrier Apr 22 23:48:31.168679 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 22 23:48:31.168825 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:48:31.168829 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 22 23:48:31.172211 systemd-networkd[1531]: eth0: Link UP Apr 22 23:48:31.173440 systemd-networkd[1531]: eth0: Gained carrier Apr 22 23:48:31.173483 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 22 23:48:31.174907 systemd[1]: Reached target network.target - Network. Apr 22 23:48:31.183205 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 22 23:48:31.193840 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 22 23:48:31.197992 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 22 23:48:31.198527 systemd-networkd[1531]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 23:48:31.199250 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. Apr 22 23:48:32.402112 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 22 23:48:32.402196 systemd-timesyncd[1533]: Initial clock synchronization to Wed 2026-04-22 23:48:32.401965 UTC. Apr 22 23:48:32.402573 systemd-resolved[1306]: Clock change detected. Flushing caches. Apr 22 23:48:32.403283 systemd[1]: Reached target time-set.target - System Time Set. Apr 22 23:48:32.446823 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 22 23:48:33.205804 ldconfig[1521]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Apr 22 23:48:33.263865 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 22 23:48:33.269155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 22 23:48:33.317769 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 22 23:48:33.322180 systemd[1]: Reached target sysinit.target - System Initialization. Apr 22 23:48:33.329384 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 22 23:48:33.343441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 22 23:48:33.349816 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 22 23:48:33.355011 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 22 23:48:33.359095 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 22 23:48:33.364243 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Apr 22 23:48:33.368409 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Apr 22 23:48:33.372609 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 22 23:48:33.377256 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 22 23:48:33.377350 systemd[1]: Reached target paths.target - Path Units. Apr 22 23:48:33.380106 systemd[1]: Reached target timers.target - Timer Units. Apr 22 23:48:33.389030 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 22 23:48:33.396911 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 22 23:48:33.407019 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Apr 22 23:48:33.411009 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 22 23:48:33.415986 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 22 23:48:33.439064 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 22 23:48:33.445347 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 22 23:48:33.451831 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 22 23:48:33.460242 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 23:48:33.465775 systemd[1]: Reached target basic.target - Basic System. Apr 22 23:48:33.471226 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 22 23:48:33.472313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 22 23:48:33.477072 systemd[1]: Starting containerd.service - containerd container runtime... Apr 22 23:48:33.486385 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 22 23:48:33.503702 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 22 23:48:33.510348 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 22 23:48:33.565915 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 22 23:48:33.571619 jq[1578]: false Apr 22 23:48:33.573132 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 22 23:48:33.575621 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 22 23:48:33.592068 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 22 23:48:33.600168 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 22 23:48:33.607941 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 22 23:48:33.608337 oslogin_cache_refresh[1580]: Refreshing passwd entry cache Apr 22 23:48:33.611495 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing passwd entry cache Apr 22 23:48:33.617486 extend-filesystems[1579]: Found /dev/vda6 Apr 22 23:48:33.620227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 22 23:48:33.623304 extend-filesystems[1579]: Found /dev/vda9 Apr 22 23:48:33.625319 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting users, quitting Apr 22 23:48:33.625319 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 22 23:48:33.625319 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing group entry cache Apr 22 23:48:33.623701 oslogin_cache_refresh[1580]: Failure getting users, quitting Apr 22 23:48:33.623721 oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 22 23:48:33.623775 oslogin_cache_refresh[1580]: Refreshing group entry cache Apr 22 23:48:33.629231 extend-filesystems[1579]: Checking size of /dev/vda9 Apr 22 23:48:33.634257 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting groups, quitting Apr 22 23:48:33.634257 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 22 23:48:33.634238 oslogin_cache_refresh[1580]: Failure getting groups, quitting Apr 22 23:48:33.634256 oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 22 23:48:33.644247 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 22 23:48:33.647949 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 22 23:48:33.648692 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 22 23:48:33.649472 systemd[1]: Starting update-engine.service - Update Engine... Apr 22 23:48:33.656113 extend-filesystems[1579]: Resized partition /dev/vda9 Apr 22 23:48:33.659837 extend-filesystems[1603]: resize2fs 1.47.3 (8-Jul-2025) Apr 22 23:48:33.667484 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 22 23:48:33.673254 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Apr 22 23:48:33.677769 systemd-networkd[1531]: eth0: Gained IPv6LL Apr 22 23:48:33.680884 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 22 23:48:33.689298 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 22 23:48:33.695319 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 22 23:48:33.696081 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 22 23:48:33.696381 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 22 23:48:33.697077 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 22 23:48:33.705041 systemd[1]: motdgen.service: Deactivated successfully. Apr 22 23:48:33.710467 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 22 23:48:33.717250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 22 23:48:33.719102 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 22 23:48:33.729991 jq[1602]: true Apr 22 23:48:33.741987 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Apr 22 23:48:33.762156 update_engine[1598]: I20260422 23:48:33.761961 1598 main.cc:92] Flatcar Update Engine starting Apr 22 23:48:33.763267 systemd[1]: Reached target network-online.target - Network is Online. Apr 22 23:48:33.771130 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 22 23:48:33.781969 jq[1619]: true Apr 22 23:48:33.778406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 23:48:33.789007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 22 23:48:33.800318 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 22 23:48:33.800318 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 22 23:48:33.800318 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Apr 22 23:48:33.856335 extend-filesystems[1579]: Resized filesystem in /dev/vda9 Apr 22 23:48:33.804478 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 22 23:48:33.806064 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 22 23:48:33.816791 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Power Button) Apr 22 23:48:33.816808 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 22 23:48:33.817700 systemd-logind[1597]: New seat seat0. Apr 22 23:48:33.833383 systemd[1]: Started systemd-logind.service - User Login Management. Apr 22 23:48:33.907812 tar[1612]: linux-amd64/LICENSE Apr 22 23:48:33.907812 tar[1612]: linux-amd64/helm Apr 22 23:48:33.948199 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 22 23:48:33.948636 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 22 23:48:33.955963 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 22 23:48:33.962378 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 22 23:48:33.974186 dbus-daemon[1576]: [system] SELinux support is enabled
Apr 22 23:48:33.976242 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 22 23:48:33.979790 update_engine[1598]: I20260422 23:48:33.979697  1598 update_check_scheduler.cc:74] Next update check in 4m12s
Apr 22 23:48:33.987995 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 22 23:48:33.988061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 22 23:48:33.994062 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 22 23:48:33.994116 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 22 23:48:33.998702 bash[1661]: Updated "/home/core/.ssh/authorized_keys"
Apr 22 23:48:34.000472 systemd[1]: Started update-engine.service - Update Engine.
Apr 22 23:48:34.001418 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 22 23:48:34.006880 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 22 23:48:34.020005 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 22 23:48:34.034404 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 22 23:48:34.161122 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 22 23:48:34.206441 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 22 23:48:34.260922 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 22 23:48:34.268276 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 22 23:48:34.279889 containerd[1625]: time="2026-04-22T23:48:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 22 23:48:34.283023 containerd[1625]: time="2026-04-22T23:48:34.282637167Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Apr 22 23:48:34.293307 containerd[1625]: time="2026-04-22T23:48:34.292961034Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.831µs"
Apr 22 23:48:34.293307 containerd[1625]: time="2026-04-22T23:48:34.293285824Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 22 23:48:34.294051 containerd[1625]: time="2026-04-22T23:48:34.293376297Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 22 23:48:34.294051 containerd[1625]: time="2026-04-22T23:48:34.293388282Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 22 23:48:34.296979 containerd[1625]: time="2026-04-22T23:48:34.296906654Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 22 23:48:34.296979 containerd[1625]: time="2026-04-22T23:48:34.296972609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297148 containerd[1625]: time="2026-04-22T23:48:34.297023698Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297148 containerd[1625]: time="2026-04-22T23:48:34.297031728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297205906Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297219546Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297227482Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297233214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297346101Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297356599Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297442 containerd[1625]: time="2026-04-22T23:48:34.297409738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297703 containerd[1625]: time="2026-04-22T23:48:34.297603228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297703 containerd[1625]: time="2026-04-22T23:48:34.297624079Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 22 23:48:34.297703 containerd[1625]: time="2026-04-22T23:48:34.297631358Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 22 23:48:34.297703 containerd[1625]: time="2026-04-22T23:48:34.297681776Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 22 23:48:34.297993 containerd[1625]: time="2026-04-22T23:48:34.297836184Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 22 23:48:34.297993 containerd[1625]: time="2026-04-22T23:48:34.297888075Z" level=info msg="metadata content store policy set" policy=shared
Apr 22 23:48:34.305486 systemd[1]: issuegen.service: Deactivated successfully.
Apr 22 23:48:34.306902 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 22 23:48:34.317826 containerd[1625]: time="2026-04-22T23:48:34.317781731Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 22 23:48:34.318023 containerd[1625]: time="2026-04-22T23:48:34.318009725Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 22 23:48:34.318152 containerd[1625]: time="2026-04-22T23:48:34.318138221Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 22 23:48:34.318218 containerd[1625]: time="2026-04-22T23:48:34.318207954Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 22 23:48:34.318278 containerd[1625]: time="2026-04-22T23:48:34.318266463Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 22 23:48:34.318326 containerd[1625]: time="2026-04-22T23:48:34.318317812Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 22 23:48:34.318385 containerd[1625]: time="2026-04-22T23:48:34.318371316Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 22 23:48:34.318431 containerd[1625]: time="2026-04-22T23:48:34.318420757Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 22 23:48:34.318477 containerd[1625]: time="2026-04-22T23:48:34.318467391Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 22 23:48:34.318861 containerd[1625]: time="2026-04-22T23:48:34.318839781Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 22 23:48:34.319036 containerd[1625]: time="2026-04-22T23:48:34.319025619Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 22 23:48:34.319085 containerd[1625]: time="2026-04-22T23:48:34.319076803Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 22 23:48:34.319134 containerd[1625]: time="2026-04-22T23:48:34.319124434Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 22 23:48:34.319182 containerd[1625]: time="2026-04-22T23:48:34.319172973Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 22 23:48:34.319354 containerd[1625]: time="2026-04-22T23:48:34.319340783Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 22 23:48:34.319411 containerd[1625]: time="2026-04-22T23:48:34.319401170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 22 23:48:34.319872 containerd[1625]: time="2026-04-22T23:48:34.319701479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 22 23:48:34.320076 containerd[1625]: time="2026-04-22T23:48:34.320064716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 22 23:48:34.320128 containerd[1625]: time="2026-04-22T23:48:34.320118847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 22 23:48:34.320177 containerd[1625]: time="2026-04-22T23:48:34.320168670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 22 23:48:34.320229 containerd[1625]: time="2026-04-22T23:48:34.320220424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 22 23:48:34.320504 containerd[1625]: time="2026-04-22T23:48:34.320334546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 22 23:48:34.321245 containerd[1625]: time="2026-04-22T23:48:34.321180328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 22 23:48:34.321311 containerd[1625]: time="2026-04-22T23:48:34.321299471Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 22 23:48:34.321370 containerd[1625]: time="2026-04-22T23:48:34.321357640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 22 23:48:34.322231 containerd[1625]: time="2026-04-22T23:48:34.321605397Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 22 23:48:34.323201 containerd[1625]: time="2026-04-22T23:48:34.322635819Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 22 23:48:34.323405 containerd[1625]: time="2026-04-22T23:48:34.323382849Z" level=info msg="Start snapshots syncer"
Apr 22 23:48:34.323493 containerd[1625]: time="2026-04-22T23:48:34.323478880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 22 23:48:34.324312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 22 23:48:34.329890 containerd[1625]: time="2026-04-22T23:48:34.329456784Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 22 23:48:34.330969 containerd[1625]: time="2026-04-22T23:48:34.330631229Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 22 23:48:34.331683 containerd[1625]: time="2026-04-22T23:48:34.331630652Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 22 23:48:34.331944 containerd[1625]: time="2026-04-22T23:48:34.331925451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 22 23:48:34.332015 containerd[1625]: time="2026-04-22T23:48:34.332002708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 22 23:48:34.332073 containerd[1625]: time="2026-04-22T23:48:34.332063552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 22 23:48:34.332119 containerd[1625]: time="2026-04-22T23:48:34.332110674Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 22 23:48:34.332408 containerd[1625]: time="2026-04-22T23:48:34.332199393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 22 23:48:34.332722 containerd[1625]: time="2026-04-22T23:48:34.332704766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 22 23:48:34.332792 containerd[1625]: time="2026-04-22T23:48:34.332780384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 22 23:48:34.332837 containerd[1625]: time="2026-04-22T23:48:34.332828927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 22 23:48:34.332884 containerd[1625]: time="2026-04-22T23:48:34.332874617Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 22 23:48:34.332956 containerd[1625]: time="2026-04-22T23:48:34.332946206Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 22 23:48:34.333410 containerd[1625]: time="2026-04-22T23:48:34.333331171Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 22 23:48:34.335020 containerd[1625]: time="2026-04-22T23:48:34.334896097Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 22 23:48:34.335628 containerd[1625]: time="2026-04-22T23:48:34.335322027Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335840464Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335872843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335886697Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335900816Z" level=info msg="runtime interface created"
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335905467Z" level=info msg="created NRI interface"
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335913018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335930725Z" level=info msg="Connect containerd service"
Apr 22 23:48:34.335977 containerd[1625]: time="2026-04-22T23:48:34.335956984Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 22 23:48:34.340255 containerd[1625]: time="2026-04-22T23:48:34.340196098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 22 23:48:34.368760 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 22 23:48:34.384266 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 22 23:48:34.398396 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 22 23:48:34.405769 systemd[1]: Reached target getty.target - Login Prompts.
Apr 22 23:48:34.594683 containerd[1625]: time="2026-04-22T23:48:34.594434559Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 22 23:48:34.595016 containerd[1625]: time="2026-04-22T23:48:34.594856041Z" level=info msg="Start subscribing containerd event"
Apr 22 23:48:34.595143 containerd[1625]: time="2026-04-22T23:48:34.595124917Z" level=info msg="Start recovering state"
Apr 22 23:48:34.595257 containerd[1625]: time="2026-04-22T23:48:34.595247827Z" level=info msg="Start event monitor"
Apr 22 23:48:34.595291 containerd[1625]: time="2026-04-22T23:48:34.595286269Z" level=info msg="Start cni network conf syncer for default"
Apr 22 23:48:34.595338 containerd[1625]: time="2026-04-22T23:48:34.595332783Z" level=info msg="Start streaming server"
Apr 22 23:48:34.595372 containerd[1625]: time="2026-04-22T23:48:34.595366965Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 22 23:48:34.595399 containerd[1625]: time="2026-04-22T23:48:34.595393365Z" level=info msg="runtime interface starting up..."
Apr 22 23:48:34.595422 containerd[1625]: time="2026-04-22T23:48:34.595417141Z" level=info msg="starting plugins..."
Apr 22 23:48:34.595461 containerd[1625]: time="2026-04-22T23:48:34.595455246Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 22 23:48:34.595818 containerd[1625]: time="2026-04-22T23:48:34.595802092Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 22 23:48:34.596880 containerd[1625]: time="2026-04-22T23:48:34.596823039Z" level=info msg="containerd successfully booted in 0.317484s"
Apr 22 23:48:34.597410 systemd[1]: Started containerd.service - containerd container runtime.
Apr 22 23:48:34.643964 tar[1612]: linux-amd64/README.md
Apr 22 23:48:34.693879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 22 23:48:35.913688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:48:35.918331 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 22 23:48:35.922024 systemd[1]: Startup finished in 10.079s (kernel) + 8.591s (initrd) + 8.240s (userspace) = 26.910s.
Apr 22 23:48:35.982793 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:48:37.302793 kubelet[1717]: E0422 23:48:37.302389    1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:48:37.380612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:48:37.380838 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:48:37.381225 systemd[1]: kubelet.service: Consumed 1.746s CPU time, 267.2M memory peak.
Apr 22 23:48:42.754778 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 22 23:48:42.759137 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:60470.service - OpenSSH per-connection server daemon (10.0.0.1:60470).
Apr 22 23:48:42.921369 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60470 ssh2: RSA SHA256:gmkHw14fVTAUcAmiJZ2tt7TEOMWFJnKw5wXUaWj9fHU
Apr 22 23:48:42.924810 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:42.977190 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 22 23:48:42.978371 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 22 23:48:42.997208 systemd-logind[1597]: New session 1 of user core.
Apr 22 23:48:43.057601 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 22 23:48:43.064747 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 22 23:48:43.110442 (systemd)[1736]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:43.117303 systemd-logind[1597]: New session 2 of user core.
Apr 22 23:48:43.434446 systemd[1736]: Queued start job for default target default.target.
Apr 22 23:48:43.448311 systemd[1736]: Created slice app.slice - User Application Slice.
Apr 22 23:48:43.448399 systemd[1736]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Apr 22 23:48:43.448411 systemd[1736]: Reached target paths.target - Paths.
Apr 22 23:48:43.448473 systemd[1736]: Reached target timers.target - Timers.
Apr 22 23:48:43.449727 systemd[1736]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 22 23:48:43.450600 systemd[1736]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Apr 22 23:48:43.466239 systemd[1736]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 22 23:48:43.466394 systemd[1736]: Reached target sockets.target - Sockets.
Apr 22 23:48:43.466928 systemd[1736]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Apr 22 23:48:43.467051 systemd[1736]: Reached target basic.target - Basic System.
Apr 22 23:48:43.467101 systemd[1736]: Reached target default.target - Main User Target.
Apr 22 23:48:43.467133 systemd[1736]: Startup finished in 331ms.
Apr 22 23:48:43.467568 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 22 23:48:43.476910 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 22 23:48:43.580219 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:60472.service - OpenSSH per-connection server daemon (10.0.0.1:60472).
Apr 22 23:48:43.696470 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 60472 ssh2: RSA SHA256:gmkHw14fVTAUcAmiJZ2tt7TEOMWFJnKw5wXUaWj9fHU
Apr 22 23:48:43.698486 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:43.710436 systemd-logind[1597]: New session 3 of user core.
Apr 22 23:48:43.726903 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 22 23:48:43.754000 sshd[1754]: Connection closed by 10.0.0.1 port 60472
Apr 22 23:48:43.754304 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Apr 22 23:48:43.779380 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:60472.service: Deactivated successfully.
Apr 22 23:48:43.786201 systemd[1]: session-3.scope: Deactivated successfully.
Apr 22 23:48:43.791138 systemd-logind[1597]: Session 3 logged out. Waiting for processes to exit.
Apr 22 23:48:43.799044 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:60474.service - OpenSSH per-connection server daemon (10.0.0.1:60474).
Apr 22 23:48:43.799785 systemd-logind[1597]: Removed session 3.
Apr 22 23:48:43.996386 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 60474 ssh2: RSA SHA256:gmkHw14fVTAUcAmiJZ2tt7TEOMWFJnKw5wXUaWj9fHU
Apr 22 23:48:43.998791 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:44.004047 systemd-logind[1597]: New session 4 of user core.
Apr 22 23:48:44.019861 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 22 23:48:44.032789 sshd[1764]: Connection closed by 10.0.0.1 port 60474
Apr 22 23:48:44.033168 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Apr 22 23:48:44.052309 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:60474.service: Deactivated successfully.
Apr 22 23:48:44.055134 systemd[1]: session-4.scope: Deactivated successfully.
Apr 22 23:48:44.056154 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit.
Apr 22 23:48:44.062163 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:60476.service - OpenSSH per-connection server daemon (10.0.0.1:60476).
Apr 22 23:48:44.062970 systemd-logind[1597]: Removed session 4.
Apr 22 23:48:44.158669 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 60476 ssh2: RSA SHA256:gmkHw14fVTAUcAmiJZ2tt7TEOMWFJnKw5wXUaWj9fHU
Apr 22 23:48:44.161169 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:44.169552 systemd-logind[1597]: New session 5 of user core.
Apr 22 23:48:44.189894 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 22 23:48:44.217459 sshd[1774]: Connection closed by 10.0.0.1 port 60476
Apr 22 23:48:44.218926 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Apr 22 23:48:44.235222 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:60476.service: Deactivated successfully.
Apr 22 23:48:44.237120 systemd[1]: session-5.scope: Deactivated successfully.
Apr 22 23:48:44.239595 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit.
Apr 22 23:48:44.242828 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:60488.service - OpenSSH per-connection server daemon (10.0.0.1:60488).
Apr 22 23:48:44.243444 systemd-logind[1597]: Removed session 5.
Apr 22 23:48:44.344001 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 60488 ssh2: RSA SHA256:gmkHw14fVTAUcAmiJZ2tt7TEOMWFJnKw5wXUaWj9fHU
Apr 22 23:48:44.350205 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 22 23:48:44.359934 systemd-logind[1597]: New session 6 of user core.
Apr 22 23:48:44.376257 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 22 23:48:44.486157 sudo[1785]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 22 23:48:44.488251 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 22 23:48:45.640185 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 22 23:48:45.667355 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 22 23:48:46.493743 dockerd[1806]: time="2026-04-22T23:48:46.493202153Z" level=info msg="Starting up"
Apr 22 23:48:46.499894 dockerd[1806]: time="2026-04-22T23:48:46.499246531Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 22 23:48:46.582577 dockerd[1806]: time="2026-04-22T23:48:46.582180669Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 22 23:48:46.843837 dockerd[1806]: time="2026-04-22T23:48:46.843038807Z" level=info msg="Loading containers: start."
Apr 22 23:48:46.880915 kernel: Initializing XFRM netlink socket
Apr 22 23:48:47.618046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 22 23:48:47.627817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:48:48.322149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:48:48.343979 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:48:48.505195 kubelet[1931]: E0422 23:48:48.504457    1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:48:48.516145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:48:48.517640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:48:48.518877 systemd[1]: kubelet.service: Consumed 558ms CPU time, 110.2M memory peak.
Apr 22 23:48:49.216469 systemd-networkd[1531]: docker0: Link UP
Apr 22 23:48:49.245026 dockerd[1806]: time="2026-04-22T23:48:49.242993316Z" level=info msg="Loading containers: done."
Apr 22 23:48:49.394673 dockerd[1806]: time="2026-04-22T23:48:49.394455687Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 22 23:48:49.394673 dockerd[1806]: time="2026-04-22T23:48:49.394656189Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 22 23:48:49.398344 dockerd[1806]: time="2026-04-22T23:48:49.396450150Z" level=info msg="Initializing buildkit"
Apr 22 23:48:49.650385 dockerd[1806]: time="2026-04-22T23:48:49.649842062Z" level=info msg="Completed buildkit initialization"
Apr 22 23:48:49.669146 dockerd[1806]: time="2026-04-22T23:48:49.668869763Z" level=info msg="Daemon has completed initialization"
Apr 22 23:48:49.670693 dockerd[1806]: time="2026-04-22T23:48:49.669407134Z" level=info msg="API listen on /run/docker.sock"
Apr 22 23:48:49.677274 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 22 23:48:51.952366 containerd[1625]: time="2026-04-22T23:48:51.950981320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 22 23:48:53.719698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986774233.mount: Deactivated successfully.
Apr 22 23:48:57.089090 containerd[1625]: time="2026-04-22T23:48:57.088666622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:48:57.089798 containerd[1625]: time="2026-04-22T23:48:57.089656920Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=29201400"
Apr 22 23:48:57.092198 containerd[1625]: time="2026-04-22T23:48:57.092010416Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:48:57.101063 containerd[1625]: time="2026-04-22T23:48:57.100430468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:48:57.103229 containerd[1625]: time="2026-04-22T23:48:57.103061176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 5.150888855s"
Apr 22 23:48:57.103229 containerd[1625]: time="2026-04-22T23:48:57.103206163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 22 23:48:57.104774 containerd[1625]: time="2026-04-22T23:48:57.104668283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 22 23:48:58.614026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 22 23:48:58.618342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:48:59.141398 containerd[1625]: time="2026-04-22T23:48:59.140645502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:48:59.144190 containerd[1625]: time="2026-04-22T23:48:59.143827897Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=0"
Apr 22 23:48:59.150229 containerd[1625]: time="2026-04-22T23:48:59.150106695Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:48:59.155998 containerd[1625]: time="2026-04-22T23:48:59.155881995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:48:59.157315 containerd[1625]: time="2026-04-22T23:48:59.157047948Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 2.052320037s"
Apr 22 23:48:59.157315 containerd[1625]: time="2026-04-22T23:48:59.157335117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 22 23:48:59.158749 containerd[1625]: time="2026-04-22T23:48:59.158719687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 22 23:48:59.213804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:48:59.233415 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:48:59.515050 kubelet[2114]: E0422 23:48:59.513225 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:48:59.520450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:48:59.521236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:48:59.524078 systemd[1]: kubelet.service: Consumed 604ms CPU time, 110.5M memory peak.
Apr 22 23:49:03.428033 containerd[1625]: time="2026-04-22T23:49:03.426904646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:03.433943 containerd[1625]: time="2026-04-22T23:49:03.431969083Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20284372"
Apr 22 23:49:03.437223 containerd[1625]: time="2026-04-22T23:49:03.436405312Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:03.444211 containerd[1625]: time="2026-04-22T23:49:03.443286143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:03.448382 containerd[1625]: time="2026-04-22T23:49:03.448258559Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 4.289505948s"
Apr 22 23:49:03.448382 containerd[1625]: time="2026-04-22T23:49:03.448363232Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 22 23:49:03.450160 containerd[1625]: time="2026-04-22T23:49:03.450020720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 22 23:49:06.998115 systemd-resolved[1306]: Clock change detected. Flushing caches.
Apr 22 23:49:10.428990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 22 23:49:10.436890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:11.329985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:11.358077 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:12.067321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224659511.mount: Deactivated successfully.
Apr 22 23:49:12.518210 kubelet[2138]: E0422 23:49:12.517341 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:12.526163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:12.526341 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:12.533219 systemd[1]: kubelet.service: Consumed 2.127s CPU time, 111.1M memory peak.
Apr 22 23:49:15.561718 containerd[1625]: time="2026-04-22T23:49:15.560293281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:15.567285 containerd[1625]: time="2026-04-22T23:49:15.567041443Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32006989"
Apr 22 23:49:15.572679 containerd[1625]: time="2026-04-22T23:49:15.572243124Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:15.585637 containerd[1625]: time="2026-04-22T23:49:15.585003741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:15.587342 containerd[1625]: time="2026-04-22T23:49:15.586877939Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 11.36817733s"
Apr 22 23:49:15.587342 containerd[1625]: time="2026-04-22T23:49:15.586919136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 22 23:49:15.588586 containerd[1625]: time="2026-04-22T23:49:15.588508284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 22 23:49:17.418897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924650773.mount: Deactivated successfully.
Apr 22 23:49:19.814218 update_engine[1598]: I20260422 23:49:19.813035 1598 update_attempter.cc:509] Updating boot flags...
Apr 22 23:49:22.741751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 22 23:49:22.760309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:23.960243 containerd[1625]: time="2026-04-22T23:49:23.960011349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:23.966459 containerd[1625]: time="2026-04-22T23:49:23.964864531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20258627"
Apr 22 23:49:23.978814 containerd[1625]: time="2026-04-22T23:49:23.978729053Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:24.031770 containerd[1625]: time="2026-04-22T23:49:24.031395332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:24.035063 containerd[1625]: time="2026-04-22T23:49:24.034989235Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 8.446363226s"
Apr 22 23:49:24.036531 containerd[1625]: time="2026-04-22T23:49:24.036015411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 22 23:49:24.037100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:24.042944 containerd[1625]: time="2026-04-22T23:49:24.042839274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 22 23:49:24.166109 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:24.529640 kubelet[2227]: E0422 23:49:24.529528 2227 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:24.541856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:24.542093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:24.542851 systemd[1]: kubelet.service: Consumed 1.249s CPU time, 110.5M memory peak.
Apr 22 23:49:24.748590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1549831686.mount: Deactivated successfully.
Apr 22 23:49:24.763756 containerd[1625]: time="2026-04-22T23:49:24.762900362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 22 23:49:24.768728 containerd[1625]: time="2026-04-22T23:49:24.768586830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 22 23:49:24.777456 containerd[1625]: time="2026-04-22T23:49:24.776835594Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 22 23:49:24.846846 containerd[1625]: time="2026-04-22T23:49:24.846518831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 22 23:49:24.847744 containerd[1625]: time="2026-04-22T23:49:24.847335756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 804.426169ms"
Apr 22 23:49:24.847990 containerd[1625]: time="2026-04-22T23:49:24.847859464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 22 23:49:24.849162 containerd[1625]: time="2026-04-22T23:49:24.849086305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 22 23:49:26.921344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033570941.mount: Deactivated successfully.
Apr 22 23:49:33.781649 containerd[1625]: time="2026-04-22T23:49:33.779908192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:33.784476 containerd[1625]: time="2026-04-22T23:49:33.783403193Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=16420986"
Apr 22 23:49:33.790029 containerd[1625]: time="2026-04-22T23:49:33.789916903Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:33.828770 containerd[1625]: time="2026-04-22T23:49:33.828327458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:49:33.832191 containerd[1625]: time="2026-04-22T23:49:33.831939850Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 8.982717644s"
Apr 22 23:49:33.832795 containerd[1625]: time="2026-04-22T23:49:33.832214519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 22 23:49:34.650476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 22 23:49:34.674058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:35.930358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:35.969031 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 22 23:49:36.310844 kubelet[2335]: E0422 23:49:36.310227 2335 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 22 23:49:36.324963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 22 23:49:36.325462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 22 23:49:36.326583 systemd[1]: kubelet.service: Consumed 1.067s CPU time, 109.9M memory peak.
Apr 22 23:49:38.361732 systemd-resolved[1306]: Clock change detected. Flushing caches.
Apr 22 23:49:45.644864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 22 23:49:45.683412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:45.717376 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 22 23:49:45.717511 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 22 23:49:45.718276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:46.280144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:46.578454 systemd[1]: Reload requested from client PID 2351 ('systemctl') (unit session-6.scope)...
Apr 22 23:49:46.579493 systemd[1]: Reloading...
Apr 22 23:49:47.707791 zram_generator::config[2400]: No configuration found.
Apr 22 23:49:50.797466 systemd[1]: Reloading finished in 4211 ms.
Apr 22 23:49:51.310241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:51.351510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:51.358486 systemd[1]: kubelet.service: Deactivated successfully.
Apr 22 23:49:51.360463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:51.361459 systemd[1]: kubelet.service: Consumed 678ms CPU time, 98.4M memory peak.
Apr 22 23:49:51.376373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:49:53.157007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:49:53.186185 (kubelet)[2447]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 22 23:49:53.419966 kubelet[2447]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 23:49:53.419966 kubelet[2447]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 22 23:49:53.419966 kubelet[2447]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 23:49:53.419966 kubelet[2447]: I0422 23:49:53.419765 2447 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 22 23:49:55.524034 kubelet[2447]: I0422 23:49:55.523591 2447 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 22 23:49:55.524034 kubelet[2447]: I0422 23:49:55.523883 2447 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 22 23:49:55.532271 kubelet[2447]: I0422 23:49:55.524252 2447 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 22 23:49:55.778704 kubelet[2447]: E0422 23:49:55.776187 2447 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:49:55.803778 kubelet[2447]: I0422 23:49:55.801782 2447 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 22 23:49:55.954355 kubelet[2447]: I0422 23:49:55.953887 2447 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 22 23:49:56.238376 kubelet[2447]: I0422 23:49:56.238247 2447 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 22 23:49:56.240048 kubelet[2447]: I0422 23:49:56.239149 2447 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 22 23:49:56.240048 kubelet[2447]: I0422 23:49:56.239213 2447 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 22 23:49:56.240048 kubelet[2447]: I0422 23:49:56.239479 2447 topology_manager.go:138] "Creating topology manager with none policy"
Apr 22 23:49:56.240048 kubelet[2447]: I0422 23:49:56.239489 2447 container_manager_linux.go:303] "Creating device plugin manager"
Apr 22 23:49:56.245858 kubelet[2447]: I0422 23:49:56.243490 2447 state_mem.go:36] "Initialized new in-memory state store"
Apr 22 23:49:56.288548 kubelet[2447]: I0422 23:49:56.288247 2447 kubelet.go:480] "Attempting to sync node with API server"
Apr 22 23:49:56.288548 kubelet[2447]: I0422 23:49:56.288460 2447 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 22 23:49:56.290680 kubelet[2447]: I0422 23:49:56.288620 2447 kubelet.go:386] "Adding apiserver pod source"
Apr 22 23:49:56.290680 kubelet[2447]: I0422 23:49:56.290640 2447 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 22 23:49:56.295824 kubelet[2447]: E0422 23:49:56.295707 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 22 23:49:56.296197 kubelet[2447]: E0422 23:49:56.296173 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 22 23:49:56.301780 kubelet[2447]: I0422 23:49:56.301569 2447 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Apr 22 23:49:56.306139 kubelet[2447]: I0422 23:49:56.304635 2447 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 22 23:49:56.324455 kubelet[2447]: W0422 23:49:56.310678 2447 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 22 23:49:56.383415 kubelet[2447]: I0422 23:49:56.382066 2447 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 22 23:49:56.384629 kubelet[2447]: I0422 23:49:56.383510 2447 server.go:1289] "Started kubelet"
Apr 22 23:49:56.384740 kubelet[2447]: I0422 23:49:56.384655 2447 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 22 23:49:56.387852 kubelet[2447]: I0422 23:49:56.387625 2447 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 22 23:49:56.389080 kubelet[2447]: I0422 23:49:56.388466 2447 server.go:317] "Adding debug handlers to kubelet server"
Apr 22 23:49:56.398921 kubelet[2447]: I0422 23:49:56.398831 2447 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 22 23:49:56.468170 kubelet[2447]: E0422 23:49:56.401328 2447 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8d2c4f6214efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:49:56.38246169 +0000 UTC m=+3.123099745,LastTimestamp:2026-04-22 23:49:56.38246169 +0000 UTC m=+3.123099745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:49:56.478424 kubelet[2447]: E0422 23:49:56.478287 2447 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 22 23:49:56.481318 kubelet[2447]: I0422 23:49:56.480317 2447 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 22 23:49:56.483913 kubelet[2447]: I0422 23:49:56.483480 2447 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 22 23:49:56.484977 kubelet[2447]: I0422 23:49:56.484889 2447 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 22 23:49:56.485508 kubelet[2447]: E0422 23:49:56.485465 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:56.489220 kubelet[2447]: I0422 23:49:56.488389 2447 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 22 23:49:56.502822 kubelet[2447]: I0422 23:49:56.499506 2447 reconciler.go:26] "Reconciler: start to sync state"
Apr 22 23:49:56.507423 kubelet[2447]: E0422 23:49:56.504077 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 22 23:49:56.519057 kubelet[2447]: I0422 23:49:56.505637 2447 factory.go:223] Registration of the systemd container factory successfully
Apr 22 23:49:56.521640 kubelet[2447]: E0422 23:49:56.510313 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms"
Apr 22 23:49:56.522763 kubelet[2447]: I0422 23:49:56.522496 2447 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 22 23:49:56.535665 kubelet[2447]: I0422 23:49:56.535208 2447 factory.go:223] Registration of the containerd container factory successfully
Apr 22 23:49:56.585858 kubelet[2447]: E0422 23:49:56.585683 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:56.686135 kubelet[2447]: E0422 23:49:56.686078 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:56.713241 kubelet[2447]: I0422 23:49:56.713173 2447 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 22 23:49:56.713241 kubelet[2447]: I0422 23:49:56.713215 2447 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 22 23:49:56.713472 kubelet[2447]: I0422 23:49:56.713285 2447 state_mem.go:36] "Initialized new in-memory state store"
Apr 22 23:49:56.722695 kubelet[2447]: I0422 23:49:56.722613 2447 policy_none.go:49] "None policy: Start"
Apr 22 23:49:56.722695 kubelet[2447]: I0422 23:49:56.722692 2447 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 22 23:49:56.722695 kubelet[2447]: I0422 23:49:56.722708 2447 state_mem.go:35] "Initializing new in-memory state store"
Apr 22 23:49:56.723052 kubelet[2447]: E0422 23:49:56.722795 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms"
Apr 22 23:49:56.738758 kubelet[2447]: I0422 23:49:56.738144 2447 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 22 23:49:56.758201 kubelet[2447]: I0422 23:49:56.757976 2447 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 22 23:49:56.758201 kubelet[2447]: I0422 23:49:56.758043 2447 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 22 23:49:56.758201 kubelet[2447]: I0422 23:49:56.758067 2447 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 22 23:49:56.758201 kubelet[2447]: I0422 23:49:56.758073 2447 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 22 23:49:56.758201 kubelet[2447]: E0422 23:49:56.758148 2447 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 22 23:49:56.759484 kubelet[2447]: E0422 23:49:56.759425 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 22 23:49:56.769241 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 22 23:49:56.787854 kubelet[2447]: E0422 23:49:56.787698 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:56.861841 kubelet[2447]: E0422 23:49:56.860417 2447 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:49:56.889399 kubelet[2447]: E0422 23:49:56.889186 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:56.953160 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 22 23:49:56.990894 kubelet[2447]: E0422 23:49:56.989458 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:57.063805 kubelet[2447]: E0422 23:49:57.062025 2447 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:49:57.094755 kubelet[2447]: E0422 23:49:57.093449 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:49:57.101073 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 22 23:49:57.148599 kubelet[2447]: E0422 23:49:57.147710 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms"
Apr 22 23:49:57.157087 kubelet[2447]: E0422 23:49:57.156979 2447 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 22 23:49:57.160353 kubelet[2447]: I0422 23:49:57.159865 2447 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 22 23:49:57.161190 kubelet[2447]: I0422 23:49:57.160730 2447 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 22 23:49:57.162989 kubelet[2447]: I0422 23:49:57.162841 2447 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 22 23:49:57.205398 kubelet[2447]: E0422 23:49:57.204879 2447 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 22 23:49:57.234861 kubelet[2447]: E0422 23:49:57.206958 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:49:57.275155 kubelet[2447]: I0422 23:49:57.275096 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:49:57.275830 kubelet[2447]: E0422 23:49:57.275657 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost"
Apr 22 23:49:57.464599 kubelet[2447]: E0422 23:49:57.464262 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 22 23:49:57.499137 kubelet[2447]: I0422 23:49:57.499034 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:49:57.499923 kubelet[2447]: E0422 23:49:57.499873 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost"
Apr 22 23:49:57.539349 systemd[1]: Created slice kubepods-burstable-pod47e50b39a5c3f70a3898f39186f718d0.slice - libcontainer container kubepods-burstable-pod47e50b39a5c3f70a3898f39186f718d0.slice.
Apr 22 23:49:57.545221 kubelet[2447]: I0422 23:49:57.540830 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost"
Apr 22 23:49:57.545221 kubelet[2447]: I0422 23:49:57.540902 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost"
Apr 22 23:49:57.545221 kubelet[2447]: I0422 23:49:57.540929 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:49:57.550853 kubelet[2447]: I0422 23:49:57.541949 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:49:57.550853 kubelet[2447]: I0422 23:49:57.548435 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:49:57.550853 kubelet[2447]: I0422 23:49:57.548582 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:49:57.550853 kubelet[2447]: I0422 23:49:57.548598 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 22 23:49:57.550853 kubelet[2447]: I0422 23:49:57.548627 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost"
Apr 22 23:49:57.655605 kubelet[2447]: I0422 23:49:57.654294 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 22 23:49:57.662836 kubelet[2447]: E0422 23:49:57.662735 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:49:57.666848 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice.
Apr 22 23:49:57.667099 kubelet[2447]: E0422 23:49:57.666860 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:49:57.676583 containerd[1625]: time="2026-04-22T23:49:57.673821722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47e50b39a5c3f70a3898f39186f718d0,Namespace:kube-system,Attempt:0,}"
Apr 22 23:49:57.693354 kubelet[2447]: E0422 23:49:57.693282 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:49:57.696587 kubelet[2447]: E0422 23:49:57.696412 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:49:57.697314 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice.
Apr 22 23:49:57.739707 containerd[1625]: time="2026-04-22T23:49:57.732112666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}"
Apr 22 23:49:57.743219 kubelet[2447]: E0422 23:49:57.737582 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 22 23:49:57.774616 kubelet[2447]: E0422 23:49:57.774446 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:49:57.779871 kubelet[2447]: E0422 23:49:57.779080 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:49:57.783694 containerd[1625]: time="2026-04-22T23:49:57.783563068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}"
Apr 22 23:49:57.851941 kubelet[2447]: E0422 23:49:57.850350 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 22 23:49:57.854935 kubelet[2447]: E0422 23:49:57.854317 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 22 23:49:57.948682 kubelet[2447]: E0422 23:49:57.948238 2447 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:49:57.952505 kubelet[2447]: E0422 23:49:57.950585 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s"
Apr 22 23:49:57.952505 kubelet[2447]: I0422 23:49:57.951906 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:49:57.959771 kubelet[2447]: E0422 23:49:57.958969 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost"
Apr 22 23:49:57.966824 containerd[1625]: time="2026-04-22T23:49:57.964658255Z" level=info msg="connecting to shim c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e" address="unix:///run/containerd/s/31c2cd80a55b2067bec08ad7b24137d27869090c80d14b966d23c65f4ed7acfb" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:49:58.066115 containerd[1625]: time="2026-04-22T23:49:58.064766524Z" level=info msg="connecting to shim e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:49:58.162339 containerd[1625]: time="2026-04-22T23:49:58.161770849Z" level=info msg="connecting to shim 2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83" address="unix:///run/containerd/s/008a852ad47db2e030aefc8056a2e849ba474c4802ea5eebff2d501bd41a664c" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:49:58.665473 systemd[1]: Started cri-containerd-c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e.scope - libcontainer container c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e.
Apr 22 23:49:58.888116 systemd[1]: Started cri-containerd-2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83.scope - libcontainer container 2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83.
Apr 22 23:49:58.894856 kubelet[2447]: I0422 23:49:58.894773 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:49:58.902476 kubelet[2447]: E0422 23:49:58.896316 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost"
Apr 22 23:49:58.946299 systemd[1]: Started cri-containerd-e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f.scope - libcontainer container e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f.
Apr 22 23:49:59.454414 containerd[1625]: time="2026-04-22T23:49:59.453625148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47e50b39a5c3f70a3898f39186f718d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e\""
Apr 22 23:49:59.460872 kubelet[2447]: E0422 23:49:59.460587 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:49:59.501892 containerd[1625]: time="2026-04-22T23:49:59.501778401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\""
Apr 22 23:49:59.558599 kubelet[2447]: E0422 23:49:59.555736 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:49:59.562841 kubelet[2447]: E0422 23:49:59.562325 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="3.2s"
Apr 22 23:49:59.576026 containerd[1625]: time="2026-04-22T23:49:59.575904921Z" level=info msg="CreateContainer within sandbox \"c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 22 23:49:59.589650 containerd[1625]: time="2026-04-22T23:49:59.589596564Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 22 23:49:59.596221 containerd[1625]: time="2026-04-22T23:49:59.595979417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\""
Apr 22 23:49:59.655728 kubelet[2447]: E0422 23:49:59.655065 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:49:59.678436 containerd[1625]: time="2026-04-22T23:49:59.677991348Z" level=info msg="Container 74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:49:59.686778 containerd[1625]: time="2026-04-22T23:49:59.686735266Z" level=info msg="Container 984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:49:59.697635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922837593.mount: Deactivated successfully.
Apr 22 23:49:59.703163 containerd[1625]: time="2026-04-22T23:49:59.702629559Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 22 23:49:59.711068 containerd[1625]: time="2026-04-22T23:49:59.708723158Z" level=info msg="CreateContainer within sandbox \"c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673\""
Apr 22 23:49:59.730386 containerd[1625]: time="2026-04-22T23:49:59.723844933Z" level=info msg="StartContainer for \"74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673\""
Apr 22 23:49:59.737723 containerd[1625]: time="2026-04-22T23:49:59.735713095Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\""
Apr 22 23:49:59.789138 containerd[1625]: time="2026-04-22T23:49:59.786475891Z" level=info msg="StartContainer for \"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\""
Apr 22 23:49:59.798197 containerd[1625]: time="2026-04-22T23:49:59.797501227Z" level=info msg="connecting to shim 74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673" address="unix:///run/containerd/s/31c2cd80a55b2067bec08ad7b24137d27869090c80d14b966d23c65f4ed7acfb" protocol=ttrpc version=3
Apr 22 23:49:59.806473 containerd[1625]: time="2026-04-22T23:49:59.804888395Z" level=info msg="connecting to shim 984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6" address="unix:///run/containerd/s/008a852ad47db2e030aefc8056a2e849ba474c4802ea5eebff2d501bd41a664c" protocol=ttrpc version=3
Apr 22 23:49:59.865579 containerd[1625]: time="2026-04-22T23:49:59.861715990Z" level=info msg="Container 24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:49:59.883121 kubelet[2447]: E0422 23:49:59.881499 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 22 23:49:59.955870 containerd[1625]: time="2026-04-22T23:49:59.952864259Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\""
Apr 22 23:49:59.977975 containerd[1625]: time="2026-04-22T23:49:59.975737790Z" level=info msg="StartContainer for \"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\""
Apr 22 23:49:59.991313 kubelet[2447]: E0422 23:49:59.990891 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 22 23:50:00.009925 containerd[1625]: time="2026-04-22T23:50:00.008657270Z" level=info msg="connecting to shim 24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" protocol=ttrpc version=3
Apr 22 23:50:00.047151 systemd[1]: Started cri-containerd-74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673.scope - libcontainer container 74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673.
Apr 22 23:50:00.048264 kubelet[2447]: E0422 23:50:00.048110 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 22 23:50:00.100247 systemd[1]: Started cri-containerd-984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6.scope - libcontainer container 984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6.
Apr 22 23:50:00.595994 kubelet[2447]: I0422 23:50:00.595882 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:50:00.599298 kubelet[2447]: E0422 23:50:00.599219 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost"
Apr 22 23:50:00.688038 kubelet[2447]: E0422 23:50:00.684455 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 22 23:50:00.774157 systemd[1]: Started cri-containerd-24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b.scope - libcontainer container 24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b.
Apr 22 23:50:01.134833 containerd[1625]: time="2026-04-22T23:50:01.134381900Z" level=info msg="StartContainer for \"74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673\" returns successfully"
Apr 22 23:50:01.286212 containerd[1625]: time="2026-04-22T23:50:01.285478118Z" level=info msg="StartContainer for \"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\" returns successfully"
Apr 22 23:50:01.517458 containerd[1625]: time="2026-04-22T23:50:01.517243386Z" level=info msg="StartContainer for \"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\" returns successfully"
Apr 22 23:50:01.756846 kubelet[2447]: E0422 23:50:01.755832 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:01.763160 kubelet[2447]: E0422 23:50:01.759317 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:01.808641 kubelet[2447]: E0422 23:50:01.808231 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:01.893957 kubelet[2447]: E0422 23:50:01.887188 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:01.949472 kubelet[2447]: E0422 23:50:01.949363 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:01.951693 kubelet[2447]: E0422 23:50:01.949911 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:02.993755 kubelet[2447]: E0422 23:50:02.979077 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:03.052760 kubelet[2447]: E0422 23:50:03.051387 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:03.082952 kubelet[2447]: E0422 23:50:03.079982 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:03.085350 kubelet[2447]: E0422 23:50:03.082203 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:03.099295 kubelet[2447]: E0422 23:50:03.099099 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:03.152963 kubelet[2447]: E0422 23:50:03.141231 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:03.830361 kubelet[2447]: I0422 23:50:03.830283 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:50:04.027925 kubelet[2447]: E0422 23:50:04.025727 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:04.037382 kubelet[2447]: E0422 23:50:04.037358 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:04.039379 kubelet[2447]: E0422 23:50:04.037487 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:04.053011 kubelet[2447]: E0422 23:50:04.050841 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:04.059583 kubelet[2447]: E0422 23:50:04.059200 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:04.075888 kubelet[2447]: E0422 23:50:04.074403 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:07.209481 kubelet[2447]: E0422 23:50:07.208473 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:50:09.855621 kubelet[2447]: E0422 23:50:09.854323 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:09.857459 kubelet[2447]: E0422 23:50:09.856650 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:12.192409 kubelet[2447]: E0422 23:50:12.191301 2447 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:50:12.659316 kubelet[2447]: E0422 23:50:12.658328 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:12.663556 kubelet[2447]: E0422 23:50:12.663413 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:12.777874 kubelet[2447]: E0422 23:50:12.775963 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Apr 22 23:50:13.493273 kubelet[2447]: E0422 23:50:13.492418 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 22 23:50:13.505567 kubelet[2447]: E0422 23:50:13.502425 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 22 23:50:14.485153 kubelet[2447]: E0422 23:50:14.485044 2447 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2c4f6214efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:49:56.38246169 +0000 UTC m=+3.123099745,LastTimestamp:2026-04-22 23:49:56.38246169 +0000 UTC m=+3.123099745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:50:14.491233 kubelet[2447]: E0422 23:50:14.489887 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:50:14.584049 kubelet[2447]: E0422 23:50:14.583264 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 22 23:50:16.181736 kubelet[2447]: E0422 23:50:16.181419 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 22 23:50:17.278247 kubelet[2447]: E0422 23:50:17.278176 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:50:20.920146 kubelet[2447]: I0422 23:50:20.920082 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:50:25.178919 kubelet[2447]: E0422 23:50:25.178081 2447 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 22 23:50:25.180174 kubelet[2447]: E0422 23:50:25.179787 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:50:27.282425 kubelet[2447]: E0422 23:50:27.281834 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:50:29.208993 kubelet[2447]: E0422 23:50:29.208613 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 22 23:50:30.490687 kubelet[2447]: E0422 23:50:30.489373 2447 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 22 23:50:30.965786 kubelet[2447]: E0422 23:50:30.963321 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 22 23:50:31.597597 kubelet[2447]: E0422 23:50:31.597221 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 22 23:50:32.529400 kubelet[2447]: E0422 23:50:32.528376 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 22 23:50:34.391144 kubelet[2447]: E0422 23:50:34.390093 2447 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 22 23:50:34.574662 kubelet[2447]: E0422 23:50:34.569965 2447 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8d2c4f6214efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-22 23:49:56.38246169 +0000 UTC m=+3.123099745,LastTimestamp:2026-04-22 23:49:56.38246169 +0000 UTC m=+3.123099745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 22 23:50:37.293646 kubelet[2447]: E0422 23:50:37.291889 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 22 23:50:38.018054 kubelet[2447]: I0422 23:50:38.017356 2447 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 22 23:50:38.197473 kubelet[2447]: E0422 23:50:38.192498 2447 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 22 23:50:38.548362 kubelet[2447]: I0422 23:50:38.542296 2447 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 22
23:50:38.575942 kubelet[2447]: E0422 23:50:38.549452 2447 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 22 23:50:39.092607 kubelet[2447]: E0422 23:50:39.092023 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.216567 kubelet[2447]: E0422 23:50:39.208902 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.353278 kubelet[2447]: E0422 23:50:39.344474 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.455286 kubelet[2447]: E0422 23:50:39.452832 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.566039 kubelet[2447]: E0422 23:50:39.565139 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.667466 kubelet[2447]: E0422 23:50:39.667310 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.779889 kubelet[2447]: E0422 23:50:39.778255 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.883988 kubelet[2447]: E0422 23:50:39.883053 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:39.998420 kubelet[2447]: E0422 23:50:39.992870 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.181276 kubelet[2447]: E0422 23:50:40.174247 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.277498 kubelet[2447]: E0422 23:50:40.275573 2447 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.387899 kubelet[2447]: E0422 23:50:40.386293 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.488632 kubelet[2447]: E0422 23:50:40.487872 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.610763 kubelet[2447]: E0422 23:50:40.591397 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.705202 kubelet[2447]: E0422 23:50:40.703299 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.807682 kubelet[2447]: E0422 23:50:40.806899 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:40.911215 kubelet[2447]: E0422 23:50:40.908837 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.051175 kubelet[2447]: E0422 23:50:41.051060 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.190723 kubelet[2447]: E0422 23:50:41.186731 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.294716 kubelet[2447]: E0422 23:50:41.293114 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.399359 kubelet[2447]: E0422 23:50:41.396376 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.504333 kubelet[2447]: E0422 23:50:41.501659 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 22 23:50:41.629674 kubelet[2447]: E0422 23:50:41.629437 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.731705 kubelet[2447]: E0422 23:50:41.731540 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.848882 kubelet[2447]: E0422 23:50:41.846356 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:41.955354 kubelet[2447]: E0422 23:50:41.954126 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.058460 kubelet[2447]: E0422 23:50:42.057323 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.164296 kubelet[2447]: E0422 23:50:42.161487 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.264759 kubelet[2447]: E0422 23:50:42.264263 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.367139 kubelet[2447]: E0422 23:50:42.366797 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.474306 kubelet[2447]: E0422 23:50:42.467834 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.573284 kubelet[2447]: E0422 23:50:42.572452 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.676373 kubelet[2447]: E0422 23:50:42.675331 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.784241 kubelet[2447]: E0422 23:50:42.780374 2447 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.884787 kubelet[2447]: E0422 23:50:42.883388 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:42.987402 kubelet[2447]: E0422 23:50:42.986963 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.101124 kubelet[2447]: E0422 23:50:43.091791 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.204208 kubelet[2447]: E0422 23:50:43.203221 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.308762 kubelet[2447]: E0422 23:50:43.307337 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.410799 kubelet[2447]: E0422 23:50:43.410306 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.539146 kubelet[2447]: E0422 23:50:43.537285 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.642443 kubelet[2447]: E0422 23:50:43.641623 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.748560 kubelet[2447]: E0422 23:50:43.745240 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.877796 kubelet[2447]: E0422 23:50:43.877060 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:43.987033 kubelet[2447]: E0422 23:50:43.986336 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 22 23:50:44.096392 kubelet[2447]: E0422 23:50:44.095046 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.208837 kubelet[2447]: E0422 23:50:44.207671 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.310884 kubelet[2447]: E0422 23:50:44.310114 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.438351 kubelet[2447]: E0422 23:50:44.438268 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.541589 kubelet[2447]: E0422 23:50:44.540426 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.656775 kubelet[2447]: E0422 23:50:44.656149 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.761360 kubelet[2447]: E0422 23:50:44.758236 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.866591 kubelet[2447]: E0422 23:50:44.860342 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:44.966190 kubelet[2447]: E0422 23:50:44.965949 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.079852 kubelet[2447]: E0422 23:50:45.071262 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.182729 kubelet[2447]: E0422 23:50:45.181097 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.287320 kubelet[2447]: E0422 23:50:45.287246 2447 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.389734 kubelet[2447]: E0422 23:50:45.389099 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.498558 kubelet[2447]: E0422 23:50:45.496064 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.607959 kubelet[2447]: E0422 23:50:45.607234 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.755396 kubelet[2447]: E0422 23:50:45.752769 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.860597 kubelet[2447]: E0422 23:50:45.856296 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:45.963871 kubelet[2447]: E0422 23:50:45.963154 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.067397 kubelet[2447]: E0422 23:50:46.065423 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.176456 kubelet[2447]: E0422 23:50:46.175405 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.278732 kubelet[2447]: E0422 23:50:46.278612 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.381727 kubelet[2447]: E0422 23:50:46.380480 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.492555 kubelet[2447]: E0422 23:50:46.492006 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 22 23:50:46.596991 kubelet[2447]: E0422 23:50:46.596639 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.698002 kubelet[2447]: E0422 23:50:46.697349 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.802476 kubelet[2447]: E0422 23:50:46.801288 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:46.906202 kubelet[2447]: E0422 23:50:46.905304 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:47.055366 kubelet[2447]: E0422 23:50:47.049218 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:47.153340 kubelet[2447]: E0422 23:50:47.151651 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:47.263107 kubelet[2447]: E0422 23:50:47.258117 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:47.328018 kubelet[2447]: E0422 23:50:47.326241 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 23:50:47.377919 kubelet[2447]: E0422 23:50:47.377490 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 23:50:47.413835 kubelet[2447]: I0422 23:50:47.413734 2447 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 22 23:50:47.760438 kubelet[2447]: I0422 23:50:47.760009 2447 apiserver.go:52] "Watching apiserver" Apr 22 23:50:47.930890 kubelet[2447]: I0422 23:50:47.930207 2447 desired_state_of_world_populator.go:158] "Finished populating initial 
desired state of world" Apr 22 23:50:48.037783 kubelet[2447]: E0422 23:50:48.035378 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:50:48.051568 kubelet[2447]: I0422 23:50:48.045390 2447 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 22 23:50:48.239771 kubelet[2447]: I0422 23:50:48.239292 2447 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 22 23:50:48.261237 kubelet[2447]: E0422 23:50:48.240325 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:50:48.485014 kubelet[2447]: E0422 23:50:48.484875 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:50:57.991695 kubelet[2447]: I0422 23:50:57.990739 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=9.990626527 podStartE2EDuration="9.990626527s" podCreationTimestamp="2026-04-22 23:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:50:57.983466698 +0000 UTC m=+64.724104751" watchObservedRunningTime="2026-04-22 23:50:57.990626527 +0000 UTC m=+64.731264578" Apr 22 23:50:58.006382 kubelet[2447]: I0422 23:50:57.995812 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.995753274 podStartE2EDuration="9.995753274s" podCreationTimestamp="2026-04-22 23:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:50:57.464468021 +0000 UTC m=+64.205106082" watchObservedRunningTime="2026-04-22 23:50:57.995753274 +0000 UTC m=+64.736391328" Apr 22 23:51:07.009662 systemd[1]: cri-containerd-24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b.scope: Deactivated successfully. Apr 22 23:51:07.062826 systemd[1]: cri-containerd-24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b.scope: Consumed 3.934s CPU time, 18.7M memory peak. Apr 22 23:51:07.070679 containerd[1625]: time="2026-04-22T23:51:07.068787656Z" level=info msg="received container exit event container_id:\"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\" id:\"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\" pid:2677 exit_status:1 exited_at:{seconds:1776901867 nanos:65909009}" Apr 22 23:51:07.959964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b-rootfs.mount: Deactivated successfully. 
Apr 22 23:51:08.944655 kubelet[2447]: I0422 23:51:08.942296 2447 scope.go:117] "RemoveContainer" containerID="24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b"
Apr 22 23:51:08.944655 kubelet[2447]: E0422 23:51:08.943398 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:09.128859 containerd[1625]: time="2026-04-22T23:51:09.126893606Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 22 23:51:09.384205 containerd[1625]: time="2026-04-22T23:51:09.384077235Z" level=info msg="Container 53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:51:09.542156 containerd[1625]: time="2026-04-22T23:51:09.541265252Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\""
Apr 22 23:51:09.558201 containerd[1625]: time="2026-04-22T23:51:09.556426626Z" level=info msg="StartContainer for \"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\""
Apr 22 23:51:09.622115 containerd[1625]: time="2026-04-22T23:51:09.620299904Z" level=info msg="connecting to shim 53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" protocol=ttrpc version=3
Apr 22 23:51:10.398216 systemd[1]: Started cri-containerd-53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4.scope - libcontainer container 53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4.
Apr 22 23:51:11.460984 kubelet[2447]: I0422 23:51:11.460767 2447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=24.460673697 podStartE2EDuration="24.460673697s" podCreationTimestamp="2026-04-22 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:50:58.192669499 +0000 UTC m=+64.933307553" watchObservedRunningTime="2026-04-22 23:51:11.460673697 +0000 UTC m=+78.201311748"
Apr 22 23:51:12.027121 containerd[1625]: time="2026-04-22T23:51:12.026879746Z" level=info msg="StartContainer for \"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\" returns successfully"
Apr 22 23:51:12.574698 kubelet[2447]: E0422 23:51:12.574487 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:13.484805 kubelet[2447]: E0422 23:51:13.484214 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:13.907418 kubelet[2447]: E0422 23:51:13.906493 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:19.200135 kubelet[2447]: E0422 23:51:19.198237 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:20.932241 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-6.scope)...
Apr 22 23:51:20.935074 systemd[1]: Reloading...
Apr 22 23:51:22.582935 kubelet[2447]: E0422 23:51:22.580377 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:23.191859 kubelet[2447]: E0422 23:51:23.189443 2447 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:51:23.241966 zram_generator::config[2830]: No configuration found.
Apr 22 23:51:30.691366 systemd[1]: Reloading finished in 9751 ms.
Apr 22 23:51:31.408234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:51:31.598952 systemd[1]: kubelet.service: Deactivated successfully.
Apr 22 23:51:31.640984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:31.645479 systemd[1]: kubelet.service: Consumed 38.869s CPU time, 142M memory peak.
Apr 22 23:51:31.732012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 22 23:51:34.668246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 22 23:51:34.795894 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 22 23:51:36.203048 kubelet[2876]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 23:51:36.278410 kubelet[2876]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 22 23:51:36.278410 kubelet[2876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 22 23:51:36.278410 kubelet[2876]: I0422 23:51:36.249441 2876 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 22 23:51:36.570160 kubelet[2876]: I0422 23:51:36.565393 2876 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 22 23:51:36.570160 kubelet[2876]: I0422 23:51:36.569300 2876 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 22 23:51:36.587580 kubelet[2876]: I0422 23:51:36.586869 2876 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 22 23:51:36.645374 kubelet[2876]: I0422 23:51:36.642493 2876 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 22 23:51:36.980368 kubelet[2876]: I0422 23:51:36.970820 2876 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 22 23:51:37.271666 kubelet[2876]: I0422 23:51:37.264347 2876 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 22 23:51:37.756388 kubelet[2876]: I0422 23:51:37.754799 2876 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 22 23:51:37.761836 kubelet[2876]: I0422 23:51:37.759918 2876 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 22 23:51:37.764764 kubelet[2876]: I0422 23:51:37.761195 2876 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 22 23:51:37.767458 kubelet[2876]: I0422 23:51:37.767224 2876 topology_manager.go:138] "Creating topology manager with none policy"
Apr 22 23:51:37.767458 kubelet[2876]: I0422 23:51:37.767449 2876 container_manager_linux.go:303] "Creating device plugin manager"
Apr 22 23:51:37.768943 kubelet[2876]: I0422 23:51:37.768370 2876 state_mem.go:36] "Initialized new in-memory state store"
Apr 22 23:51:37.780767 kubelet[2876]: I0422 23:51:37.780678 2876 kubelet.go:480] "Attempting to sync node with API server"
Apr 22 23:51:37.780767 kubelet[2876]: I0422 23:51:37.780723 2876 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 22 23:51:37.780767 kubelet[2876]: I0422 23:51:37.780777 2876 kubelet.go:386] "Adding apiserver pod source"
Apr 22 23:51:37.780767 kubelet[2876]: I0422 23:51:37.780801 2876 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 22 23:51:37.885371 kubelet[2876]: I0422 23:51:37.883963 2876 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Apr 22 23:51:37.908194 kubelet[2876]: I0422 23:51:37.907397 2876 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 22 23:51:38.387853 kubelet[2876]: I0422 23:51:38.386985 2876 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 22 23:51:38.398933 kubelet[2876]: I0422 23:51:38.398295 2876 server.go:1289] "Started kubelet"
Apr 22 23:51:38.410991 kubelet[2876]: I0422 23:51:38.406928 2876 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 22 23:51:38.436336 kubelet[2876]: I0422 23:51:38.421001 2876 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 22 23:51:38.443308 kubelet[2876]: I0422 23:51:38.441765 2876 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 22 23:51:38.637809 kubelet[2876]: I0422 23:51:38.637635 2876 server.go:317] "Adding debug handlers to kubelet server"
Apr 22 23:51:38.641374 kubelet[2876]: E0422 23:51:38.638201 2876 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 22 23:51:38.661129 kubelet[2876]: I0422 23:51:38.660832 2876 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 22 23:51:38.661446 kubelet[2876]: I0422 23:51:38.661205 2876 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 22 23:51:38.665225 kubelet[2876]: I0422 23:51:38.664750 2876 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 22 23:51:38.687283 kubelet[2876]: E0422 23:51:38.685929 2876 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 22 23:51:38.707464 kubelet[2876]: I0422 23:51:38.707044 2876 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 22 23:51:38.707806 kubelet[2876]: I0422 23:51:38.707695 2876 reconciler.go:26] "Reconciler: start to sync state"
Apr 22 23:51:38.803727 kubelet[2876]: I0422 23:51:38.802284 2876 factory.go:223] Registration of the systemd container factory successfully
Apr 22 23:51:38.862067 kubelet[2876]: I0422 23:51:38.860900 2876 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 22 23:51:38.874856 kubelet[2876]: I0422 23:51:38.874329 2876 apiserver.go:52] "Watching apiserver"
Apr 22 23:51:38.916991 kubelet[2876]: I0422 23:51:38.915777 2876 factory.go:223] Registration of the containerd container factory successfully
Apr 22 23:51:39.793842 kubelet[2876]: I0422 23:51:39.792607 2876 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 22 23:51:39.887917 kubelet[2876]: I0422 23:51:39.886788 2876 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 22 23:51:39.902421 kubelet[2876]: I0422 23:51:39.888461 2876 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 22 23:51:39.938483 kubelet[2876]: I0422 23:51:39.938168 2876 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 22 23:51:39.938483 kubelet[2876]: I0422 23:51:39.938431 2876 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 22 23:51:39.939272 kubelet[2876]: E0422 23:51:39.938971 2876 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 22 23:51:40.081477 kubelet[2876]: E0422 23:51:40.079394 2876 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 22 23:51:40.418625 kubelet[2876]: E0422 23:51:40.418398 2876 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:51:40.871937 kubelet[2876]: E0422 23:51:40.869923 2876 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 22 23:51:41.652344 kubelet[2876]: I0422 23:51:41.651208 2876 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 22 23:51:41.652344 kubelet[2876]: I0422 23:51:41.651314 2876 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 22 23:51:41.652344 kubelet[2876]: I0422 23:51:41.651396 2876 state_mem.go:36] "Initialized new in-memory state store"
Apr 22 23:51:41.658563 kubelet[2876]: I0422 23:51:41.655998 2876 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 22 23:51:41.658563 kubelet[2876]: I0422 23:51:41.656647 2876 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 22 23:51:41.658563 kubelet[2876]: I0422 23:51:41.656860 2876 policy_none.go:49] "None policy: Start"
Apr 22
23:51:41.658563 kubelet[2876]: I0422 23:51:41.656893 2876 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 22 23:51:41.658563 kubelet[2876]: I0422 23:51:41.656946 2876 state_mem.go:35] "Initializing new in-memory state store" Apr 22 23:51:41.658563 kubelet[2876]: I0422 23:51:41.658622 2876 state_mem.go:75] "Updated machine memory state" Apr 22 23:51:41.691701 kubelet[2876]: E0422 23:51:41.690387 2876 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 22 23:51:41.956316 kubelet[2876]: E0422 23:51:41.954207 2876 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 22 23:51:41.992131 kubelet[2876]: I0422 23:51:41.991132 2876 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 22 23:51:42.050407 kubelet[2876]: I0422 23:51:42.048397 2876 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 22 23:51:42.056488 kubelet[2876]: I0422 23:51:42.056456 2876 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 22 23:51:42.377882 kubelet[2876]: E0422 23:51:42.293966 2876 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 22 23:51:42.868102 kubelet[2876]: I0422 23:51:42.858253 2876 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 22 23:51:43.452396 kubelet[2876]: I0422 23:51:43.452072 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 22 23:51:43.452396 kubelet[2876]: I0422 23:51:43.452198 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:43.518982 kubelet[2876]: I0422 23:51:43.512670 2876 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 22 23:51:43.518982 kubelet[2876]: I0422 23:51:43.513379 2876 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 22 23:51:43.547072 kubelet[2876]: I0422 23:51:43.546816 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:51:43.547072 kubelet[2876]: I0422 23:51:43.546968 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:43.547072 kubelet[2876]: I0422 23:51:43.547013 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:43.547072 kubelet[2876]: I0422 23:51:43.547059 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:43.547072 kubelet[2876]: I0422 23:51:43.547090 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:51:43.586099 kubelet[2876]: I0422 23:51:43.547102 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e50b39a5c3f70a3898f39186f718d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47e50b39a5c3f70a3898f39186f718d0\") " pod="kube-system/kube-apiserver-localhost" Apr 22 23:51:43.586099 kubelet[2876]: I0422 23:51:43.547124 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:43.586099 kubelet[2876]: I0422 23:51:43.547136 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:43.586099 kubelet[2876]: I0422 23:51:43.547147 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 22 23:51:44.083655 kubelet[2876]: I0422 23:51:44.081834 2876 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 22 23:51:44.089458 kubelet[2876]: I0422 23:51:44.088910 2876 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 22 23:51:45.055685 kubelet[2876]: E0422 23:51:45.053429 2876 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 22 23:51:45.149468 kubelet[2876]: E0422 23:51:45.148813 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:45.162690 kubelet[2876]: E0422 23:51:45.161168 2876 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 22 23:51:45.166215 kubelet[2876]: E0422 23:51:45.166110 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:45.195186 kubelet[2876]: E0422 23:51:45.194974 2876 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 22 23:51:45.288930 kubelet[2876]: E0422 23:51:45.286266 2876 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:46.053025 kubelet[2876]: E0422 23:51:46.051444 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:46.070885 kubelet[2876]: E0422 23:51:46.062226 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:46.154945 kubelet[2876]: E0422 23:51:46.142892 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:46.980161 kubelet[2876]: E0422 23:51:46.978840 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:47.359912 kubelet[2876]: E0422 23:51:47.359853 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:47.444880 kubelet[2876]: E0422 23:51:47.439580 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:48.030379 kubelet[2876]: E0422 23:51:48.030040 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:49.068711 kubelet[2876]: E0422 23:51:49.068656 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:49.818680 kubelet[2876]: E0422 23:51:49.817860 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:51:50.135312 kubelet[2876]: E0422 23:51:50.135219 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:03.901724 sudo[1785]: pam_unix(sudo:session): session closed for user root Apr 22 23:52:03.910973 sshd[1784]: Connection closed by 10.0.0.1 port 60488 Apr 22 23:52:03.963861 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Apr 22 23:52:04.134645 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. Apr 22 23:52:04.171507 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:60488.service: Deactivated successfully. Apr 22 23:52:04.333452 systemd[1]: session-6.scope: Deactivated successfully. Apr 22 23:52:04.342320 systemd[1]: session-6.scope: Consumed 27.340s CPU time, 233.5M memory peak. Apr 22 23:52:04.523165 systemd-logind[1597]: Removed session 6. Apr 22 23:52:05.113897 kubelet[2876]: E0422 23:52:05.111496 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.143s" Apr 22 23:52:09.478787 kubelet[2876]: I0422 23:52:09.464378 2876 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 22 23:52:09.487155 containerd[1625]: time="2026-04-22T23:52:09.482643718Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 22 23:52:09.497970 kubelet[2876]: I0422 23:52:09.497661 2876 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 22 23:52:13.951706 kubelet[2876]: I0422 23:52:13.951328 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0700a492-ddda-45d4-9e86-dd14b1c79d8a-xtables-lock\") pod \"kube-proxy-76lh5\" (UID: \"0700a492-ddda-45d4-9e86-dd14b1c79d8a\") " pod="kube-system/kube-proxy-76lh5" Apr 22 23:52:13.974433 kubelet[2876]: I0422 23:52:13.969036 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0700a492-ddda-45d4-9e86-dd14b1c79d8a-kube-proxy\") pod \"kube-proxy-76lh5\" (UID: \"0700a492-ddda-45d4-9e86-dd14b1c79d8a\") " pod="kube-system/kube-proxy-76lh5" Apr 22 23:52:13.992698 kubelet[2876]: I0422 23:52:13.984042 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0700a492-ddda-45d4-9e86-dd14b1c79d8a-lib-modules\") pod \"kube-proxy-76lh5\" (UID: \"0700a492-ddda-45d4-9e86-dd14b1c79d8a\") " pod="kube-system/kube-proxy-76lh5" Apr 22 23:52:14.044551 kubelet[2876]: I0422 23:52:14.043896 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srkg\" (UniqueName: \"kubernetes.io/projected/0700a492-ddda-45d4-9e86-dd14b1c79d8a-kube-api-access-2srkg\") pod \"kube-proxy-76lh5\" (UID: \"0700a492-ddda-45d4-9e86-dd14b1c79d8a\") " pod="kube-system/kube-proxy-76lh5" Apr 22 23:52:14.332000 systemd[1]: Created slice kubepods-besteffort-pod0700a492_ddda_45d4_9e86_dd14b1c79d8a.slice - libcontainer container kubepods-besteffort-pod0700a492_ddda_45d4_9e86_dd14b1c79d8a.slice. 
Apr 22 23:52:14.889437 kubelet[2876]: I0422 23:52:14.889287 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/7c082508-f936-4280-9aeb-df1a43992b68-cni-plugin\") pod \"kube-flannel-ds-9h9jm\" (UID: \"7c082508-f936-4280-9aeb-df1a43992b68\") " pod="kube-flannel/kube-flannel-ds-9h9jm" Apr 22 23:52:14.944818 kubelet[2876]: I0422 23:52:14.943950 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/7c082508-f936-4280-9aeb-df1a43992b68-cni\") pod \"kube-flannel-ds-9h9jm\" (UID: \"7c082508-f936-4280-9aeb-df1a43992b68\") " pod="kube-flannel/kube-flannel-ds-9h9jm" Apr 22 23:52:14.946871 systemd[1]: Created slice kubepods-burstable-pod7c082508_f936_4280_9aeb_df1a43992b68.slice - libcontainer container kubepods-burstable-pod7c082508_f936_4280_9aeb_df1a43992b68.slice. Apr 22 23:52:14.952074 kubelet[2876]: I0422 23:52:14.946074 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmp8m\" (UniqueName: \"kubernetes.io/projected/7c082508-f936-4280-9aeb-df1a43992b68-kube-api-access-vmp8m\") pod \"kube-flannel-ds-9h9jm\" (UID: \"7c082508-f936-4280-9aeb-df1a43992b68\") " pod="kube-flannel/kube-flannel-ds-9h9jm" Apr 22 23:52:14.952074 kubelet[2876]: I0422 23:52:14.949488 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/7c082508-f936-4280-9aeb-df1a43992b68-flannel-cfg\") pod \"kube-flannel-ds-9h9jm\" (UID: \"7c082508-f936-4280-9aeb-df1a43992b68\") " pod="kube-flannel/kube-flannel-ds-9h9jm" Apr 22 23:52:14.974197 kubelet[2876]: I0422 23:52:14.952101 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/7c082508-f936-4280-9aeb-df1a43992b68-run\") pod \"kube-flannel-ds-9h9jm\" (UID: \"7c082508-f936-4280-9aeb-df1a43992b68\") " pod="kube-flannel/kube-flannel-ds-9h9jm" Apr 22 23:52:14.974197 kubelet[2876]: I0422 23:52:14.953970 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c082508-f936-4280-9aeb-df1a43992b68-xtables-lock\") pod \"kube-flannel-ds-9h9jm\" (UID: \"7c082508-f936-4280-9aeb-df1a43992b68\") " pod="kube-flannel/kube-flannel-ds-9h9jm" Apr 22 23:52:16.498374 kubelet[2876]: E0422 23:52:16.492481 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:16.679352 containerd[1625]: time="2026-04-22T23:52:16.675188869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76lh5,Uid:0700a492-ddda-45d4-9e86-dd14b1c79d8a,Namespace:kube-system,Attempt:0,}" Apr 22 23:52:17.168991 containerd[1625]: time="2026-04-22T23:52:17.167283862Z" level=info msg="connecting to shim ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb" address="unix:///run/containerd/s/c0c89149d00c5052743c76a339d873d5d1c4ab7067ff62763004bee26697f28e" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:52:18.183084 kubelet[2876]: E0422 23:52:18.182096 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:18.421602 containerd[1625]: time="2026-04-22T23:52:18.421304965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9h9jm,Uid:7c082508-f936-4280-9aeb-df1a43992b68,Namespace:kube-flannel,Attempt:0,}" Apr 22 23:52:18.823487 systemd[1]: Started cri-containerd-ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb.scope - libcontainer container 
ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb. Apr 22 23:52:19.406867 containerd[1625]: time="2026-04-22T23:52:19.406376077Z" level=info msg="connecting to shim bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564" address="unix:///run/containerd/s/e4efc50ae82cc158f75577ae2a6bdff013f9d1607ddbd44c5ba2461b63d261c3" namespace=k8s.io protocol=ttrpc version=3 Apr 22 23:52:20.579002 systemd[1]: Started cri-containerd-bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564.scope - libcontainer container bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564. Apr 22 23:52:21.603608 containerd[1625]: time="2026-04-22T23:52:21.602533453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76lh5,Uid:0700a492-ddda-45d4-9e86-dd14b1c79d8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb\"" Apr 22 23:52:21.718211 kubelet[2876]: E0422 23:52:21.718164 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:22.236755 containerd[1625]: time="2026-04-22T23:52:22.231060228Z" level=info msg="CreateContainer within sandbox \"ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 22 23:52:22.466952 containerd[1625]: time="2026-04-22T23:52:22.465576942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9h9jm,Uid:7c082508-f936-4280-9aeb-df1a43992b68,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\"" Apr 22 23:52:22.688582 containerd[1625]: time="2026-04-22T23:52:22.687103803Z" level=info msg="Container f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:52:22.708563 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3059808060.mount: Deactivated successfully. Apr 22 23:52:22.710725 kubelet[2876]: E0422 23:52:22.709565 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:22.753959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35320646.mount: Deactivated successfully. Apr 22 23:52:22.988849 containerd[1625]: time="2026-04-22T23:52:22.978958687Z" level=info msg="CreateContainer within sandbox \"ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab\"" Apr 22 23:52:23.032737 containerd[1625]: time="2026-04-22T23:52:23.032103748Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 22 23:52:23.155467 containerd[1625]: time="2026-04-22T23:52:23.153430390Z" level=info msg="StartContainer for \"f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab\"" Apr 22 23:52:23.293857 containerd[1625]: time="2026-04-22T23:52:23.290115291Z" level=info msg="connecting to shim f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab" address="unix:///run/containerd/s/c0c89149d00c5052743c76a339d873d5d1c4ab7067ff62763004bee26697f28e" protocol=ttrpc version=3 Apr 22 23:52:23.817282 systemd[1]: Started cri-containerd-f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab.scope - libcontainer container f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab. 
Apr 22 23:52:25.203318 containerd[1625]: time="2026-04-22T23:52:25.203268794Z" level=info msg="StartContainer for \"f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab\" returns successfully" Apr 22 23:52:26.578790 kubelet[2876]: E0422 23:52:26.577585 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:27.534034 kubelet[2876]: E0422 23:52:27.533198 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:28.304637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139649078.mount: Deactivated successfully. Apr 22 23:52:28.805892 containerd[1625]: time="2026-04-22T23:52:28.805693496Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:52:28.849044 containerd[1625]: time="2026-04-22T23:52:28.811295401Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=1214555" Apr 22 23:52:28.862249 containerd[1625]: time="2026-04-22T23:52:28.861947443Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:52:29.016144 containerd[1625]: time="2026-04-22T23:52:29.016010484Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 23:52:29.025058 containerd[1625]: time="2026-04-22T23:52:29.024933036Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id 
\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 5.992698013s" Apr 22 23:52:29.025058 containerd[1625]: time="2026-04-22T23:52:29.024982418Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 22 23:52:29.209754 containerd[1625]: time="2026-04-22T23:52:29.207126942Z" level=info msg="CreateContainer within sandbox \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 22 23:52:29.560821 containerd[1625]: time="2026-04-22T23:52:29.553096112Z" level=info msg="Container ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:52:29.782135 containerd[1625]: time="2026-04-22T23:52:29.776403753Z" level=info msg="CreateContainer within sandbox \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7\"" Apr 22 23:52:29.823187 containerd[1625]: time="2026-04-22T23:52:29.816906794Z" level=info msg="StartContainer for \"ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7\"" Apr 22 23:52:29.874636 containerd[1625]: time="2026-04-22T23:52:29.873413046Z" level=info msg="connecting to shim ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7" address="unix:///run/containerd/s/e4efc50ae82cc158f75577ae2a6bdff013f9d1607ddbd44c5ba2461b63d261c3" protocol=ttrpc version=3 Apr 22 23:52:30.543695 systemd[1]: Started 
cri-containerd-ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7.scope - libcontainer container ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7. Apr 22 23:52:31.910654 systemd[1]: cri-containerd-ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7.scope: Deactivated successfully. Apr 22 23:52:32.001246 containerd[1625]: time="2026-04-22T23:52:32.001061936Z" level=info msg="StartContainer for \"ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7\" returns successfully" Apr 22 23:52:32.059925 containerd[1625]: time="2026-04-22T23:52:32.056235082Z" level=info msg="received container exit event container_id:\"ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7\" id:\"ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7\" pid:3107 exited_at:{seconds:1776901952 nanos:49918194}" Apr 22 23:52:33.357670 kubelet[2876]: E0422 23:52:33.342220 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.384s" Apr 22 23:52:33.478649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7-rootfs.mount: Deactivated successfully. 
Apr 22 23:52:33.953900 containerd[1625]: time="2026-04-22T23:52:33.953676110Z" level=error msg="collecting metrics for ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7" error="ttrpc: closed" Apr 22 23:52:34.028796 kubelet[2876]: E0422 23:52:34.025891 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:35.186013 kubelet[2876]: E0422 23:52:35.183879 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:52:35.342624 containerd[1625]: time="2026-04-22T23:52:35.341986287Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 22 23:52:43.132178 kubelet[2876]: E0422 23:52:43.129060 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.186s" Apr 22 23:52:44.809604 kubelet[2876]: E0422 23:52:44.808709 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.676s" Apr 22 23:52:46.023577 update_engine[1598]: I20260422 23:52:46.023128 1598 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 22 23:52:46.023577 update_engine[1598]: I20260422 23:52:46.023299 1598 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 22 23:52:46.055690 update_engine[1598]: I20260422 23:52:46.047278 1598 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 22 23:52:46.142796 update_engine[1598]: I20260422 23:52:46.142124 1598 omaha_request_params.cc:62] Current group set to beta Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.143479 1598 update_attempter.cc:499] Already updated boot flags. Skipping. 
Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.143554 1598 update_attempter.cc:643] Scheduling an action processor start. Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.143580 1598 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.143757 1598 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.143944 1598 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.144694 1598 omaha_request_action.cc:272] Request: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: Apr 22 23:52:46.145864 update_engine[1598]: I20260422 23:52:46.144860 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 22 23:52:46.164090 update_engine[1598]: I20260422 23:52:46.161150 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 22 23:52:46.169882 update_engine[1598]: I20260422 23:52:46.169703 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 22 23:52:46.182776 update_engine[1598]: E20260422 23:52:46.181476 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:52:46.182776 update_engine[1598]: I20260422 23:52:46.181856 1598 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 22 23:52:46.407625 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 22 23:52:46.457909 kubelet[2876]: E0422 23:52:46.456474 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.591s"
Apr 22 23:52:47.943800 kubelet[2876]: E0422 23:52:47.929163 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.279s"
Apr 22 23:52:48.002030 kubelet[2876]: I0422 23:52:47.987418 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-76lh5" podStartSLOduration=35.987341694 podStartE2EDuration="35.987341694s" podCreationTimestamp="2026-04-22 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:52:27.488114965 +0000 UTC m=+52.615040865" watchObservedRunningTime="2026-04-22 23:52:47.987341694 +0000 UTC m=+73.114267586"
Apr 22 23:52:50.978867 kubelet[2876]: E0422 23:52:50.976099 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:52:56.062773 update_engine[1598]: I20260422 23:52:56.060372 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 22 23:52:56.069442 update_engine[1598]: I20260422 23:52:56.065174 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 22 23:52:56.078564 update_engine[1598]: I20260422 23:52:56.077451 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 22 23:52:56.083893 update_engine[1598]: E20260422 23:52:56.082975 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:52:56.091883 update_engine[1598]: I20260422 23:52:56.089138 1598 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 22 23:53:01.335252 kubelet[2876]: E0422 23:53:01.332484 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.323s"
Apr 22 23:53:03.894239 kubelet[2876]: E0422 23:53:03.893329 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.95s"
Apr 22 23:53:05.199940 kubelet[2876]: E0422 23:53:05.190431 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s"
Apr 22 23:53:06.043025 update_engine[1598]: I20260422 23:53:06.035491 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 22 23:53:06.043025 update_engine[1598]: I20260422 23:53:06.037467 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 22 23:53:06.073972 update_engine[1598]: I20260422 23:53:06.067451 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 22 23:53:06.087839 update_engine[1598]: E20260422 23:53:06.085016 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:53:06.094990 update_engine[1598]: I20260422 23:53:06.091270 1598 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 22 23:53:07.052024 kubelet[2876]: E0422 23:53:07.051471 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.086s"
Apr 22 23:53:10.338828 kubelet[2876]: E0422 23:53:10.336900 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:15.870914 kubelet[2876]: E0422 23:53:15.859242 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:16.022082 update_engine[1598]: I20260422 23:53:16.021077 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.025841 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.039814 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 22 23:53:16.091352 update_engine[1598]: E20260422 23:53:16.045164 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.049273 1598 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.049993 1598 omaha_request_action.cc:617] Omaha request response:
Apr 22 23:53:16.091352 update_engine[1598]: E20260422 23:53:16.062100 1598 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.084476 1598 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085063 1598 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085076 1598 update_attempter.cc:306] Processing Done.
Apr 22 23:53:16.091352 update_engine[1598]: E20260422 23:53:16.085160 1598 update_attempter.cc:619] Update failed.
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085173 1598 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085179 1598 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085186 1598 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085376 1598 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085415 1598 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 22 23:53:16.091352 update_engine[1598]: I20260422 23:53:16.085420 1598 omaha_request_action.cc:272] Request:
Apr 22 23:53:16.091352 update_engine[1598]:
Apr 22 23:53:16.091352 update_engine[1598]:
Apr 22 23:53:16.169969 update_engine[1598]:
Apr 22 23:53:16.169969 update_engine[1598]:
Apr 22 23:53:16.169969 update_engine[1598]:
Apr 22 23:53:16.169969 update_engine[1598]:
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.085476 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.086550 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.096446 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 22 23:53:16.169969 update_engine[1598]: E20260422 23:53:16.149768 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.151186 1598 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.151889 1598 omaha_request_action.cc:617] Omaha request response:
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.152034 1598 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.152041 1598 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.152046 1598 update_attempter.cc:306] Processing Done.
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.152056 1598 update_attempter.cc:310] Error event sent.
Apr 22 23:53:16.169969 update_engine[1598]: I20260422 23:53:16.152073 1598 update_check_scheduler.cc:74] Next update check in 43m10s
Apr 22 23:53:16.189102 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 22 23:53:16.254235 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 22 23:53:19.031924 kubelet[2876]: E0422 23:53:19.030871 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.946s"
Apr 22 23:53:20.997864 containerd[1625]: time="2026-04-22T23:53:20.988319965Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:53:21.086159 containerd[1625]: time="2026-04-22T23:53:21.084666095Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29347592"
Apr 22 23:53:21.241819 containerd[1625]: time="2026-04-22T23:53:21.233935790Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:53:21.952315 containerd[1625]: time="2026-04-22T23:53:21.951092592Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 22 23:53:22.058553 containerd[1625]: time="2026-04-22T23:53:22.057257564Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 46.715044603s"
Apr 22 23:53:22.065950 containerd[1625]: time="2026-04-22T23:53:22.061095716Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Apr 22 23:53:22.430891 kubelet[2876]: E0422 23:53:22.430057 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.272s"
Apr 22 23:53:23.259144 containerd[1625]: time="2026-04-22T23:53:23.256953341Z" level=info msg="CreateContainer within sandbox \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 22 23:53:23.849762 kubelet[2876]: E0422 23:53:23.847576 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.416s"
Apr 22 23:53:24.328282 containerd[1625]: time="2026-04-22T23:53:24.310051631Z" level=info msg="Container 9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:53:24.349279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630110794.mount: Deactivated successfully.
Apr 22 23:53:24.639989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403283288.mount: Deactivated successfully.
Apr 22 23:53:25.281570 containerd[1625]: time="2026-04-22T23:53:25.281176382Z" level=info msg="CreateContainer within sandbox \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7\""
Apr 22 23:53:25.405911 containerd[1625]: time="2026-04-22T23:53:25.397880415Z" level=info msg="StartContainer for \"9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7\""
Apr 22 23:53:25.589260 containerd[1625]: time="2026-04-22T23:53:25.574987972Z" level=info msg="connecting to shim 9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7" address="unix:///run/containerd/s/e4efc50ae82cc158f75577ae2a6bdff013f9d1607ddbd44c5ba2461b63d261c3" protocol=ttrpc version=3
Apr 22 23:53:26.862287 systemd[1]: Started cri-containerd-9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7.scope - libcontainer container 9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7.
Apr 22 23:53:27.515838 kubelet[2876]: E0422 23:53:27.512053 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.565s"
Apr 22 23:53:28.796084 kubelet[2876]: E0422 23:53:28.795165 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.277s"
Apr 22 23:53:29.854189 containerd[1625]: time="2026-04-22T23:53:29.850907223Z" level=error msg="get state for 9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7" error="context deadline exceeded"
Apr 22 23:53:29.865258 containerd[1625]: time="2026-04-22T23:53:29.860307902Z" level=warning msg="unknown status" status=0
Apr 22 23:53:30.507880 containerd[1625]: time="2026-04-22T23:53:30.503259027Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:53:31.455101 systemd[1]: cri-containerd-9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7.scope: Deactivated successfully.
Apr 22 23:53:31.485156 systemd[1]: cri-containerd-9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7.scope: Consumed 1.071s CPU time, 4.3M memory peak, 4K read from disk.
Apr 22 23:53:31.574984 containerd[1625]: time="2026-04-22T23:53:31.569288006Z" level=info msg="received container exit event container_id:\"9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7\" id:\"9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7\" pid:3262 exited_at:{seconds:1776902011 nanos:544426389}"
Apr 22 23:53:31.758996 containerd[1625]: time="2026-04-22T23:53:31.744476326Z" level=info msg="StartContainer for \"9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7\" returns successfully"
Apr 22 23:53:32.106702 kubelet[2876]: I0422 23:53:32.101659 2876 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 22 23:53:32.259881 kubelet[2876]: E0422 23:53:32.245307 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.287s"
Apr 22 23:53:34.142833 kubelet[2876]: E0422 23:53:34.125173 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.839s"
Apr 22 23:53:34.343863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7-rootfs.mount: Deactivated successfully.
Apr 22 23:53:34.440125 kubelet[2876]: E0422 23:53:34.406186 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:35.314822 kubelet[2876]: E0422 23:53:35.311205 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.046s"
Apr 22 23:53:37.776253 kubelet[2876]: E0422 23:53:37.775259 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:37.859207 kubelet[2876]: E0422 23:53:37.852878 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.9s"
Apr 22 23:53:39.672249 containerd[1625]: time="2026-04-22T23:53:39.671778261Z" level=info msg="CreateContainer within sandbox \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Apr 22 23:53:40.091184 containerd[1625]: time="2026-04-22T23:53:40.073337700Z" level=info msg="Container e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:53:40.234190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279715358.mount: Deactivated successfully.
Apr 22 23:53:40.377749 kubelet[2876]: E0422 23:53:40.371458 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.289s"
Apr 22 23:53:40.647860 containerd[1625]: time="2026-04-22T23:53:40.644236616Z" level=info msg="CreateContainer within sandbox \"bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9\""
Apr 22 23:53:40.675633 containerd[1625]: time="2026-04-22T23:53:40.675395405Z" level=info msg="StartContainer for \"e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9\""
Apr 22 23:53:40.834953 containerd[1625]: time="2026-04-22T23:53:40.832418924Z" level=info msg="connecting to shim e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9" address="unix:///run/containerd/s/e4efc50ae82cc158f75577ae2a6bdff013f9d1607ddbd44c5ba2461b63d261c3" protocol=ttrpc version=3
Apr 22 23:53:41.872970 kubelet[2876]: E0422 23:53:41.850496 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.466s"
Apr 22 23:53:43.543217 systemd[1]: Started cri-containerd-e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9.scope - libcontainer container e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9.
Apr 22 23:53:44.428706 kubelet[2876]: E0422 23:53:44.428199 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.282s"
Apr 22 23:53:47.907840 containerd[1625]: time="2026-04-22T23:53:47.903270479Z" level=info msg="StartContainer for \"e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9\" returns successfully"
Apr 22 23:53:48.109945 kubelet[2876]: E0422 23:53:48.109423 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:49.193472 kubelet[2876]: E0422 23:53:49.193194 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:50.757936 kubelet[2876]: E0422 23:53:50.701485 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:57.220726 kubelet[2876]: E0422 23:53:57.185159 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:53:58.662913 systemd[1736]: Created slice background.slice - User Background Tasks Slice.
Apr 22 23:53:58.685979 systemd[1736]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Apr 22 23:53:59.143039 systemd[1736]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Apr 22 23:54:02.606924 kubelet[2876]: I0422 23:54:02.604393 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9h9jm" podStartSLOduration=50.143425597 podStartE2EDuration="1m49.604374902s" podCreationTimestamp="2026-04-22 23:52:13 +0000 UTC" firstStartedPulling="2026-04-22 23:52:22.900022869 +0000 UTC m=+48.026948758" lastFinishedPulling="2026-04-22 23:53:22.360972166 +0000 UTC m=+107.487898063" observedRunningTime="2026-04-22 23:53:58.833065634 +0000 UTC m=+143.959991537" watchObservedRunningTime="2026-04-22 23:54:02.604374902 +0000 UTC m=+147.731300795"
Apr 22 23:54:03.792619 kubelet[2876]: I0422 23:54:03.791152 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc-config-volume\") pod \"coredns-674b8bbfcf-6hv5h\" (UID: \"a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc\") " pod="kube-system/coredns-674b8bbfcf-6hv5h"
Apr 22 23:54:03.833695 kubelet[2876]: I0422 23:54:03.831014 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2nk\" (UniqueName: \"kubernetes.io/projected/a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc-kube-api-access-wt2nk\") pod \"coredns-674b8bbfcf-6hv5h\" (UID: \"a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc\") " pod="kube-system/coredns-674b8bbfcf-6hv5h"
Apr 22 23:54:04.261950 kubelet[2876]: I0422 23:54:04.259130 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ee5781-812f-4669-af64-4c3f91e212ca-config-volume\") pod \"coredns-674b8bbfcf-x54dm\" (UID: \"59ee5781-812f-4669-af64-4c3f91e212ca\") " pod="kube-system/coredns-674b8bbfcf-x54dm"
Apr 22 23:54:04.261950 kubelet[2876]: I0422 23:54:04.260842 2876 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hndgm\" (UniqueName: \"kubernetes.io/projected/59ee5781-812f-4669-af64-4c3f91e212ca-kube-api-access-hndgm\") pod \"coredns-674b8bbfcf-x54dm\" (UID: \"59ee5781-812f-4669-af64-4c3f91e212ca\") " pod="kube-system/coredns-674b8bbfcf-x54dm"
Apr 22 23:54:04.461037 systemd[1]: Created slice kubepods-burstable-poda8bd6632_d3b8_4f3c_b1bc_eb8e99294ebc.slice - libcontainer container kubepods-burstable-poda8bd6632_d3b8_4f3c_b1bc_eb8e99294ebc.slice.
Apr 22 23:54:05.001219 kubelet[2876]: E0422 23:54:04.999097 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.052s"
Apr 22 23:54:05.160639 kubelet[2876]: E0422 23:54:05.155467 2876 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Apr 22 23:54:05.189058 kubelet[2876]: E0422 23:54:05.188908 2876 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc-config-volume podName:a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc nodeName:}" failed. No retries permitted until 2026-04-22 23:54:05.688745005 +0000 UTC m=+150.815670894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc-config-volume") pod "coredns-674b8bbfcf-6hv5h" (UID: "a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc") : failed to sync configmap cache: timed out waiting for the condition
Apr 22 23:54:05.437742 systemd[1]: Created slice kubepods-burstable-pod59ee5781_812f_4669_af64_4c3f91e212ca.slice - libcontainer container kubepods-burstable-pod59ee5781_812f_4669_af64_4c3f91e212ca.slice.
Apr 22 23:54:08.568012 kubelet[2876]: E0422 23:54:08.567776 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:08.591828 containerd[1625]: time="2026-04-22T23:54:08.591138077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6hv5h,Uid:a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc,Namespace:kube-system,Attempt:0,}"
Apr 22 23:54:08.849952 systemd-networkd[1531]: flannel.1: Link UP
Apr 22 23:54:08.851034 systemd-networkd[1531]: flannel.1: Gained carrier
Apr 22 23:54:09.564008 kubelet[2876]: E0422 23:54:09.562957 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:09.588550 containerd[1625]: time="2026-04-22T23:54:09.587250230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x54dm,Uid:59ee5781-812f-4669-af64-4c3f91e212ca,Namespace:kube-system,Attempt:0,}"
Apr 22 23:54:10.700386 systemd-networkd[1531]: flannel.1: Gained IPv6LL
Apr 22 23:54:12.000323 containerd[1625]: time="2026-04-22T23:54:11.998196391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6hv5h,Uid:a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03de836ca4c481426335c091919eef960d83fc3964c4f3d9ecab3ae23c37c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 22 23:54:12.030216 systemd[1]: run-netns-cni\x2da5cd2a3d\x2d6c70\x2dbe31\x2dcb48\x2de230cfec4147.mount: Deactivated successfully.
Apr 22 23:54:12.051995 kubelet[2876]: E0422 23:54:12.049288 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03de836ca4c481426335c091919eef960d83fc3964c4f3d9ecab3ae23c37c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 22 23:54:12.103265 kubelet[2876]: E0422 23:54:12.087452 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03de836ca4c481426335c091919eef960d83fc3964c4f3d9ecab3ae23c37c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6hv5h"
Apr 22 23:54:12.120797 kubelet[2876]: E0422 23:54:12.088242 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03de836ca4c481426335c091919eef960d83fc3964c4f3d9ecab3ae23c37c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6hv5h"
Apr 22 23:54:12.581951 kubelet[2876]: E0422 23:54:12.358399 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6hv5h_kube-system(a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6hv5h_kube-system(a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f03de836ca4c481426335c091919eef960d83fc3964c4f3d9ecab3ae23c37c4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-6hv5h" podUID="a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc"
Apr 22 23:54:13.467248 containerd[1625]: time="2026-04-22T23:54:13.461119132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x54dm,Uid:59ee5781-812f-4669-af64-4c3f91e212ca,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61045634865dab77e6457eaa8650b889cdb68472733c6bb695f61211b2772553\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 22 23:54:13.558180 kubelet[2876]: E0422 23:54:13.558057 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.554s"
Apr 22 23:54:13.565119 kubelet[2876]: E0422 23:54:13.564446 2876 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61045634865dab77e6457eaa8650b889cdb68472733c6bb695f61211b2772553\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Apr 22 23:54:13.568224 systemd[1]: run-netns-cni\x2d79ddb23d\x2da1a1\x2d8677\x2d92bd\x2d47e8a77faf5b.mount: Deactivated successfully.
Apr 22 23:54:13.573098 kubelet[2876]: E0422 23:54:13.568858 2876 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61045634865dab77e6457eaa8650b889cdb68472733c6bb695f61211b2772553\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-x54dm"
Apr 22 23:54:13.604839 kubelet[2876]: E0422 23:54:13.570325 2876 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61045634865dab77e6457eaa8650b889cdb68472733c6bb695f61211b2772553\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-x54dm"
Apr 22 23:54:13.685843 kubelet[2876]: E0422 23:54:13.671076 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x54dm_kube-system(59ee5781-812f-4669-af64-4c3f91e212ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x54dm_kube-system(59ee5781-812f-4669-af64-4c3f91e212ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61045634865dab77e6457eaa8650b889cdb68472733c6bb695f61211b2772553\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-x54dm" podUID="59ee5781-812f-4669-af64-4c3f91e212ca"
Apr 22 23:54:19.594324 kubelet[2876]: E0422 23:54:19.544485 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.582s"
Apr 22 23:54:19.898027 systemd[1]: cri-containerd-53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4.scope: Deactivated successfully.
Apr 22 23:54:19.945479 systemd[1]: cri-containerd-53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4.scope: Consumed 48.312s CPU time, 53.3M memory peak.
Apr 22 23:54:20.098988 containerd[1625]: time="2026-04-22T23:54:20.096272874Z" level=info msg="received container exit event container_id:\"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\" id:\"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\" pid:2764 exit_status:1 exited_at:{seconds:1776902060 nanos:73384821}"
Apr 22 23:54:22.410205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4-rootfs.mount: Deactivated successfully.
Apr 22 23:54:23.027715 kubelet[2876]: E0422 23:54:23.023294 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.855s"
Apr 22 23:54:23.747838 kubelet[2876]: I0422 23:54:23.745481 2876 scope.go:117] "RemoveContainer" containerID="24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b"
Apr 22 23:54:23.755015 kubelet[2876]: I0422 23:54:23.754161 2876 scope.go:117] "RemoveContainer" containerID="53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4"
Apr 22 23:54:23.764660 kubelet[2876]: E0422 23:54:23.763024 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:23.939001 kubelet[2876]: E0422 23:54:23.938957 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:24.049896 containerd[1625]: time="2026-04-22T23:54:24.045225366Z" level=info msg="RemoveContainer for \"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\""
Apr 22 23:54:24.265842 containerd[1625]: time="2026-04-22T23:54:24.263184011Z" level=info msg="RemoveContainer for \"24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b\" returns successfully"
Apr 22 23:54:24.271940 containerd[1625]: time="2026-04-22T23:54:24.268587680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6hv5h,Uid:a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc,Namespace:kube-system,Attempt:0,}"
Apr 22 23:54:24.692670 containerd[1625]: time="2026-04-22T23:54:24.691048624Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 22 23:54:25.567831 kubelet[2876]: E0422 23:54:25.566112 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.585s"
Apr 22 23:54:25.643968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950329575.mount: Deactivated successfully.
Apr 22 23:54:26.136105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311313830.mount: Deactivated successfully.
Apr 22 23:54:26.148828 containerd[1625]: time="2026-04-22T23:54:26.147309533Z" level=info msg="Container 82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:54:26.954983 containerd[1625]: time="2026-04-22T23:54:26.954169737Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\""
Apr 22 23:54:27.297874 systemd-networkd[1531]: cni0: Link UP
Apr 22 23:54:27.297936 systemd-networkd[1531]: cni0: Gained carrier
Apr 22 23:54:27.587689 systemd-networkd[1531]: cni0: Lost carrier
Apr 22 23:54:27.804990 containerd[1625]: time="2026-04-22T23:54:27.803123975Z" level=info msg="StartContainer for \"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\""
Apr 22 23:54:28.650026 containerd[1625]: time="2026-04-22T23:54:28.649322033Z" level=info msg="connecting to shim 82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" protocol=ttrpc version=3
Apr 22 23:54:28.934030 systemd-networkd[1531]: vethb3b9c27f: Link UP
Apr 22 23:54:29.087003 systemd-networkd[1531]: cni0: Gained IPv6LL
Apr 22 23:54:29.265818 kernel: cni0: port 1(vethb3b9c27f) entered blocking state
Apr 22 23:54:29.277131 kernel: cni0: port 1(vethb3b9c27f) entered disabled state
Apr 22 23:54:29.322222 kernel: vethb3b9c27f: entered allmulticast mode
Apr 22 23:54:29.376751 kernel: vethb3b9c27f: entered promiscuous mode
Apr 22 23:54:30.215676 kubelet[2876]: E0422 23:54:30.215207 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.077s"
Apr 22 23:54:31.155932 kernel: cni0: port 1(vethb3b9c27f) entered blocking state
Apr 22 23:54:31.156705 kernel: cni0: port 1(vethb3b9c27f) entered forwarding state
Apr 22 23:54:31.157103 systemd-networkd[1531]: vethb3b9c27f: Gained carrier
Apr 22 23:54:31.202196 systemd-networkd[1531]: cni0: Gained carrier
Apr 22 23:54:31.449097 systemd[1]: Started cri-containerd-82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266.scope - libcontainer container 82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266.
Apr 22 23:54:32.306929 kubelet[2876]: E0422 23:54:32.306181 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.997s"
Apr 22 23:54:32.503025 containerd[1625]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000104920), "name":"cbr0", "type":"bridge"}
Apr 22 23:54:32.503025 containerd[1625]: delegateAdd: netconf sent to delegate plugin:
Apr 22 23:54:32.782743 systemd-networkd[1531]: vethb3b9c27f: Gained IPv6LL
Apr 22 23:54:34.086088 containerd[1625]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-22T23:54:34.085102200Z" level=error msg="get state for 82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266" error="context deadline exceeded"
Apr 22 23:54:34.093762 containerd[1625]: time="2026-04-22T23:54:34.086030182Z" level=warning msg="unknown status" status=0
Apr 22 23:54:34.102723 kubelet[2876]: E0422 23:54:34.091342 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.595s"
Apr 22 23:54:34.642355 kubelet[2876]: E0422 23:54:34.640040 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:34.657841 containerd[1625]: time="2026-04-22T23:54:34.655258692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x54dm,Uid:59ee5781-812f-4669-af64-4c3f91e212ca,Namespace:kube-system,Attempt:0,}"
Apr 22 23:54:35.745188 containerd[1625]: time="2026-04-22T23:54:35.744779200Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:54:35.807238 containerd[1625]: time="2026-04-22T23:54:35.799660720Z" level=info msg="connecting to shim 2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6" address="unix:///run/containerd/s/3eecebd403447fb4f09fdc8428a1dd70d47d6cb7fd17326505479e051956d592" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:54:37.051960 kubelet[2876]: E0422 23:54:37.032892 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:38.790397 containerd[1625]: time="2026-04-22T23:54:38.778298329Z" level=info msg="StartContainer for \"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\" returns successfully"
Apr 22 23:54:38.867072 systemd-networkd[1531]: veth3fa1db88: Link UP
Apr 22 23:54:39.416731 kernel: cni0: port 2(veth3fa1db88) entered blocking state
Apr 22 23:54:39.417341 kernel: cni0: port 2(veth3fa1db88) entered disabled state
Apr 22 23:54:39.438909 kernel: veth3fa1db88: entered allmulticast mode
Apr 22 23:54:39.459867 kernel: veth3fa1db88: entered promiscuous mode
Apr 22 23:54:39.766948 systemd[1]: Started cri-containerd-2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6.scope - libcontainer container 2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6.
Apr 22 23:54:41.143726 kernel: cni0: port 2(veth3fa1db88) entered blocking state
Apr 22 23:54:41.145291 kernel: cni0: port 2(veth3fa1db88) entered forwarding state
Apr 22 23:54:41.139977 systemd-networkd[1531]: veth3fa1db88: Gained carrier
Apr 22 23:54:41.683062 containerd[1625]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011a7d0), "name":"cbr0", "type":"bridge"}
Apr 22 23:54:41.683062 containerd[1625]: delegateAdd: netconf sent to delegate plugin:
Apr 22 23:54:41.962978 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 22 23:54:42.510094 systemd-networkd[1531]: veth3fa1db88: Gained IPv6LL
Apr 22 23:54:42.755997 kubelet[2876]: E0422 23:54:42.751295 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.794s"
Apr 22 23:54:44.009891 containerd[1625]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-22T23:54:44.008291089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6hv5h,Uid:a8bd6632-d3b8-4f3c-b1bc-eb8e99294ebc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6\""
Apr 22 23:54:44.441201 containerd[1625]: time="2026-04-22T23:54:44.404342816Z" level=info msg="connecting to shim eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be" address="unix:///run/containerd/s/45245b1c9726a8ed76d857d915b6115b6079e03fe4ad3d7b5af673347c150283" namespace=k8s.io protocol=ttrpc version=3
Apr 22 23:54:44.445984 kubelet[2876]: E0422 23:54:44.443255 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:44.848243 kubelet[2876]: E0422 23:54:44.846370 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:45.864906 kubelet[2876]: E0422 23:54:45.804368 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.053s"
Apr 22 23:54:45.886301 containerd[1625]: time="2026-04-22T23:54:45.876997497Z" level=info msg="CreateContainer within sandbox \"2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 22 23:54:46.392333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697306265.mount: Deactivated successfully.
Apr 22 23:54:46.658931 containerd[1625]: time="2026-04-22T23:54:46.648297746Z" level=info msg="Container cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:54:46.775826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524958436.mount: Deactivated successfully.
Apr 22 23:54:46.961109 containerd[1625]: time="2026-04-22T23:54:46.958940787Z" level=info msg="CreateContainer within sandbox \"2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2\""
Apr 22 23:54:47.345804 containerd[1625]: time="2026-04-22T23:54:47.300291213Z" level=info msg="StartContainer for \"cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2\""
Apr 22 23:54:47.542727 containerd[1625]: time="2026-04-22T23:54:47.542153108Z" level=info msg="connecting to shim cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2" address="unix:///run/containerd/s/3eecebd403447fb4f09fdc8428a1dd70d47d6cb7fd17326505479e051956d592" protocol=ttrpc version=3
Apr 22 23:54:47.777256 systemd[1]: Started cri-containerd-eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be.scope - libcontainer container eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be.
Apr 22 23:54:48.268328 kubelet[2876]: E0422 23:54:48.268164 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.403s"
Apr 22 23:54:49.288258 systemd[1]: Started cri-containerd-cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2.scope - libcontainer container cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2.
Apr 22 23:54:49.664811 kubelet[2876]: E0422 23:54:49.663931 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.323s"
Apr 22 23:54:49.844477 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 22 23:54:50.293115 containerd[1625]: time="2026-04-22T23:54:50.289438469Z" level=error msg="get state for eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be" error="context deadline exceeded"
Apr 22 23:54:50.293115 containerd[1625]: time="2026-04-22T23:54:50.318149778Z" level=warning msg="unknown status" status=0
Apr 22 23:54:50.380080 kubelet[2876]: E0422 23:54:50.378906 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:51.130626 containerd[1625]: time="2026-04-22T23:54:51.130058077Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:54:52.167998 containerd[1625]: time="2026-04-22T23:54:52.167903410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x54dm,Uid:59ee5781-812f-4669-af64-4c3f91e212ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be\""
Apr 22 23:54:52.293972 kubelet[2876]: E0422 23:54:52.292209 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:52.391224 kubelet[2876]: E0422 23:54:52.390354 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:52.990827 kubelet[2876]: E0422 23:54:52.988975 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s"
Apr 22 23:54:53.434828 containerd[1625]: time="2026-04-22T23:54:53.432732543Z" level=info msg="CreateContainer within sandbox \"eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 22 23:54:54.170904 containerd[1625]: time="2026-04-22T23:54:54.166046080Z" level=info msg="StartContainer for \"cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2\" returns successfully"
Apr 22 23:54:54.355887 containerd[1625]: time="2026-04-22T23:54:54.353432311Z" level=info msg="Container fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:54:54.502967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2841006829.mount: Deactivated successfully.
Apr 22 23:54:54.850583 containerd[1625]: time="2026-04-22T23:54:54.844036653Z" level=info msg="CreateContainer within sandbox \"eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6\""
Apr 22 23:54:54.907976 systemd[1]: cri-containerd-984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6.scope: Deactivated successfully.
Apr 22 23:54:54.937098 systemd[1]: cri-containerd-984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6.scope: Consumed 1min 3.135s CPU time, 25M memory peak.
Apr 22 23:54:54.997609 containerd[1625]: time="2026-04-22T23:54:54.995050850Z" level=info msg="StartContainer for \"fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6\""
Apr 22 23:54:55.248490 containerd[1625]: time="2026-04-22T23:54:55.238478956Z" level=info msg="received container exit event container_id:\"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\" id:\"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\" pid:2657 exit_status:1 exited_at:{seconds:1776902095 nanos:50375837}"
Apr 22 23:54:55.275798 kubelet[2876]: E0422 23:54:55.264380 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.218s"
Apr 22 23:54:55.355730 containerd[1625]: time="2026-04-22T23:54:55.346489016Z" level=info msg="connecting to shim fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6" address="unix:///run/containerd/s/45245b1c9726a8ed76d857d915b6115b6079e03fe4ad3d7b5af673347c150283" protocol=ttrpc version=3
Apr 22 23:54:55.963773 kubelet[2876]: E0422 23:54:55.959357 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:54:58.154164 systemd[1]: Started cri-containerd-fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6.scope - libcontainer container fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6.
Apr 22 23:54:59.184247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6-rootfs.mount: Deactivated successfully.
Apr 22 23:54:59.211713 kubelet[2876]: E0422 23:54:59.209055 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.235s"
Apr 22 23:54:59.461146 containerd[1625]: time="2026-04-22T23:54:59.457932835Z" level=info msg="container event discarded" container=c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e type=CONTAINER_CREATED_EVENT
Apr 22 23:54:59.461146 containerd[1625]: time="2026-04-22T23:54:59.458203669Z" level=info msg="container event discarded" container=c6bac13b94006ba425aa4bc4fb250537550fda612feb518f8e75232eaa30d02e type=CONTAINER_STARTED_EVENT
Apr 22 23:54:59.523805 containerd[1625]: time="2026-04-22T23:54:59.521948227Z" level=info msg="container event discarded" container=2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83 type=CONTAINER_CREATED_EVENT
Apr 22 23:54:59.523805 containerd[1625]: time="2026-04-22T23:54:59.523556072Z" level=info msg="container event discarded" container=2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83 type=CONTAINER_STARTED_EVENT
Apr 22 23:54:59.617018 containerd[1625]: time="2026-04-22T23:54:59.612146500Z" level=info msg="container event discarded" container=e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f type=CONTAINER_CREATED_EVENT
Apr 22 23:54:59.624224 containerd[1625]: time="2026-04-22T23:54:59.615249972Z" level=info msg="container event discarded" container=e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f type=CONTAINER_STARTED_EVENT
Apr 22 23:54:59.761769 containerd[1625]: time="2026-04-22T23:54:59.755107909Z" level=info msg="container event discarded" container=74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673 type=CONTAINER_CREATED_EVENT
Apr 22 23:54:59.835696 containerd[1625]: time="2026-04-22T23:54:59.831073306Z" level=info msg="container event discarded" container=984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6 type=CONTAINER_CREATED_EVENT
Apr 22 23:54:59.916330 containerd[1625]: time="2026-04-22T23:54:59.913191376Z" level=info msg="container event discarded" container=24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b type=CONTAINER_CREATED_EVENT
Apr 22 23:55:00.175456 kubelet[2876]: E0422 23:55:00.162403 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:00.841259 kubelet[2876]: E0422 23:55:00.839372 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.623s"
Apr 22 23:55:00.956709 containerd[1625]: time="2026-04-22T23:55:00.955059498Z" level=error msg="get state for fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6" error="context deadline exceeded"
Apr 22 23:55:00.967223 containerd[1625]: time="2026-04-22T23:55:00.960713594Z" level=warning msg="unknown status" status=0
Apr 22 23:55:01.127983 containerd[1625]: time="2026-04-22T23:55:01.124985270Z" level=info msg="container event discarded" container=74ab14f4259962fd9224b102de5de3f69d39b0b971d486d485296afe00fb0673 type=CONTAINER_STARTED_EVENT
Apr 22 23:55:01.276863 containerd[1625]: time="2026-04-22T23:55:01.275842544Z" level=info msg="container event discarded" container=984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6 type=CONTAINER_STARTED_EVENT
Apr 22 23:55:01.557595 containerd[1625]: time="2026-04-22T23:55:01.544309109Z" level=info msg="container event discarded" container=24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b type=CONTAINER_STARTED_EVENT
Apr 22 23:55:01.992996 kubelet[2876]: E0422 23:55:01.912089 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:02.032648 containerd[1625]: time="2026-04-22T23:55:02.031812586Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:55:02.186511 kubelet[2876]: E0422 23:55:02.178330 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:03.244390 kubelet[2876]: I0422 23:55:03.244235 2876 scope.go:117] "RemoveContainer" containerID="984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6"
Apr 22 23:55:03.252790 containerd[1625]: time="2026-04-22T23:55:03.244588454Z" level=info msg="StartContainer for \"fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6\" returns successfully"
Apr 22 23:55:03.258640 kubelet[2876]: E0422 23:55:03.257484 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:03.395907 containerd[1625]: time="2026-04-22T23:55:03.394123987Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 22 23:55:03.655460 containerd[1625]: time="2026-04-22T23:55:03.651479249Z" level=info msg="Container 363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:55:03.880493 containerd[1625]: time="2026-04-22T23:55:03.879346390Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\""
Apr 22 23:55:03.891772 containerd[1625]: time="2026-04-22T23:55:03.890926800Z" level=info msg="StartContainer for \"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\""
Apr 22 23:55:03.932030 containerd[1625]: time="2026-04-22T23:55:03.927222648Z" level=info msg="connecting to shim 363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488" address="unix:///run/containerd/s/008a852ad47db2e030aefc8056a2e849ba474c4802ea5eebff2d501bd41a664c" protocol=ttrpc version=3
Apr 22 23:55:04.855771 kubelet[2876]: E0422 23:55:04.855128 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:05.243205 kubelet[2876]: E0422 23:55:05.219369 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:05.454243 systemd[1]: Started cri-containerd-363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488.scope - libcontainer container 363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488.
Apr 22 23:55:06.723856 kubelet[2876]: E0422 23:55:06.721822 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:07.691456 kubelet[2876]: E0422 23:55:07.690365 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:07.781143 containerd[1625]: time="2026-04-22T23:55:07.780400965Z" level=error msg="get state for 363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488" error="context deadline exceeded"
Apr 22 23:55:07.781143 containerd[1625]: time="2026-04-22T23:55:07.780655728Z" level=warning msg="unknown status" status=0
Apr 22 23:55:07.994935 containerd[1625]: time="2026-04-22T23:55:07.992155462Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 22 23:55:09.023375 containerd[1625]: time="2026-04-22T23:55:09.023166503Z" level=info msg="StartContainer for \"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\" returns successfully"
Apr 22 23:55:10.195957 kubelet[2876]: E0422 23:55:10.195738 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:10.250008 kubelet[2876]: E0422 23:55:10.249509 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:11.548118 kubelet[2876]: E0422 23:55:11.546165 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:12.957406 kubelet[2876]: E0422 23:55:12.956748 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:13.908016 kubelet[2876]: I0422 23:55:13.905912 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6hv5h" podStartSLOduration=179.903884123 podStartE2EDuration="2m59.903884123s" podCreationTimestamp="2026-04-22 23:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:55:11.296193969 +0000 UTC m=+216.423119868" watchObservedRunningTime="2026-04-22 23:55:13.903884123 +0000 UTC m=+219.030810022"
Apr 22 23:55:17.711672 kubelet[2876]: E0422 23:55:17.710593 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:19.752609 kubelet[2876]: E0422 23:55:19.751324 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:20.145014 kubelet[2876]: E0422 23:55:20.136185 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:20.745791 kubelet[2876]: E0422 23:55:20.745717 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:21.811959 kubelet[2876]: E0422 23:55:21.808664 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:55:22.446783 kubelet[2876]: I0422 23:55:22.444929 2876 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x54dm" podStartSLOduration=189.444891948 podStartE2EDuration="3m9.444891948s" podCreationTimestamp="2026-04-22 23:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-22 23:55:21.286370907 +0000 UTC m=+226.413296807" watchObservedRunningTime="2026-04-22 23:55:22.444891948 +0000 UTC m=+227.571817848"
Apr 22 23:55:57.074152 kubelet[2876]: E0422 23:55:57.058349 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.114s"
Apr 22 23:56:03.873631 systemd[1]: cri-containerd-82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266.scope: Deactivated successfully.
Apr 22 23:56:03.896591 systemd[1]: cri-containerd-82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266.scope: Consumed 21.879s CPU time, 37.4M memory peak.
Apr 22 23:56:03.994188 containerd[1625]: time="2026-04-22T23:56:03.994093372Z" level=info msg="received container exit event container_id:\"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\" id:\"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\" pid:3610 exit_status:1 exited_at:{seconds:1776902163 nanos:902495310}"
Apr 22 23:56:05.463317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266-rootfs.mount: Deactivated successfully.
Apr 22 23:56:06.358644 kubelet[2876]: I0422 23:56:06.343271 2876 scope.go:117] "RemoveContainer" containerID="53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4"
Apr 22 23:56:06.464499 kubelet[2876]: I0422 23:56:06.464354 2876 scope.go:117] "RemoveContainer" containerID="82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266"
Apr 22 23:56:06.473176 kubelet[2876]: E0422 23:56:06.472020 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:56:06.497566 kubelet[2876]: E0422 23:56:06.495075 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec"
Apr 22 23:56:06.740688 containerd[1625]: time="2026-04-22T23:56:06.740174889Z" level=info msg="RemoveContainer for \"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\""
Apr 22 23:56:07.073072 containerd[1625]: time="2026-04-22T23:56:07.064348359Z" level=info msg="RemoveContainer for \"53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4\" returns successfully"
Apr 22 23:56:08.108475 containerd[1625]: time="2026-04-22T23:56:08.108091684Z" level=info msg="container event discarded" container=24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b type=CONTAINER_STOPPED_EVENT
Apr 22 23:56:09.536916 containerd[1625]: time="2026-04-22T23:56:09.536082023Z" level=info msg="container event discarded" container=53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4 type=CONTAINER_CREATED_EVENT
Apr 22 23:56:12.021550 containerd[1625]: time="2026-04-22T23:56:12.012289017Z" level=info msg="container event discarded" container=53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4 type=CONTAINER_STARTED_EVENT
Apr 22 23:56:16.595620 kubelet[2876]: I0422 23:56:16.592551 2876 scope.go:117] "RemoveContainer" containerID="82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266"
Apr 22 23:56:16.695071 kubelet[2876]: E0422 23:56:16.693477 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:56:17.171086 containerd[1625]: time="2026-04-22T23:56:17.167554187Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Apr 22 23:56:17.677341 containerd[1625]: time="2026-04-22T23:56:17.673308920Z" level=info msg="Container 6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a: CDI devices from CRI Config.CDIDevices: []"
Apr 22 23:56:17.999762 containerd[1625]: time="2026-04-22T23:56:17.993046945Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\""
Apr 22 23:56:18.155657 containerd[1625]: time="2026-04-22T23:56:18.152222782Z" level=info msg="StartContainer for \"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\""
Apr 22 23:56:18.376712 containerd[1625]: time="2026-04-22T23:56:18.361141522Z" level=info msg="connecting to shim 6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" protocol=ttrpc version=3
Apr 22 23:56:19.107801 kubelet[2876]: E0422 23:56:19.106433 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.152s"
Apr 22 23:56:19.198083 kubelet[2876]: E0422 23:56:19.190547 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:56:20.491491 systemd[1]: Started cri-containerd-6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a.scope - libcontainer container 6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a.
Apr 22 23:56:21.058105 kubelet[2876]: E0422 23:56:21.055393 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:21.258276 kubelet[2876]: E0422 23:56:21.252192 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:22.842612 containerd[1625]: time="2026-04-22T23:56:22.842273473Z" level=error msg="get state for 6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" error="context deadline exceeded" Apr 22 23:56:22.855992 containerd[1625]: time="2026-04-22T23:56:22.846709760Z" level=warning msg="unknown status" status=0 Apr 22 23:56:23.481560 containerd[1625]: time="2026-04-22T23:56:23.479823669Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 22 23:56:24.569198 containerd[1625]: time="2026-04-22T23:56:24.567369095Z" level=info msg="StartContainer for \"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\" returns successfully" Apr 22 23:56:25.710919 kubelet[2876]: E0422 23:56:25.705465 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:26.928555 kubelet[2876]: E0422 23:56:26.925429 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:27.074003 kubelet[2876]: E0422 23:56:27.068424 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:27.986940 kubelet[2876]: E0422 23:56:27.985458 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:32.677842 kubelet[2876]: E0422 23:56:32.672468 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:37.082727 kubelet[2876]: E0422 23:56:37.081646 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:56:41.340853 kubelet[2876]: E0422 23:56:41.330493 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.381s" Apr 22 23:56:44.033785 kubelet[2876]: E0422 23:56:44.029140 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:21.622640 containerd[1625]: time="2026-04-22T23:57:21.616997912Z" level=info msg="container event discarded" container=ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb type=CONTAINER_CREATED_EVENT Apr 22 23:57:21.622640 containerd[1625]: time="2026-04-22T23:57:21.617286931Z" level=info msg="container event discarded" container=ec6b21d0d85a255421f8254afb26e4ba4ba930d0cceacb0304b7483eecaeddfb type=CONTAINER_STARTED_EVENT Apr 22 23:57:22.472211 containerd[1625]: time="2026-04-22T23:57:22.471706394Z" level=info msg="container event discarded" container=bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564 type=CONTAINER_CREATED_EVENT Apr 22 23:57:22.472211 containerd[1625]: time="2026-04-22T23:57:22.471829874Z" level=info msg="container event discarded" container=bc4a3d81084ff263b607d6bc293d4cb5990b442f6575a37b4a1369a09283d564 type=CONTAINER_STARTED_EVENT Apr 22 23:57:22.904511 containerd[1625]: time="2026-04-22T23:57:22.903407590Z" 
level=info msg="container event discarded" container=f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab type=CONTAINER_CREATED_EVENT Apr 22 23:57:25.204713 containerd[1625]: time="2026-04-22T23:57:25.203772704Z" level=info msg="container event discarded" container=f491607eace78f3ca55578e9025cc6f12ea7af2c1388b98bf4e58b64ebc619ab type=CONTAINER_STARTED_EVENT Apr 22 23:57:29.785616 containerd[1625]: time="2026-04-22T23:57:29.775312699Z" level=info msg="container event discarded" container=ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7 type=CONTAINER_CREATED_EVENT Apr 22 23:57:31.904186 containerd[1625]: time="2026-04-22T23:57:31.904015837Z" level=info msg="container event discarded" container=ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7 type=CONTAINER_STARTED_EVENT Apr 22 23:57:33.879698 containerd[1625]: time="2026-04-22T23:57:33.871347649Z" level=info msg="container event discarded" container=ba4ba48cb7277449b5e178019e74fb5005c7e23232996ac90d485dffa9b805f7 type=CONTAINER_STOPPED_EVENT Apr 22 23:57:35.193696 kubelet[2876]: E0422 23:57:35.190225 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.247s" Apr 22 23:57:37.207952 kubelet[2876]: E0422 23:57:37.171800 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:38.032168 kubelet[2876]: E0422 23:57:38.013033 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:40.072679 kubelet[2876]: E0422 23:57:40.072366 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:43.058509 kubelet[2876]: E0422 
23:57:43.058159 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:43.796718 kubelet[2876]: E0422 23:57:43.795400 2876 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 22 23:57:45.806001 systemd[1]: cri-containerd-6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a.scope: Deactivated successfully. Apr 22 23:57:45.828501 systemd[1]: cri-containerd-6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a.scope: Consumed 19.730s CPU time, 29.8M memory peak. Apr 22 23:57:46.094823 containerd[1625]: time="2026-04-22T23:57:46.093506466Z" level=info msg="received container exit event container_id:\"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\" id:\"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\" pid:4213 exit_status:1 exited_at:{seconds:1776902266 nanos:1768416}" Apr 22 23:57:48.903500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a-rootfs.mount: Deactivated successfully. 
Apr 22 23:57:50.012753 kubelet[2876]: E0422 23:57:50.001316 2876 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 22 23:57:50.088397 kubelet[2876]: E0422 23:57:50.031936 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.092s" Apr 22 23:57:50.343909 containerd[1625]: time="2026-04-22T23:57:50.338320221Z" level=error msg="collecting metrics for 6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" error="ttrpc: closed" Apr 22 23:57:50.504100 kubelet[2876]: I0422 23:57:50.481405 2876 scope.go:117] "RemoveContainer" containerID="82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266" Apr 22 23:57:50.982724 systemd[1]: cri-containerd-363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488.scope: Deactivated successfully. Apr 22 23:57:51.007822 kubelet[2876]: I0422 23:57:50.985974 2876 scope.go:117] "RemoveContainer" containerID="6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" Apr 22 23:57:51.037379 systemd[1]: cri-containerd-363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488.scope: Consumed 35.093s CPU time, 22.3M memory peak. 
Apr 22 23:57:51.054824 containerd[1625]: time="2026-04-22T23:57:51.048472441Z" level=info msg="received container exit event container_id:\"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\" id:\"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\" pid:3951 exit_status:1 exited_at:{seconds:1776902271 nanos:34112573}" Apr 22 23:57:51.205610 containerd[1625]: time="2026-04-22T23:57:51.204242839Z" level=info msg="RemoveContainer for \"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\"" Apr 22 23:57:51.245863 kubelet[2876]: E0422 23:57:51.245297 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:51.248372 kubelet[2876]: E0422 23:57:51.245862 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 22 23:57:51.312120 containerd[1625]: time="2026-04-22T23:57:51.311811644Z" level=info msg="RemoveContainer for \"82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266\" returns successfully" Apr 22 23:57:52.782849 kubelet[2876]: E0422 23:57:52.771187 2876 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice/cri-containerd-363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488.scope\": RecentStats: unable to find data in memory cache]" Apr 22 23:57:53.000864 kubelet[2876]: E0422 23:57:52.996972 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:53.708058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488-rootfs.mount: Deactivated successfully. Apr 22 23:57:55.299352 kubelet[2876]: I0422 23:57:55.297115 2876 scope.go:117] "RemoveContainer" containerID="984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6" Apr 22 23:57:55.601203 containerd[1625]: time="2026-04-22T23:57:55.596004232Z" level=info msg="RemoveContainer for \"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\"" Apr 22 23:57:55.660224 kubelet[2876]: I0422 23:57:55.654477 2876 scope.go:117] "RemoveContainer" containerID="363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488" Apr 22 23:57:55.682760 kubelet[2876]: E0422 23:57:55.673512 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:55.736485 kubelet[2876]: E0422 23:57:55.734273 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(33fee6ba1581201eda98a989140db110)\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" Apr 22 23:57:55.881147 containerd[1625]: time="2026-04-22T23:57:55.880746946Z" level=info msg="RemoveContainer for \"984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6\" returns successfully" Apr 22 23:57:56.091052 kubelet[2876]: E0422 23:57:56.089156 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:56.647877 kubelet[2876]: I0422 23:57:56.647402 2876 scope.go:117] 
"RemoveContainer" containerID="6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" Apr 22 23:57:56.730229 kubelet[2876]: E0422 23:57:56.729098 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:57:56.735833 kubelet[2876]: E0422 23:57:56.734891 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 22 23:58:01.793052 kubelet[2876]: I0422 23:58:01.792303 2876 scope.go:117] "RemoveContainer" containerID="363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488" Apr 22 23:58:01.832120 kubelet[2876]: E0422 23:58:01.802650 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:02.099281 containerd[1625]: time="2026-04-22T23:58:02.088537513Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 22 23:58:02.713074 containerd[1625]: time="2026-04-22T23:58:02.703926947Z" level=info msg="Container a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:58:02.751306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757103472.mount: Deactivated successfully. 
Apr 22 23:58:03.363805 containerd[1625]: time="2026-04-22T23:58:03.363558836Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700\"" Apr 22 23:58:03.410791 kubelet[2876]: E0422 23:58:03.408660 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.467s" Apr 22 23:58:03.428185 containerd[1625]: time="2026-04-22T23:58:03.427949492Z" level=info msg="StartContainer for \"a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700\"" Apr 22 23:58:03.471985 containerd[1625]: time="2026-04-22T23:58:03.470385066Z" level=info msg="connecting to shim a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700" address="unix:///run/containerd/s/008a852ad47db2e030aefc8056a2e849ba474c4802ea5eebff2d501bd41a664c" protocol=ttrpc version=3 Apr 22 23:58:05.290785 kubelet[2876]: E0422 23:58:05.289407 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.294s" Apr 22 23:58:05.729121 systemd[1]: Started cri-containerd-a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700.scope - libcontainer container a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700. 
Apr 22 23:58:06.964035 kubelet[2876]: E0422 23:58:06.963222 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.005s" Apr 22 23:58:08.108838 containerd[1625]: time="2026-04-22T23:58:08.103674900Z" level=error msg="get state for a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700" error="context deadline exceeded" Apr 22 23:58:08.108838 containerd[1625]: time="2026-04-22T23:58:08.107468186Z" level=warning msg="unknown status" status=0 Apr 22 23:58:09.053495 kubelet[2876]: I0422 23:58:09.053277 2876 scope.go:117] "RemoveContainer" containerID="6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" Apr 22 23:58:09.062947 kubelet[2876]: E0422 23:58:09.062475 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:09.273107 containerd[1625]: time="2026-04-22T23:58:09.272779470Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}" Apr 22 23:58:09.666120 containerd[1625]: time="2026-04-22T23:58:09.665879537Z" level=info msg="Container e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:58:10.035940 containerd[1625]: time="2026-04-22T23:58:10.011245811Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9\"" Apr 22 23:58:10.157880 containerd[1625]: time="2026-04-22T23:58:10.157065480Z" level=info msg="StartContainer for \"e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9\"" Apr 22 23:58:10.297288 containerd[1625]: 
time="2026-04-22T23:58:10.292062811Z" level=error msg="get state for a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700" error="context deadline exceeded" Apr 22 23:58:10.297288 containerd[1625]: time="2026-04-22T23:58:10.292911917Z" level=warning msg="unknown status" status=0 Apr 22 23:58:10.358039 containerd[1625]: time="2026-04-22T23:58:10.322811275Z" level=info msg="connecting to shim e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" protocol=ttrpc version=3 Apr 22 23:58:11.411093 kubelet[2876]: E0422 23:58:11.407959 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.372s" Apr 22 23:58:11.665858 systemd[1]: Started cri-containerd-e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9.scope - libcontainer container e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9. 
Apr 22 23:58:12.541820 containerd[1625]: time="2026-04-22T23:58:12.541663557Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 22 23:58:12.541820 containerd[1625]: time="2026-04-22T23:58:12.541773780Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 22 23:58:14.057474 containerd[1625]: time="2026-04-22T23:58:14.056918620Z" level=info msg="StartContainer for \"a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700\" returns successfully" Apr 22 23:58:15.617833 kubelet[2876]: E0422 23:58:15.616802 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.663s" Apr 22 23:58:15.915828 containerd[1625]: time="2026-04-22T23:58:15.910421137Z" level=info msg="StartContainer for \"e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9\" returns successfully" Apr 22 23:58:16.101930 kubelet[2876]: E0422 23:58:16.071127 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:17.385808 kubelet[2876]: E0422 23:58:17.377461 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:17.544595 kubelet[2876]: E0422 23:58:17.543763 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:18.705840 kubelet[2876]: E0422 23:58:18.700499 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:19.748633 kubelet[2876]: E0422 23:58:19.746461 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:25.063914 containerd[1625]: time="2026-04-22T23:58:25.063111315Z" level=info msg="container event discarded" container=9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7 type=CONTAINER_CREATED_EVENT Apr 22 23:58:26.810997 kubelet[2876]: E0422 23:58:26.791466 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:29.681034 kubelet[2876]: E0422 23:58:29.679084 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:30.056750 kubelet[2876]: E0422 23:58:30.011881 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.407s" Apr 22 23:58:31.549389 containerd[1625]: time="2026-04-22T23:58:31.548907166Z" level=info msg="container event discarded" container=9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7 type=CONTAINER_STARTED_EVENT Apr 22 23:58:34.984234 containerd[1625]: time="2026-04-22T23:58:34.982245835Z" level=info msg="container event discarded" container=9be3d84a6567962c71c77138f33b9b867649f97c2257003550c257a13f17b9a7 type=CONTAINER_STOPPED_EVENT Apr 22 23:58:40.542049 containerd[1625]: time="2026-04-22T23:58:40.540867857Z" level=info msg="container event discarded" container=e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9 type=CONTAINER_CREATED_EVENT Apr 22 23:58:40.768563 kubelet[2876]: E0422 23:58:40.768001 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:45.244399 kubelet[2876]: E0422 23:58:45.205504 2876 kubelet.go:2627] "Housekeeping took longer 
than expected" err="housekeeping took too long" expected="1s" actual="1.256s" Apr 22 23:58:46.165630 kubelet[2876]: E0422 23:58:46.165317 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:47.160107 kubelet[2876]: E0422 23:58:47.154180 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:47.907821 containerd[1625]: time="2026-04-22T23:58:47.906027970Z" level=info msg="container event discarded" container=e9457298112d10f15a11a0fdad4aca2bfbc22aa0ebb72be087d63991b10088c9 type=CONTAINER_STARTED_EVENT Apr 22 23:58:53.845460 kubelet[2876]: E0422 23:58:53.844097 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:56.054505 kubelet[2876]: E0422 23:58:56.054361 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:57.053844 kubelet[2876]: E0422 23:58:57.053131 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:58:57.879325 systemd[1]: cri-containerd-e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9.scope: Deactivated successfully. 
Apr 22 23:58:57.969449 containerd[1625]: time="2026-04-22T23:58:57.909062088Z" level=info msg="received container exit event container_id:\"e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9\" id:\"e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9\" pid:4569 exit_status:1 exited_at:{seconds:1776902337 nanos:893281934}" Apr 22 23:58:57.982467 systemd[1]: cri-containerd-e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9.scope: Consumed 6.622s CPU time, 19.6M memory peak. Apr 22 23:58:58.413071 kubelet[2876]: E0422 23:58:58.412078 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:00.325849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9-rootfs.mount: Deactivated successfully. Apr 22 23:59:00.566043 kubelet[2876]: E0422 23:59:00.561869 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.52s" Apr 22 23:59:02.504059 kubelet[2876]: I0422 23:59:02.494262 2876 scope.go:117] "RemoveContainer" containerID="6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a" Apr 22 23:59:02.678899 kubelet[2876]: I0422 23:59:02.678238 2876 scope.go:117] "RemoveContainer" containerID="e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9" Apr 22 23:59:02.705925 kubelet[2876]: E0422 23:59:02.704424 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:02.784372 containerd[1625]: time="2026-04-22T23:59:02.779960459Z" level=info msg="RemoveContainer for \"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\"" Apr 22 23:59:02.873991 kubelet[2876]: E0422 23:59:02.873867 2876 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 22 23:59:03.092719 containerd[1625]: time="2026-04-22T23:59:03.089919658Z" level=info msg="RemoveContainer for \"6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a\" returns successfully" Apr 22 23:59:06.283108 kubelet[2876]: I0422 23:59:06.281947 2876 scope.go:117] "RemoveContainer" containerID="e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9" Apr 22 23:59:06.293289 kubelet[2876]: E0422 23:59:06.292257 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:06.293289 kubelet[2876]: E0422 23:59:06.292644 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 22 23:59:20.264382 kubelet[2876]: I0422 23:59:20.264242 2876 scope.go:117] "RemoveContainer" containerID="e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9" Apr 22 23:59:20.265870 kubelet[2876]: E0422 23:59:20.264482 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:20.282795 kubelet[2876]: E0422 23:59:20.276337 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 22 23:59:22.487909 containerd[1625]: time="2026-04-22T23:59:22.486898561Z" level=info msg="container event discarded" container=53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4 type=CONTAINER_STOPPED_EVENT Apr 22 23:59:24.281820 containerd[1625]: time="2026-04-22T23:59:24.275139926Z" level=info msg="container event discarded" container=24102edd7bd05250735d0a30fc200ca15e79f4847d483e26270398dea44e8f4b type=CONTAINER_DELETED_EVENT Apr 22 23:59:26.713588 containerd[1625]: time="2026-04-22T23:59:26.711080077Z" level=info msg="container event discarded" container=82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266 type=CONTAINER_CREATED_EVENT Apr 22 23:59:32.063670 kubelet[2876]: I0422 23:59:32.062761 2876 scope.go:117] "RemoveContainer" containerID="e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9" Apr 22 23:59:32.074950 kubelet[2876]: E0422 23:59:32.073032 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:32.078168 kubelet[2876]: E0422 23:59:32.077861 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(e9ca41790ae21be9f4cbd451ade0acec)\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 22 23:59:35.299817 kubelet[2876]: E0422 23:59:35.287199 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="1.298s" Apr 22 23:59:38.409812 containerd[1625]: time="2026-04-22T23:59:38.409166818Z" level=info msg="container event discarded" container=82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266 type=CONTAINER_STARTED_EVENT Apr 22 23:59:43.986384 kubelet[2876]: I0422 23:59:43.985270 2876 scope.go:117] "RemoveContainer" containerID="e41b2e5e5b6cefd6bfa58f91a798c2aeeda4ddea08dd75eb6c99df3f736042d9" Apr 22 23:59:44.039951 kubelet[2876]: E0422 23:59:43.995912 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 22 23:59:44.047353 containerd[1625]: time="2026-04-22T23:59:43.990083713Z" level=info msg="container event discarded" container=2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6 type=CONTAINER_CREATED_EVENT Apr 22 23:59:44.047353 containerd[1625]: time="2026-04-22T23:59:44.033303910Z" level=info msg="container event discarded" container=2b50f82959d7c84248b3b4abd79a1d33473d39e28f322fe523667b4af0ed12f6 type=CONTAINER_STARTED_EVENT Apr 22 23:59:44.646966 containerd[1625]: time="2026-04-22T23:59:44.646788850Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}" Apr 22 23:59:45.167081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183585508.mount: Deactivated successfully. Apr 22 23:59:45.208704 containerd[1625]: time="2026-04-22T23:59:45.198911666Z" level=info msg="Container ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955: CDI devices from CRI Config.CDIDevices: []" Apr 22 23:59:45.360619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963964984.mount: Deactivated successfully. 
Apr 22 23:59:45.666051 containerd[1625]: time="2026-04-22T23:59:45.648881114Z" level=info msg="CreateContainer within sandbox \"e8669ce21a3f51f960f35f8e0dd86019d77ce722a2fabbea5799927df0c40f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955\""
Apr 22 23:59:45.879174 containerd[1625]: time="2026-04-22T23:59:45.873439602Z" level=info msg="StartContainer for \"ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955\""
Apr 22 23:59:45.947838 containerd[1625]: time="2026-04-22T23:59:45.946520682Z" level=info msg="connecting to shim ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955" address="unix:///run/containerd/s/4c9d8b7dca752b08b905213a378800e7fc982aefd785442d227a931cc3f6c3f7" protocol=ttrpc version=3
Apr 22 23:59:46.913816 containerd[1625]: time="2026-04-22T23:59:46.913655433Z" level=info msg="container event discarded" container=cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2 type=CONTAINER_CREATED_EVENT
Apr 22 23:59:48.053994 kubelet[2876]: E0422 23:59:48.044247 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.007s"
Apr 22 23:59:48.398086 systemd[1]: Started cri-containerd-ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955.scope - libcontainer container ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955.
Apr 22 23:59:49.285088 kubelet[2876]: E0422 23:59:49.282486 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.169s"
Apr 22 23:59:52.178846 containerd[1625]: time="2026-04-22T23:59:52.177381146Z" level=info msg="container event discarded" container=eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be type=CONTAINER_CREATED_EVENT
Apr 22 23:59:52.178846 containerd[1625]: time="2026-04-22T23:59:52.178605514Z" level=info msg="container event discarded" container=eca5a8c8c6a6671b0818b638e3b885c3deb0e2d7d76ac4c188af3f233924c0be type=CONTAINER_STARTED_EVENT
Apr 22 23:59:52.295211 containerd[1625]: time="2026-04-22T23:59:52.294926115Z" level=info msg="StartContainer for \"ef2678148ddaa3cb7cc0a62888a859d57bd3b820ff87ccf39779b0a342bd5955\" returns successfully"
Apr 22 23:59:54.083993 kubelet[2876]: E0422 23:59:54.078624 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:59:54.146198 containerd[1625]: time="2026-04-22T23:59:54.103008080Z" level=info msg="container event discarded" container=cbf0230d32da201e92694a2ecc9fa4d11fef561e52f47f6a317cc66b29373af2 type=CONTAINER_STARTED_EVENT
Apr 22 23:59:54.796730 containerd[1625]: time="2026-04-22T23:59:54.795949451Z" level=info msg="container event discarded" container=fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6 type=CONTAINER_CREATED_EVENT
Apr 22 23:59:56.530028 kubelet[2876]: E0422 23:59:56.529776 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 22 23:59:59.408450 containerd[1625]: time="2026-04-22T23:59:59.408218080Z" level=info msg="container event discarded" container=984746e5648cf3da3c5ce84b88379d9819cce85092fd94ec82a04ab113283fd6 type=CONTAINER_STOPPED_EVENT
Apr 23 00:00:03.251752 containerd[1625]: time="2026-04-23T00:00:03.250082086Z" level=info msg="container event discarded" container=fc8b6aaff69f3182e6c52281eafa36ede03ea8b878e2d7eb6e766be3f63795b6 type=CONTAINER_STARTED_EVENT
Apr 23 00:00:03.885477 containerd[1625]: time="2026-04-23T00:00:03.883307554Z" level=info msg="container event discarded" container=363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488 type=CONTAINER_CREATED_EVENT
Apr 23 00:00:04.878434 systemd[1]: cri-containerd-a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700.scope: Deactivated successfully.
Apr 23 00:00:04.939270 systemd[1]: cri-containerd-a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700.scope: Consumed 32.517s CPU time, 21.1M memory peak.
Apr 23 00:00:04.984649 containerd[1625]: time="2026-04-23T00:00:04.982816464Z" level=info msg="received container exit event container_id:\"a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700\" id:\"a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700\" pid:4544 exit_status:1 exited_at:{seconds:1776902404 nanos:962971529}"
Apr 23 00:00:05.958622 kubelet[2876]: E0423 00:00:05.957385 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:07.042342 kubelet[2876]: E0423 00:00:07.039993 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:07.141860 kubelet[2876]: E0423 00:00:07.133177 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:07.347803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700-rootfs.mount: Deactivated successfully.
Apr 23 00:00:07.724200 containerd[1625]: time="2026-04-23T00:00:07.714290783Z" level=error msg="collecting metrics for a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700" error="ttrpc: closed"
Apr 23 00:00:08.794108 kubelet[2876]: I0423 00:00:08.791341 2876 scope.go:117] "RemoveContainer" containerID="363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488"
Apr 23 00:00:08.914467 kubelet[2876]: I0423 00:00:08.912099 2876 scope.go:117] "RemoveContainer" containerID="a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700"
Apr 23 00:00:08.956257 kubelet[2876]: E0423 00:00:08.947116 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:09.009025 kubelet[2876]: E0423 00:00:09.007867 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(33fee6ba1581201eda98a989140db110)\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110"
Apr 23 00:00:09.027771 containerd[1625]: time="2026-04-23T00:00:09.026830986Z" level=info msg="container event discarded" container=363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488 type=CONTAINER_STARTED_EVENT
Apr 23 00:00:09.035725 containerd[1625]: time="2026-04-23T00:00:09.035177032Z" level=info msg="RemoveContainer for \"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\""
Apr 23 00:00:09.139919 containerd[1625]: time="2026-04-23T00:00:09.139650495Z" level=info msg="RemoveContainer for \"363f25bb29f00ab4d89aca78657c099030f4766ff8440617307881e77d872488\" returns successfully"
Apr 23 00:00:09.161947 kubelet[2876]: E0423 00:00:09.158378 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:10.510103 kubelet[2876]: I0423 00:00:10.509727 2876 scope.go:117] "RemoveContainer" containerID="a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700"
Apr 23 00:00:10.558640 kubelet[2876]: E0423 00:00:10.510463 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:10.561267 kubelet[2876]: E0423 00:00:10.558472 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(33fee6ba1581201eda98a989140db110)\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110"
Apr 23 00:00:10.600157 kubelet[2876]: E0423 00:00:10.597214 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:10.804818 kubelet[2876]: E0423 00:00:10.777301 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:11.841535 kubelet[2876]: I0423 00:00:11.841112 2876 scope.go:117] "RemoveContainer" containerID="a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700"
Apr 23 00:00:11.842233 kubelet[2876]: E0423 00:00:11.841786 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:11.842233 kubelet[2876]: E0423 00:00:11.841971 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(33fee6ba1581201eda98a989140db110)\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110"
Apr 23 00:00:16.963389 kubelet[2876]: E0423 00:00:16.962913 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:24.077967 kubelet[2876]: I0423 00:00:24.075232 2876 scope.go:117] "RemoveContainer" containerID="a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700"
Apr 23 00:00:24.090005 kubelet[2876]: E0423 00:00:24.083310 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:24.108950 kubelet[2876]: E0423 00:00:24.101393 2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(33fee6ba1581201eda98a989140db110)\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110"
Apr 23 00:00:37.045136 kubelet[2876]: I0423 00:00:37.040113 2876 scope.go:117] "RemoveContainer" containerID="a23596d5036ab4fa3da967453638a6e2539d4c51b289f6a8e1a3000226e74700"
Apr 23 00:00:37.058105 kubelet[2876]: E0423 00:00:37.056330 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:37.306769 containerd[1625]: time="2026-04-23T00:00:37.305244557Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Apr 23 00:00:37.496208 containerd[1625]: time="2026-04-23T00:00:37.493920884Z" level=info msg="Container 351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60: CDI devices from CRI Config.CDIDevices: []"
Apr 23 00:00:37.713301 containerd[1625]: time="2026-04-23T00:00:37.709391251Z" level=info msg="CreateContainer within sandbox \"2828c2319c8a920f1d62358b8008cdb2c23d03c9191e05cd8a8e46899474fe83\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60\""
Apr 23 00:00:37.760281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105651112.mount: Deactivated successfully.
Apr 23 00:00:37.800167 containerd[1625]: time="2026-04-23T00:00:37.796019038Z" level=info msg="StartContainer for \"351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60\""
Apr 23 00:00:37.914002 containerd[1625]: time="2026-04-23T00:00:37.913769109Z" level=info msg="connecting to shim 351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60" address="unix:///run/containerd/s/008a852ad47db2e030aefc8056a2e849ba474c4802ea5eebff2d501bd41a664c" protocol=ttrpc version=3
Apr 23 00:00:39.437041 systemd[1]: Started cri-containerd-351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60.scope - libcontainer container 351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60.
Apr 23 00:00:42.102924 containerd[1625]: time="2026-04-23T00:00:42.101047449Z" level=error msg="get state for 351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60" error="context deadline exceeded"
Apr 23 00:00:42.127357 containerd[1625]: time="2026-04-23T00:00:42.105859302Z" level=warning msg="unknown status" status=0
Apr 23 00:00:44.298785 containerd[1625]: time="2026-04-23T00:00:44.297306962Z" level=error msg="get state for 351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60" error="context deadline exceeded"
Apr 23 00:00:44.298785 containerd[1625]: time="2026-04-23T00:00:44.298103164Z" level=warning msg="unknown status" status=0
Apr 23 00:00:44.748423 containerd[1625]: time="2026-04-23T00:00:44.748042028Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 23 00:00:44.783990 containerd[1625]: time="2026-04-23T00:00:44.782388278Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 23 00:00:46.142990 containerd[1625]: time="2026-04-23T00:00:46.142032358Z" level=info msg="StartContainer for \"351d4368a67cd09e3e27a9acd350319e2f9398aaedd78bf482e541e221f95c60\" returns successfully"
Apr 23 00:00:47.463662 kubelet[2876]: E0423 00:00:47.450844 2876 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.21s"
Apr 23 00:00:47.789272 kubelet[2876]: E0423 00:00:47.780460 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:48.870441 kubelet[2876]: E0423 00:00:48.869673 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:49.934130 kubelet[2876]: E0423 00:00:49.933338 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:00:59.853590 kubelet[2876]: E0423 00:00:59.852372 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:00.642416 kubelet[2876]: E0423 00:01:00.641605 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:05.648947 containerd[1625]: time="2026-04-23T00:01:05.647161640Z" level=info msg="container event discarded" container=82990c3fd820a97fc2544f2122b38c678bf8c76b638e6b3d64ca1b39b3ffe266 type=CONTAINER_STOPPED_EVENT
Apr 23 00:01:07.082226 containerd[1625]: time="2026-04-23T00:01:07.080968891Z" level=info msg="container event discarded" container=53dbd89dd226e3f7eb45ba12cd648e1c685c54edcda4367ed7aa3e817c858be4 type=CONTAINER_DELETED_EVENT
Apr 23 00:01:11.014122 kubelet[2876]: E0423 00:01:11.013640 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:14.979298 kubelet[2876]: E0423 00:01:14.977479 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:15.989995 kubelet[2876]: E0423 00:01:15.988217 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:17.945656 containerd[1625]: time="2026-04-23T00:01:17.941607260Z" level=info msg="container event discarded" container=6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a type=CONTAINER_CREATED_EVENT
Apr 23 00:01:24.296305 containerd[1625]: time="2026-04-23T00:01:24.295160398Z" level=info msg="container event discarded" container=6bc663f8bffb59fe2d3fc60679c9b7ca05e6ebf971d606bfdfbca20611683b9a type=CONTAINER_STARTED_EVENT
Apr 23 00:01:31.973234 kubelet[2876]: E0423 00:01:31.965353 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:34.051774 kubelet[2876]: E0423 00:01:34.050229 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:01:36.086263 kubelet[2876]: E0423 00:01:36.086079 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 23 00:02:16.959339 kubelet[2876]: E0423 00:02:16.955788 2876 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"