Apr 21 02:47:16.159669 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 20 22:35:05 -00 2026
Apr 21 02:47:16.159696 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 02:47:16.159708 kernel: BIOS-provided physical RAM map:
Apr 21 02:47:16.159718 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 02:47:16.159726 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 02:47:16.159734 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 02:47:16.159744 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 02:47:16.159752 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 02:47:16.159760 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 21 02:47:16.159768 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 21 02:47:16.159776 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 21 02:47:16.159783 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 21 02:47:16.159793 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 21 02:47:16.159801 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 21 02:47:16.159812 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 21 02:47:16.159821 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 02:47:16.159830 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 21 02:47:16.159840 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 21 02:47:16.159848 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 21 02:47:16.159855 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 21 02:47:16.159864 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 21 02:47:16.159873 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 02:47:16.159881 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 02:47:16.159889 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 02:47:16.159898 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 02:47:16.159907 kernel: NX (Execute Disable) protection: active
Apr 21 02:47:16.159915 kernel: APIC: Static calls initialized
Apr 21 02:47:16.159975 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 21 02:47:16.159988 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 21 02:47:16.159997 kernel: extended physical RAM map:
Apr 21 02:47:16.160006 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 02:47:16.160015 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 02:47:16.160022 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 02:47:16.160026 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 02:47:16.160031 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 02:47:16.160036 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 21 02:47:16.160040 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 21 02:47:16.160045 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 21 02:47:16.160050 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 21 02:47:16.160056 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 21 02:47:16.160063 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 21 02:47:16.160068 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 21 02:47:16.160073 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 21 02:47:16.160078 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 21 02:47:16.160085 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 21 02:47:16.160090 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 21 02:47:16.160155 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 02:47:16.160161 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 21 02:47:16.160166 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 21 02:47:16.160171 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 21 02:47:16.160176 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 21 02:47:16.160181 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 21 02:47:16.160186 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 02:47:16.160191 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 02:47:16.160196 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 02:47:16.160202 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 02:47:16.160207 kernel: efi: EFI v2.7 by EDK II
Apr 21 02:47:16.160213 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 21 02:47:16.160218 kernel: random: crng init done
Apr 21 02:47:16.160223 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 21 02:47:16.160228 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 21 02:47:16.160236 kernel: secureboot: Secure boot disabled
Apr 21 02:47:16.160244 kernel: SMBIOS 2.8 present.
Apr 21 02:47:16.160253 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 21 02:47:16.160262 kernel: DMI: Memory slots populated: 1/1
Apr 21 02:47:16.160271 kernel: Hypervisor detected: KVM
Apr 21 02:47:16.160280 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 21 02:47:16.160292 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 02:47:16.160302 kernel: kvm-clock: using sched offset of 6048137907 cycles
Apr 21 02:47:16.160311 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 02:47:16.160321 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 02:47:16.160331 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 02:47:16.160340 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 02:47:16.160349 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 21 02:47:16.160359 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 02:47:16.160370 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 02:47:16.160381 kernel: Using GB pages for direct mapping
Apr 21 02:47:16.160390 kernel: ACPI: Early table checksum verification disabled
Apr 21 02:47:16.160400 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 21 02:47:16.160409 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 02:47:16.160419 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:47:16.160428 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:47:16.160437 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 21 02:47:16.160447 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:47:16.160456 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:47:16.160467 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:47:16.160477 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 02:47:16.160487 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 02:47:16.160497 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 21 02:47:16.160506 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 21 02:47:16.160516 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 21 02:47:16.160525 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 21 02:47:16.160535 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 21 02:47:16.160544 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 21 02:47:16.160556 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 21 02:47:16.160565 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 21 02:47:16.160575 kernel: No NUMA configuration found
Apr 21 02:47:16.160585 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 21 02:47:16.160594 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 21 02:47:16.160604 kernel: Zone ranges:
Apr 21 02:47:16.160614 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 02:47:16.160623 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 21 02:47:16.160630 kernel: Normal empty
Apr 21 02:47:16.160640 kernel: Device empty
Apr 21 02:47:16.160651 kernel: Movable zone start for each node
Apr 21 02:47:16.160661 kernel: Early memory node ranges
Apr 21 02:47:16.160670 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 02:47:16.160679 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 21 02:47:16.160689 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 21 02:47:16.160699 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 21 02:47:16.160708 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 21 02:47:16.160718 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 21 02:47:16.160727 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 21 02:47:16.160739 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 21 02:47:16.160748 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 21 02:47:16.160758 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 02:47:16.160768 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 02:47:16.160778 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 21 02:47:16.160795 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 02:47:16.160807 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 21 02:47:16.160818 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 21 02:47:16.160827 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 21 02:47:16.160837 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 21 02:47:16.160847 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 21 02:47:16.160857 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 02:47:16.160870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 02:47:16.160878 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 02:47:16.160887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 02:47:16.160897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 02:47:16.160907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 02:47:16.160919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 02:47:16.160978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 02:47:16.160989 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 02:47:16.161000 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 02:47:16.161010 kernel: TSC deadline timer available
Apr 21 02:47:16.161020 kernel: CPU topo: Max. logical packages: 1
Apr 21 02:47:16.161029 kernel: CPU topo: Max. logical dies: 1
Apr 21 02:47:16.161040 kernel: CPU topo: Max. dies per package: 1
Apr 21 02:47:16.161050 kernel: CPU topo: Max. threads per core: 1
Apr 21 02:47:16.161063 kernel: CPU topo: Num. cores per package: 4
Apr 21 02:47:16.161073 kernel: CPU topo: Num. threads per package: 4
Apr 21 02:47:16.161083 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 21 02:47:16.161093 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 02:47:16.161173 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 02:47:16.161184 kernel: kvm-guest: setup PV sched yield
Apr 21 02:47:16.161194 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 21 02:47:16.161204 kernel: Booting paravirtualized kernel on KVM
Apr 21 02:47:16.161215 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 02:47:16.161225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 02:47:16.161238 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 21 02:47:16.161247 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 21 02:47:16.161257 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 02:47:16.161267 kernel: kvm-guest: PV spinlocks enabled
Apr 21 02:47:16.161277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 02:47:16.161290 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 02:47:16.161299 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 02:47:16.161310 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 02:47:16.161322 kernel: Fallback order for Node 0: 0
Apr 21 02:47:16.161332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 21 02:47:16.161342 kernel: Policy zone: DMA32
Apr 21 02:47:16.161350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 02:47:16.161361 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 02:47:16.161371 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 21 02:47:16.161381 kernel: ftrace: allocated 157 pages with 5 groups
Apr 21 02:47:16.161391 kernel: Dynamic Preempt: voluntary
Apr 21 02:47:16.161401 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 02:47:16.161415 kernel: rcu: RCU event tracing is enabled.
Apr 21 02:47:16.161425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 02:47:16.161436 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 02:47:16.161446 kernel: Rude variant of Tasks RCU enabled.
Apr 21 02:47:16.161456 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 02:47:16.161467 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 02:47:16.161477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 02:47:16.161486 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 02:47:16.161495 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 02:47:16.161507 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 02:47:16.161518 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 02:47:16.161528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 02:47:16.161539 kernel: Console: colour dummy device 80x25
Apr 21 02:47:16.161549 kernel: printk: legacy console [ttyS0] enabled
Apr 21 02:47:16.161558 kernel: ACPI: Core revision 20240827
Apr 21 02:47:16.161569 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 02:47:16.161579 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 02:47:16.161590 kernel: x2apic enabled
Apr 21 02:47:16.161602 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 02:47:16.161612 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 02:47:16.161622 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 02:47:16.161633 kernel: kvm-guest: setup PV IPIs
Apr 21 02:47:16.161643 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 02:47:16.161653 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 02:47:16.161664 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 02:47:16.161673 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 02:47:16.161684 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 02:47:16.161695 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 02:47:16.161706 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 02:47:16.161716 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 02:47:16.161726 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 02:47:16.161737 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 02:47:16.161748 kernel: RETBleed: Vulnerable
Apr 21 02:47:16.161758 kernel: Speculative Store Bypass: Vulnerable
Apr 21 02:47:16.161768 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 02:47:16.161780 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 02:47:16.161790 kernel: active return thunk: its_return_thunk
Apr 21 02:47:16.161800 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 02:47:16.161810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 02:47:16.161821 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 02:47:16.161831 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 02:47:16.161842 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 02:47:16.161852 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 02:47:16.161862 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 02:47:16.161873 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 02:47:16.161882 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 02:47:16.161892 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 02:47:16.161903 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 02:47:16.161912 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 02:47:16.161922 kernel: Freeing SMP alternatives memory: 32K
Apr 21 02:47:16.161976 kernel: pid_max: default: 32768 minimum: 301
Apr 21 02:47:16.161986 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 21 02:47:16.161996 kernel: landlock: Up and running.
Apr 21 02:47:16.162009 kernel: SELinux: Initializing.
Apr 21 02:47:16.162019 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 02:47:16.162030 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 02:47:16.162040 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 02:47:16.162050 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 02:47:16.162061 kernel: signal: max sigframe size: 3632
Apr 21 02:47:16.162070 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 02:47:16.162081 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 02:47:16.162092 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 21 02:47:16.162174 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 02:47:16.162185 kernel: smp: Bringing up secondary CPUs ...
Apr 21 02:47:16.162195 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 02:47:16.162299 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 02:47:16.162312 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 02:47:16.162322 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 02:47:16.162333 kernel: Memory: 2374692K/2565800K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46228K init, 2520K bss, 185216K reserved, 0K cma-reserved)
Apr 21 02:47:16.162343 kernel: devtmpfs: initialized
Apr 21 02:47:16.162353 kernel: x86/mm: Memory block size: 128MB
Apr 21 02:47:16.162364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 21 02:47:16.162375 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 21 02:47:16.162386 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 21 02:47:16.162395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 21 02:47:16.162405 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 21 02:47:16.162415 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 21 02:47:16.162424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 02:47:16.162434 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 02:47:16.162445 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 02:47:16.162457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 02:47:16.162467 kernel: audit: initializing netlink subsys (disabled)
Apr 21 02:47:16.162478 kernel: audit: type=2000 audit(1776739631.972:1): state=initialized audit_enabled=0 res=1
Apr 21 02:47:16.162487 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 02:47:16.162497 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 02:47:16.162508 kernel: cpuidle: using governor menu
Apr 21 02:47:16.162518 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 21 02:47:16.162528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 02:47:16.162539 kernel: dca service started, version 1.12.1
Apr 21 02:47:16.162551 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 21 02:47:16.162560 kernel: PCI: Using configuration type 1 for base access
Apr 21 02:47:16.162571 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 02:47:16.162581 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 02:47:16.162591 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 02:47:16.162602 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 02:47:16.162612 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 02:47:16.162623 kernel: ACPI: Added _OSI(Module Device)
Apr 21 02:47:16.162633 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 02:47:16.162645 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 02:47:16.162655 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 02:47:16.162665 kernel: ACPI: Interpreter enabled
Apr 21 02:47:16.162675 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 02:47:16.162684 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 02:47:16.162694 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 02:47:16.162705 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 02:47:16.162714 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 02:47:16.162725 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 02:47:16.163304 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 02:47:16.163405 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 02:47:16.163487 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 02:47:16.163501 kernel: PCI host bridge to bus 0000:00
Apr 21 02:47:16.163583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 02:47:16.163719 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 02:47:16.163802 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 02:47:16.163876 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 21 02:47:16.164005 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 21 02:47:16.164082 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 21 02:47:16.164258 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 02:47:16.164364 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 21 02:47:16.164458 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 21 02:47:16.164546 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 21 02:47:16.164630 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 21 02:47:16.164714 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 21 02:47:16.164797 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 02:47:16.164894 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 21 02:47:16.165038 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 21 02:47:16.165274 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 21 02:47:16.165367 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 21 02:47:16.165462 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 21 02:47:16.165548 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 21 02:47:16.165633 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 21 02:47:16.165716 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 21 02:47:16.165806 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 21 02:47:16.165895 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 21 02:47:16.166035 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 21 02:47:16.166203 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 21 02:47:16.166294 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 21 02:47:16.166390 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 21 02:47:16.166477 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 02:47:16.166569 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 21 02:47:16.166659 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 21 02:47:16.166745 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 21 02:47:16.166837 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 21 02:47:16.166979 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 21 02:47:16.166989 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 02:47:16.166996 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 02:47:16.167002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 02:47:16.167011 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 02:47:16.167016 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 02:47:16.167022 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 02:47:16.167028 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 02:47:16.167034 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 02:47:16.167039 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 02:47:16.167045 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 02:47:16.167050 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 02:47:16.167056 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 02:47:16.167066 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 02:47:16.167072 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 02:47:16.167077 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 02:47:16.167083 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 02:47:16.167089 kernel: iommu: Default domain type: Translated
Apr 21 02:47:16.167151 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 02:47:16.167158 kernel: efivars: Registered efivars operations
Apr 21 02:47:16.167163 kernel: PCI: Using ACPI for IRQ routing
Apr 21 02:47:16.167169 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 02:47:16.167177 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 21 02:47:16.167184 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 21 02:47:16.167193 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 21 02:47:16.167201 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 21 02:47:16.167210 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 21 02:47:16.167219 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 21 02:47:16.167228 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 21 02:47:16.167238 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 21 02:47:16.167336 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 02:47:16.167426 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 02:47:16.167513 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 02:47:16.167526 kernel: vgaarb: loaded
Apr 21 02:47:16.167536 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 02:47:16.167546 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 02:47:16.167557 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 02:47:16.167566 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 02:47:16.167577 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 02:47:16.167589 kernel: pnp: PnP ACPI init
Apr 21 02:47:16.167682 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 21 02:47:16.167698 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 02:47:16.167709 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 02:47:16.167733 kernel: NET: Registered PF_INET protocol family
Apr 21 02:47:16.167746 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 02:47:16.167756 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 02:47:16.167767 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 02:47:16.167780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 02:47:16.167790 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 02:47:16.167801 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 02:47:16.167812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 02:47:16.167822 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 02:47:16.167832 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 02:47:16.167843 kernel: NET: Registered PF_XDP protocol family
Apr 21 02:47:16.167990 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 21 02:47:16.168084 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 21 02:47:16.168248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 02:47:16.168328 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 02:47:16.168403 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 02:47:16.168472 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 21 02:47:16.168548 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 21 02:47:16.168623 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 21 02:47:16.168637 kernel: PCI: CLS 0 bytes, default 64
Apr 21 02:47:16.168648 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 02:47:16.168662 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 02:47:16.168673 kernel: Initialise system trusted keyrings
Apr 21 02:47:16.168686 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 02:47:16.168695 kernel: Key type asymmetric registered
Apr 21 02:47:16.168704 kernel: Asymmetric key parser 'x509' registered
Apr 21 02:47:16.168717 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 02:47:16.168728 kernel: io scheduler mq-deadline registered
Apr 21 02:47:16.168737 kernel: io scheduler kyber registered
Apr 21 02:47:16.168747 kernel: io scheduler bfq registered
Apr 21 02:47:16.168758 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 02:47:16.168769 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 02:47:16.168780 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 02:47:16.168791 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 02:47:16.168802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 02:47:16.168815 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 02:47:16.168825 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 02:47:16.168835 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 02:47:16.168846 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 02:47:16.168988 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 02:47:16.169009 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 02:47:16.169089 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 02:47:16.169244 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T02:47:15 UTC (1776739635)
Apr 21 02:47:16.169329 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 21 02:47:16.169343 kernel: intel_pstate: CPU model not supported
Apr 21 02:47:16.169353 kernel: efifb: probing for efifb
Apr 21 02:47:16.169364 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 21 02:47:16.169375 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 21 02:47:16.169385 kernel: efifb: scrolling: redraw
Apr 21 02:47:16.169396 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 02:47:16.169407 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 02:47:16.169416 kernel: fb0: EFI VGA frame buffer device
Apr 21 02:47:16.169429 kernel: pstore: Using crash dump compression: deflate
Apr 21 02:47:16.169440 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 02:47:16.169451 kernel: NET: Registered PF_INET6 protocol family
Apr 21 02:47:16.169460 kernel: Segment Routing with IPv6
Apr 21 02:47:16.169470 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 02:47:16.169481 kernel: NET: Registered PF_PACKET protocol family
Apr 21 02:47:16.169491 kernel: Key type dns_resolver registered
Apr 21 02:47:16.169501 kernel: IPI shorthand broadcast: enabled
Apr 21 02:47:16.169512 kernel: sched_clock: Marking stable (3889030626, 722645244)->(4851528524, -239852654)
Apr 21 02:47:16.169523 kernel: registered taskstats version 1
Apr 21 02:47:16.169535 kernel: Loading compiled-in X.509 certificates
Apr 21 02:47:16.169545 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: bc6d78cd9d700d9d34e2c2c5bd3cbf2a73898336'
Apr 
21 02:47:16.169557 kernel: Demotion targets for Node 0: null Apr 21 02:47:16.169566 kernel: Key type .fscrypt registered Apr 21 02:47:16.169575 kernel: Key type fscrypt-provisioning registered Apr 21 02:47:16.169584 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 02:47:16.169595 kernel: ima: Allocated hash algorithm: sha1 Apr 21 02:47:16.169606 kernel: ima: No architecture policies found Apr 21 02:47:16.169616 kernel: clk: Disabling unused clocks Apr 21 02:47:16.169633 kernel: Warning: unable to open an initial console. Apr 21 02:47:16.169644 kernel: Freeing unused kernel image (initmem) memory: 46228K Apr 21 02:47:16.169655 kernel: Write protecting the kernel read-only data: 40960k Apr 21 02:47:16.169666 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 21 02:47:16.169677 kernel: Run /init as init process Apr 21 02:47:16.169688 kernel: with arguments: Apr 21 02:47:16.169698 kernel: /init Apr 21 02:47:16.169709 kernel: with environment: Apr 21 02:47:16.169719 kernel: HOME=/ Apr 21 02:47:16.169731 kernel: TERM=linux Apr 21 02:47:16.169743 systemd[1]: Successfully made /usr/ read-only. Apr 21 02:47:16.169757 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 21 02:47:16.169769 systemd[1]: Detected virtualization kvm. Apr 21 02:47:16.169780 systemd[1]: Detected architecture x86-64. Apr 21 02:47:16.169790 systemd[1]: Running in initrd. Apr 21 02:47:16.169801 systemd[1]: No hostname configured, using default hostname. Apr 21 02:47:16.169815 systemd[1]: Hostname set to . Apr 21 02:47:16.169825 systemd[1]: Initializing machine ID from VM UUID. Apr 21 02:47:16.169835 systemd[1]: Queued start job for default target initrd.target. 
Apr 21 02:47:16.169846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 02:47:16.169857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 02:47:16.169869 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 02:47:16.169880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 02:47:16.169892 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 02:47:16.169906 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 02:47:16.169917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 02:47:16.169978 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 02:47:16.169990 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 02:47:16.170003 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 02:47:16.170014 systemd[1]: Reached target paths.target - Path Units. Apr 21 02:47:16.170026 systemd[1]: Reached target slices.target - Slice Units. Apr 21 02:47:16.170037 systemd[1]: Reached target swap.target - Swaps. Apr 21 02:47:16.170047 systemd[1]: Reached target timers.target - Timer Units. Apr 21 02:47:16.170058 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 02:47:16.170070 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 02:47:16.170080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 02:47:16.170091 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Apr 21 02:47:16.170171 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 02:47:16.170183 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 02:47:16.170194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 02:47:16.170207 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 02:47:16.170219 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 02:47:16.170230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 02:47:16.170240 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 02:47:16.170252 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 21 02:47:16.170264 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 02:47:16.170275 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 02:47:16.170284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 02:47:16.170297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:47:16.170335 systemd-journald[203]: Collecting audit messages is disabled. Apr 21 02:47:16.170361 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 02:47:16.170376 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 02:47:16.170388 systemd-journald[203]: Journal started Apr 21 02:47:16.170414 systemd-journald[203]: Runtime Journal (/run/log/journal/e2e8d8449e674425a689485bf6f09001) is 6M, max 48.1M, 42.1M free. Apr 21 02:47:16.179635 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 02:47:16.183597 systemd[1]: Finished systemd-fsck-usr.service. 
Apr 21 02:47:16.188363 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 02:47:16.189612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 02:47:16.211673 systemd-modules-load[205]: Inserted module 'overlay' Apr 21 02:47:16.218975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 02:47:16.220019 systemd-tmpfiles[211]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 21 02:47:16.225984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 02:47:16.238321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 02:47:16.265240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:47:16.276446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 02:47:16.307242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 02:47:16.308421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 02:47:16.324994 systemd-modules-load[205]: Inserted module 'br_netfilter' Apr 21 02:47:16.328658 kernel: Bridge firewalling registered Apr 21 02:47:16.328506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 02:47:16.332504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:47:16.360470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:47:16.365584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 02:47:16.385502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 02:47:16.398298 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 02:47:16.427824 systemd-resolved[242]: Positive Trust Anchors: Apr 21 02:47:16.427887 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 02:47:16.427916 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 02:47:16.430391 systemd-resolved[242]: Defaulting to hostname 'linux'. Apr 21 02:47:16.473060 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7 Apr 21 02:47:16.431499 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 02:47:16.441670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 02:47:16.592202 kernel: SCSI subsystem initialized Apr 21 02:47:16.601208 kernel: Loading iSCSI transport class v2.0-870. 
Apr 21 02:47:16.615231 kernel: iscsi: registered transport (tcp) Apr 21 02:47:16.639704 kernel: iscsi: registered transport (qla4xxx) Apr 21 02:47:16.639782 kernel: QLogic iSCSI HBA Driver Apr 21 02:47:16.667351 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 02:47:16.697605 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 02:47:16.700334 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 02:47:16.782800 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 02:47:16.791085 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 02:47:16.869239 kernel: raid6: avx512x4 gen() 41329 MB/s Apr 21 02:47:16.887192 kernel: raid6: avx512x2 gen() 41118 MB/s Apr 21 02:47:16.905244 kernel: raid6: avx512x1 gen() 41528 MB/s Apr 21 02:47:16.923216 kernel: raid6: avx2x4 gen() 33760 MB/s Apr 21 02:47:16.941236 kernel: raid6: avx2x2 gen() 32219 MB/s Apr 21 02:47:16.961225 kernel: raid6: avx2x1 gen() 25513 MB/s Apr 21 02:47:16.961286 kernel: raid6: using algorithm avx512x1 gen() 41528 MB/s Apr 21 02:47:16.981218 kernel: raid6: .... xor() 26406 MB/s, rmw enabled Apr 21 02:47:16.981293 kernel: raid6: using avx512x2 recovery algorithm Apr 21 02:47:17.002188 kernel: xor: automatically using best checksumming function avx Apr 21 02:47:17.188195 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 02:47:17.196510 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 02:47:17.202397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 02:47:17.230570 systemd-udevd[454]: Using default interface naming scheme 'v255'. Apr 21 02:47:17.235848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 02:47:17.238593 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 21 02:47:17.275975 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Apr 21 02:47:17.311285 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 02:47:17.316453 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 02:47:17.379364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 02:47:17.388351 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 02:47:17.435155 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 02:47:17.438241 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 02:47:17.472930 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 02:47:17.473818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 02:47:17.474923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:47:17.492201 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 02:47:17.492257 kernel: GPT:9289727 != 19775487 Apr 21 02:47:17.492272 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 02:47:17.492285 kernel: GPT:9289727 != 19775487 Apr 21 02:47:17.493326 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 02:47:17.495296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:47:17.496310 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:47:17.507192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:47:17.518224 kernel: libata version 3.00 loaded. Apr 21 02:47:17.518241 kernel: AES CTR mode by8 optimization enabled Apr 21 02:47:17.511536 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 21 02:47:17.524395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 21 02:47:17.535462 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 21 02:47:17.525292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:47:17.533547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:47:17.578211 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 02:47:17.578390 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 02:47:17.588883 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 02:47:17.595695 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 21 02:47:17.595827 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 21 02:47:17.595895 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 02:47:17.603589 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 02:47:17.612737 kernel: scsi host0: ahci Apr 21 02:47:17.607253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:47:17.625318 kernel: scsi host1: ahci Apr 21 02:47:17.625513 kernel: scsi host2: ahci Apr 21 02:47:17.626443 kernel: scsi host3: ahci Apr 21 02:47:17.628251 kernel: scsi host4: ahci Apr 21 02:47:17.630779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 21 02:47:17.642316 kernel: scsi host5: ahci Apr 21 02:47:17.642522 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 21 02:47:17.642546 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 21 02:47:17.642557 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 21 02:47:17.650825 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 21 02:47:17.650894 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 21 02:47:17.659086 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 21 02:47:17.670231 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 02:47:17.671986 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 02:47:17.683827 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 02:47:17.718361 disk-uuid[647]: Primary Header is updated. Apr 21 02:47:17.718361 disk-uuid[647]: Secondary Entries is updated. Apr 21 02:47:17.718361 disk-uuid[647]: Secondary Header is updated. 
Apr 21 02:47:17.727827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:47:17.969453 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 02:47:17.978170 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 02:47:17.981243 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 02:47:17.984234 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 02:47:17.987206 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 02:47:17.990216 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 02:47:17.994787 kernel: ata3.00: LPM support broken, forcing max_power Apr 21 02:47:17.994834 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 02:47:17.994847 kernel: ata3.00: applying bridge limits Apr 21 02:47:17.999227 kernel: ata3.00: LPM support broken, forcing max_power Apr 21 02:47:17.999262 kernel: ata3.00: configured for UDMA/100 Apr 21 02:47:18.006391 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 02:47:18.049207 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 02:47:18.049524 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 02:47:18.064242 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 02:47:18.409749 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 02:47:18.414067 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 02:47:18.421491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 02:47:18.425753 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 02:47:18.430651 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 02:47:18.466060 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 02:47:18.738153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 02:47:18.738817 disk-uuid[648]: The operation has completed successfully. 
Apr 21 02:47:18.770466 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 02:47:18.770589 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 02:47:18.797682 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 02:47:18.816765 sh[677]: Success Apr 21 02:47:18.842767 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 02:47:18.842822 kernel: device-mapper: uevent: version 1.0.3 Apr 21 02:47:18.847207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 21 02:47:18.861189 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 21 02:47:18.893791 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 02:47:18.901772 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 02:47:18.918622 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 02:47:18.930766 kernel: BTRFS: device fsid f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (689) Apr 21 02:47:18.937874 kernel: BTRFS info (device dm-0): first mount of filesystem f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5 Apr 21 02:47:18.937898 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:47:18.949373 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 21 02:47:18.949399 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 21 02:47:18.950800 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 02:47:18.956549 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 21 02:47:18.957855 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Apr 21 02:47:18.958595 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 02:47:18.985548 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 02:47:19.012152 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712) Apr 21 02:47:19.012188 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:47:19.018224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:47:19.026557 kernel: BTRFS info (device vda6): turning on async discard Apr 21 02:47:19.026598 kernel: BTRFS info (device vda6): enabling free space tree Apr 21 02:47:19.035194 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:47:19.037421 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 02:47:19.041896 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 02:47:19.137707 ignition[765]: Ignition 2.22.0 Apr 21 02:47:19.137717 ignition[765]: Stage: fetch-offline Apr 21 02:47:19.137736 ignition[765]: no configs at "/usr/lib/ignition/base.d" Apr 21 02:47:19.137742 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:47:19.137800 ignition[765]: parsed url from cmdline: "" Apr 21 02:47:19.137802 ignition[765]: no config URL provided Apr 21 02:47:19.137805 ignition[765]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 02:47:19.137810 ignition[765]: no config at "/usr/lib/ignition/user.ign" Apr 21 02:47:19.137828 ignition[765]: op(1): [started] loading QEMU firmware config module Apr 21 02:47:19.137831 ignition[765]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 02:47:19.152798 ignition[765]: op(1): [finished] loading QEMU firmware config module Apr 21 02:47:19.152818 ignition[765]: QEMU firmware config was not found. Ignoring... 
Apr 21 02:47:19.200715 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 02:47:19.211572 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 02:47:19.243909 systemd-networkd[867]: lo: Link UP Apr 21 02:47:19.243999 systemd-networkd[867]: lo: Gained carrier Apr 21 02:47:19.245229 systemd-networkd[867]: Enumeration completed Apr 21 02:47:19.246206 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 02:47:19.246242 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 02:47:19.246244 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 02:47:19.250269 systemd-networkd[867]: eth0: Link UP Apr 21 02:47:19.250417 systemd-networkd[867]: eth0: Gained carrier Apr 21 02:47:19.250425 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 02:47:19.253406 systemd[1]: Reached target network.target - Network. Apr 21 02:47:19.302216 systemd-networkd[867]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 02:47:19.488086 ignition[765]: parsing config with SHA512: 2f1db453b2ad0ba4c01af11ae0a274e1c1c8e42ae4fb585e37a1d49cd330b004365a65c2d8a8ca748655ebf913b8de089c9ce60f851ee1d8fea98eff746b0da7 Apr 21 02:47:19.492243 unknown[765]: fetched base config from "system" Apr 21 02:47:19.492255 unknown[765]: fetched user config from "qemu" Apr 21 02:47:19.498397 ignition[765]: fetch-offline: fetch-offline passed Apr 21 02:47:19.498486 ignition[765]: Ignition finished successfully Apr 21 02:47:19.505351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 21 02:47:19.509462 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 02:47:19.510486 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 02:47:19.560920 ignition[873]: Ignition 2.22.0 Apr 21 02:47:19.560992 ignition[873]: Stage: kargs Apr 21 02:47:19.561193 ignition[873]: no configs at "/usr/lib/ignition/base.d" Apr 21 02:47:19.561202 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:47:19.562523 ignition[873]: kargs: kargs passed Apr 21 02:47:19.562580 ignition[873]: Ignition finished successfully Apr 21 02:47:19.579304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 02:47:19.584568 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 02:47:19.640027 ignition[881]: Ignition 2.22.0 Apr 21 02:47:19.640065 ignition[881]: Stage: disks Apr 21 02:47:19.640290 ignition[881]: no configs at "/usr/lib/ignition/base.d" Apr 21 02:47:19.640299 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:47:19.641223 ignition[881]: disks: disks passed Apr 21 02:47:19.641270 ignition[881]: Ignition finished successfully Apr 21 02:47:19.658919 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 02:47:19.662545 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 02:47:19.665996 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 02:47:19.671310 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 02:47:19.683521 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 02:47:19.684367 systemd[1]: Reached target basic.target - Basic System. Apr 21 02:47:19.699816 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 21 02:47:19.734665 systemd-fsck[891]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 21 02:47:19.739915 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 02:47:19.751359 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 02:47:19.901266 kernel: EXT4-fs (vda9): mounted filesystem 146ef5ea-4935-456e-a7a6-cf0210fee567 r/w with ordered data mode. Quota mode: none. Apr 21 02:47:19.901411 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 02:47:19.905415 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 02:47:19.913606 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 02:47:19.929849 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 02:47:19.933176 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 02:47:19.962839 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (899) Apr 21 02:47:19.962863 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:47:19.962872 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:47:19.962880 kernel: BTRFS info (device vda6): turning on async discard Apr 21 02:47:19.962887 kernel: BTRFS info (device vda6): enabling free space tree Apr 21 02:47:19.933221 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 02:47:19.933248 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 02:47:19.964267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 02:47:19.990934 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 02:47:19.998787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 21 02:47:20.045735 initrd-setup-root[923]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 02:47:20.055401 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory Apr 21 02:47:20.065074 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 02:47:20.070876 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 02:47:20.191991 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 02:47:20.200983 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 02:47:20.203842 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 02:47:20.229641 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 02:47:20.237930 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:47:20.252318 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 02:47:20.273526 ignition[1012]: INFO : Ignition 2.22.0 Apr 21 02:47:20.273526 ignition[1012]: INFO : Stage: mount Apr 21 02:47:20.278685 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 02:47:20.278685 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:47:20.278685 ignition[1012]: INFO : mount: mount passed Apr 21 02:47:20.278685 ignition[1012]: INFO : Ignition finished successfully Apr 21 02:47:20.289474 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 02:47:20.295259 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 02:47:20.319892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 21 02:47:20.353678 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025) Apr 21 02:47:20.353734 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc Apr 21 02:47:20.353747 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 02:47:20.365183 kernel: BTRFS info (device vda6): turning on async discard Apr 21 02:47:20.365226 kernel: BTRFS info (device vda6): enabling free space tree Apr 21 02:47:20.367439 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 02:47:20.407855 ignition[1043]: INFO : Ignition 2.22.0 Apr 21 02:47:20.407855 ignition[1043]: INFO : Stage: files Apr 21 02:47:20.414282 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 02:47:20.414282 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:47:20.414282 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Apr 21 02:47:20.414282 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 02:47:20.414282 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 02:47:20.414282 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 02:47:20.414282 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 02:47:20.443930 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 02:47:20.415174 unknown[1043]: wrote ssh authorized keys file for user: core Apr 21 02:47:20.451628 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 02:47:20.451628 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 
21 02:47:20.485587 systemd-networkd[867]: eth0: Gained IPv6LL Apr 21 02:47:20.510621 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 02:47:20.599708 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 02:47:20.599708 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 21 02:47:20.599708 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 21 02:47:20.667763 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 21 02:47:20.867225 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 02:47:20.873610 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 21 02:47:21.111320 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 21 02:47:22.107579 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 02:47:22.107579 ignition[1043]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 21 02:47:22.120844 ignition[1043]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 02:47:22.128690 ignition[1043]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 02:47:22.128690 ignition[1043]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 21 02:47:22.128690 ignition[1043]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 21 02:47:22.128690 ignition[1043]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 02:47:22.153620 ignition[1043]: INFO : files: files passed Apr 21 02:47:22.153620 ignition[1043]: INFO : Ignition finished successfully Apr 
21 02:47:22.172585 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 02:47:22.180828 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 02:47:22.189003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 02:47:22.231262 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 02:47:22.231342 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 02:47:22.272643 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Apr 21 02:47:22.279978 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 02:47:22.285344 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 02:47:22.290430 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 02:47:22.296931 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 02:47:22.301159 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 02:47:22.310487 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 02:47:22.369541 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 02:47:22.372910 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 02:47:22.381550 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 02:47:22.382493 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 02:47:22.391618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 02:47:22.392583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Apr 21 02:47:22.427618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 02:47:22.435687 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 02:47:22.470608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 02:47:22.474249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 02:47:22.476213 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 02:47:22.484297 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 02:47:22.484438 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 02:47:22.494211 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 02:47:22.500426 systemd[1]: Stopped target basic.target - Basic System. Apr 21 02:47:22.510205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 02:47:22.512908 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 02:47:22.518605 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 02:47:22.528243 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 21 02:47:22.531909 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 02:47:22.539793 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 02:47:22.550656 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 02:47:22.554191 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 02:47:22.562857 systemd[1]: Stopped target swap.target - Swaps. Apr 21 02:47:22.577860 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 02:47:22.578088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 02:47:22.587737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 21 02:47:22.590202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 02:47:22.597890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 02:47:22.604279 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 02:47:22.607055 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 02:47:22.607301 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 02:47:22.623589 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 02:47:22.624085 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 02:47:22.625214 systemd[1]: Stopped target paths.target - Path Units. Apr 21 02:47:22.632777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 02:47:22.633237 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 02:47:22.637208 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 02:47:22.645755 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 02:47:22.655336 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 02:47:22.655457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 02:47:22.661420 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 02:47:22.661532 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 02:47:22.669408 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 02:47:22.669501 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 02:47:22.671914 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 02:47:22.672055 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 02:47:22.688574 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Apr 21 02:47:22.714419 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 02:47:22.717413 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 02:47:22.717561 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 02:47:22.723386 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 02:47:22.723481 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 02:47:22.728317 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 02:47:22.742169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 02:47:22.759864 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 02:47:22.763454 ignition[1097]: INFO : Ignition 2.22.0 Apr 21 02:47:22.763454 ignition[1097]: INFO : Stage: umount Apr 21 02:47:22.763454 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 02:47:22.763454 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 02:47:22.784940 ignition[1097]: INFO : umount: umount passed Apr 21 02:47:22.784940 ignition[1097]: INFO : Ignition finished successfully Apr 21 02:47:22.765659 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 02:47:22.765799 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 02:47:22.771005 systemd[1]: Stopped target network.target - Network. Apr 21 02:47:22.777259 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 02:47:22.777344 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 02:47:22.784263 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 02:47:22.784304 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 02:47:22.786212 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 02:47:22.786247 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Apr 21 02:47:22.790234 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 02:47:22.790266 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 02:47:22.792040 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 21 02:47:22.793672 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 02:47:22.822250 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 02:47:22.822406 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 02:47:22.829163 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 21 02:47:22.829347 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 02:47:22.829521 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 02:47:22.834888 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 02:47:22.835008 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 02:47:22.841028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 21 02:47:22.841070 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 02:47:22.851784 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 21 02:47:22.868919 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 02:47:22.869069 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 02:47:22.890821 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 21 02:47:22.891023 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 21 02:47:22.893931 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 02:47:22.894002 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Apr 21 02:47:22.908859 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 02:47:22.915765 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 21 02:47:22.915813 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 02:47:22.922864 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 02:47:22.922898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:47:22.930616 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 02:47:22.930663 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 02:47:22.937337 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 02:47:22.948341 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 21 02:47:22.988725 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 02:47:22.988990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 02:47:22.996027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 02:47:22.996067 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 02:47:23.001639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 02:47:23.001676 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 02:47:23.003999 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 02:47:23.004039 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 02:47:23.022239 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 02:47:23.022286 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 21 02:47:23.028747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 21 02:47:23.028792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 02:47:23.039939 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 02:47:23.045900 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 21 02:47:23.045995 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 02:47:23.068564 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 02:47:23.068623 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 02:47:23.080453 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 21 02:47:23.080504 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 02:47:23.092432 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 02:47:23.092473 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 02:47:23.094747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 02:47:23.094793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 02:47:23.112732 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 02:47:23.112883 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 02:47:23.134566 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 02:47:23.134676 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 02:47:23.136909 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 02:47:23.143840 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 02:47:23.178433 systemd[1]: Switching root. 
Apr 21 02:47:23.207070 systemd-journald[203]: Journal stopped Apr 21 02:47:24.334276 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Apr 21 02:47:24.334330 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 02:47:24.334345 kernel: SELinux: policy capability open_perms=1 Apr 21 02:47:24.334353 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 02:47:24.334364 kernel: SELinux: policy capability always_check_network=0 Apr 21 02:47:24.334377 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 02:47:24.334386 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 02:47:24.334396 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 02:47:24.334405 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 02:47:24.334413 kernel: SELinux: policy capability userspace_initial_context=0 Apr 21 02:47:24.334420 kernel: audit: type=1403 audit(1776739643.364:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 02:47:24.334429 systemd[1]: Successfully loaded SELinux policy in 60.818ms. Apr 21 02:47:24.334443 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.444ms. Apr 21 02:47:24.334452 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 21 02:47:24.334460 systemd[1]: Detected virtualization kvm. Apr 21 02:47:24.334469 systemd[1]: Detected architecture x86-64. Apr 21 02:47:24.334477 systemd[1]: Detected first boot. Apr 21 02:47:24.334485 systemd[1]: Initializing machine ID from VM UUID. Apr 21 02:47:24.334494 zram_generator::config[1142]: No configuration found. 
Apr 21 02:47:24.334502 kernel: Guest personality initialized and is inactive Apr 21 02:47:24.334510 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 21 02:47:24.334517 kernel: Initialized host personality Apr 21 02:47:24.334524 kernel: NET: Registered PF_VSOCK protocol family Apr 21 02:47:24.334531 systemd[1]: Populated /etc with preset unit settings. Apr 21 02:47:24.334542 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 21 02:47:24.334550 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 21 02:47:24.334559 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 21 02:47:24.334567 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 21 02:47:24.334575 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 21 02:47:24.334583 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 21 02:47:24.334590 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 21 02:47:24.334598 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 21 02:47:24.334608 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 21 02:47:24.334616 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 21 02:47:24.334624 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 21 02:47:24.334632 systemd[1]: Created slice user.slice - User and Session Slice. Apr 21 02:47:24.334640 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 02:47:24.334647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 02:47:24.334655 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Apr 21 02:47:24.334663 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 02:47:24.334671 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 02:47:24.334680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 02:47:24.334688 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 02:47:24.334695 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 02:47:24.334702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 02:47:24.334710 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 21 02:47:24.334717 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 02:47:24.334725 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 02:47:24.334732 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 02:47:24.334742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 02:47:24.334750 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 02:47:24.334758 systemd[1]: Reached target slices.target - Slice Units. Apr 21 02:47:24.334766 systemd[1]: Reached target swap.target - Swaps. Apr 21 02:47:24.334773 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 02:47:24.334781 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 21 02:47:24.334788 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 21 02:47:24.334797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 02:47:24.334804 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 02:47:24.334813 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 21 02:47:24.334821 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 02:47:24.334828 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 02:47:24.334836 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 02:47:24.334844 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 02:47:24.334851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:24.334859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 02:47:24.334866 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 02:47:24.334874 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 21 02:47:24.334884 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 02:47:24.334892 systemd[1]: Reached target machines.target - Containers. Apr 21 02:47:24.334899 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 02:47:24.334907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 02:47:24.334917 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 02:47:24.334925 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 02:47:24.334935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 02:47:24.334943 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 02:47:24.334952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 02:47:24.334992 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 21 02:47:24.335001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 02:47:24.335009 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 21 02:47:24.335016 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 21 02:47:24.335025 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 21 02:47:24.335032 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 21 02:47:24.335042 systemd[1]: Stopped systemd-fsck-usr.service. Apr 21 02:47:24.335051 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 21 02:47:24.335059 kernel: fuse: init (API version 7.41) Apr 21 02:47:24.335067 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 02:47:24.335074 kernel: loop: module loaded Apr 21 02:47:24.335081 kernel: ACPI: bus type drm_connector registered Apr 21 02:47:24.335088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 02:47:24.335153 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 02:47:24.335163 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 21 02:47:24.335188 systemd-journald[1227]: Collecting audit messages is disabled. Apr 21 02:47:24.335209 systemd-journald[1227]: Journal started Apr 21 02:47:24.335228 systemd-journald[1227]: Runtime Journal (/run/log/journal/e2e8d8449e674425a689485bf6f09001) is 6M, max 48.1M, 42.1M free. Apr 21 02:47:23.783725 systemd[1]: Queued start job for default target multi-user.target. Apr 21 02:47:23.795330 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Apr 21 02:47:23.795771 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 21 02:47:23.796304 systemd[1]: systemd-journald.service: Consumed 1.066s CPU time. Apr 21 02:47:24.349304 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 21 02:47:24.355176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 02:47:24.360943 systemd[1]: verity-setup.service: Deactivated successfully. Apr 21 02:47:24.363665 systemd[1]: Stopped verity-setup.service. Apr 21 02:47:24.363683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:24.377081 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 02:47:24.377461 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 21 02:47:24.380876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 21 02:47:24.384471 systemd[1]: Mounted media.mount - External Media Directory. Apr 21 02:47:24.387667 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 21 02:47:24.391338 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 21 02:47:24.394955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 21 02:47:24.398296 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 21 02:47:24.402278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 02:47:24.406443 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 21 02:47:24.406609 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 21 02:47:24.410593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 02:47:24.410744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 21 02:47:24.414612 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 02:47:24.414745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 02:47:24.418416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 02:47:24.418576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 02:47:24.422641 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 21 02:47:24.422806 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 21 02:47:24.426391 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 02:47:24.426555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 02:47:24.430555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 02:47:24.434369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 02:47:24.438576 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 21 02:47:24.443522 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 21 02:47:24.447776 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 02:47:24.460018 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 02:47:24.464776 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 21 02:47:24.469067 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 21 02:47:24.473249 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 21 02:47:24.473305 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 02:47:24.481087 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Apr 21 02:47:24.492033 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 21 02:47:24.495723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 02:47:24.500024 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 21 02:47:24.508318 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 21 02:47:24.514950 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 02:47:24.515929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 21 02:47:24.522928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 02:47:24.529078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:47:24.537729 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 02:47:24.555006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 02:47:24.557261 systemd-journald[1227]: Time spent on flushing to /var/log/journal/e2e8d8449e674425a689485bf6f09001 is 51.284ms for 1076 entries. Apr 21 02:47:24.557261 systemd-journald[1227]: System Journal (/var/log/journal/e2e8d8449e674425a689485bf6f09001) is 8M, max 195.6M, 187.6M free. Apr 21 02:47:24.629862 systemd-journald[1227]: Received client request to flush runtime journal. Apr 21 02:47:24.629894 kernel: loop0: detected capacity change from 0 to 110984 Apr 21 02:47:24.576555 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 21 02:47:24.582739 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 21 02:47:24.591271 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 21 02:47:24.605384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 21 02:47:24.616370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 21 02:47:24.626033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:47:24.636157 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 21 02:47:24.650608 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Apr 21 02:47:24.650642 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Apr 21 02:47:24.656705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 02:47:24.670076 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 02:47:24.671383 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 21 02:47:24.680013 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 21 02:47:24.700213 kernel: loop1: detected capacity change from 0 to 217752 Apr 21 02:47:24.727711 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 21 02:47:24.743294 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 02:47:24.757923 kernel: loop2: detected capacity change from 0 to 128560 Apr 21 02:47:24.801416 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 02:47:24.818633 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Apr 21 02:47:24.818653 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Apr 21 02:47:24.823326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 21 02:47:24.864334 kernel: loop3: detected capacity change from 0 to 110984 Apr 21 02:47:24.914159 kernel: loop4: detected capacity change from 0 to 217752 Apr 21 02:47:24.949397 kernel: loop5: detected capacity change from 0 to 128560 Apr 21 02:47:24.984545 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 21 02:47:24.985398 (sd-merge)[1288]: Merged extensions into '/usr'. Apr 21 02:47:24.993526 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 02:47:24.993657 systemd[1]: Reloading... Apr 21 02:47:25.371184 zram_generator::config[1314]: No configuration found. Apr 21 02:47:25.587709 ldconfig[1257]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 02:47:25.675686 systemd[1]: Reloading finished in 677 ms. Apr 21 02:47:25.693055 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 02:47:25.697171 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 02:47:25.717290 systemd[1]: Starting ensure-sysext.service... Apr 21 02:47:25.720732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 02:47:25.734796 systemd[1]: Reload requested from client PID 1351 ('systemctl') (unit ensure-sysext.service)... Apr 21 02:47:25.734831 systemd[1]: Reloading... Apr 21 02:47:25.788204 zram_generator::config[1378]: No configuration found. Apr 21 02:47:25.812903 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 21 02:47:25.812926 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 21 02:47:25.814477 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 21 02:47:25.817169 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 02:47:25.820749 systemd-tmpfiles[1352]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 02:47:25.822290 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Apr 21 02:47:25.822405 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Apr 21 02:47:25.825575 systemd-tmpfiles[1352]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 02:47:25.825646 systemd-tmpfiles[1352]: Skipping /boot Apr 21 02:47:25.876155 systemd-tmpfiles[1352]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 02:47:25.876455 systemd-tmpfiles[1352]: Skipping /boot Apr 21 02:47:26.088877 systemd[1]: Reloading finished in 353 ms. Apr 21 02:47:26.108147 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 02:47:26.116552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 02:47:26.132603 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 02:47:26.136781 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 02:47:26.141380 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 02:47:26.154335 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 02:47:26.160388 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 02:47:26.166791 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 02:47:26.171819 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 02:47:26.182918 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 21 02:47:26.189219 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 02:47:26.200787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:26.200924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 02:47:26.201930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 02:47:26.207743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 02:47:26.211967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 02:47:26.216346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 02:47:26.216636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 21 02:47:26.216762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:26.223866 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 02:47:26.231268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 02:47:26.232751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 02:47:26.239623 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 02:47:26.244287 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 02:47:26.248884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 02:47:26.249078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 21 02:47:26.272642 augenrules[1449]: No rules Apr 21 02:47:26.526062 systemd-udevd[1422]: Using default interface naming scheme 'v255'. Apr 21 02:47:26.561071 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 02:47:26.566440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 02:47:26.570956 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 02:47:26.571221 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 02:47:26.575307 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 02:47:26.575477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 02:47:26.600932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:26.601090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 02:47:26.603316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 02:47:26.608945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 02:47:26.616501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 02:47:26.622479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 02:47:26.622578 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 21 02:47:26.627426 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 02:47:26.630923 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 02:47:26.631032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:26.638213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:26.640879 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 02:47:26.644680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 02:47:26.648847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 02:47:26.653563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 02:47:26.653671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 21 02:47:26.653773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 02:47:26.653827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 02:47:26.657368 systemd[1]: Finished ensure-sysext.service. Apr 21 02:47:26.663007 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 02:47:26.664804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 21 02:47:26.673637 systemd-resolved[1420]: Positive Trust Anchors: Apr 21 02:47:26.673647 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 02:47:26.673672 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 02:47:26.677682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 02:47:26.680343 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 02:47:26.682730 systemd-resolved[1420]: Defaulting to hostname 'linux'. Apr 21 02:47:26.687670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 02:47:26.691664 augenrules[1500]: /sbin/augenrules: No change Apr 21 02:47:26.703732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 02:47:26.866697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 02:47:26.873692 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 02:47:26.874484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 02:47:26.879765 augenrules[1524]: No rules Apr 21 02:47:26.880873 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 02:47:26.881553 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 02:47:26.888050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 21 02:47:26.888256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 02:47:26.901080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 02:47:26.906774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 02:47:26.906841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 02:47:26.930921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 02:47:26.937399 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 02:47:26.937637 systemd-networkd[1498]: lo: Link UP Apr 21 02:47:26.937668 systemd-networkd[1498]: lo: Gained carrier Apr 21 02:47:26.939028 systemd-networkd[1498]: Enumeration completed Apr 21 02:47:26.940924 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 02:47:26.940930 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 02:47:26.941865 systemd-networkd[1498]: eth0: Link UP Apr 21 02:47:26.942258 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 02:47:26.942310 systemd-networkd[1498]: eth0: Gained carrier Apr 21 02:47:26.942322 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 02:47:26.947547 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 02:47:26.952444 systemd[1]: Reached target network.target - Network. 
Apr 21 02:47:26.960210 systemd-networkd[1498]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 02:47:28.001338 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 02:47:28.001405 systemd-timesyncd[1513]: Initial clock synchronization to Tue 2026-04-21 02:47:28.001285 UTC. Apr 21 02:47:28.001727 systemd-resolved[1420]: Clock change detected. Flushing caches. Apr 21 02:47:28.002042 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 21 02:47:28.008188 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 02:47:28.014994 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 02:47:28.015045 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 02:47:28.019330 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 02:47:28.029350 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 02:47:28.035834 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 21 02:47:28.036111 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 02:47:28.037211 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 02:47:28.044288 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 02:47:28.044421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 02:47:28.049338 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 21 02:47:28.053288 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 21 02:47:28.057380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 02:47:28.057401 systemd[1]: Reached target paths.target - Path Units. Apr 21 02:47:28.061280 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 02:47:28.078453 kernel: ACPI: button: Power Button [PWRF] Apr 21 02:47:28.065619 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 02:47:28.071209 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 02:47:28.075375 systemd[1]: Reached target timers.target - Timer Units. Apr 21 02:47:28.081405 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 02:47:28.086619 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 02:47:28.095828 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 21 02:47:28.100313 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 21 02:47:28.104386 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 21 02:47:28.265618 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 02:47:28.271621 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 21 02:47:28.279945 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 21 02:47:28.284940 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 02:47:28.360232 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 02:47:28.363544 systemd[1]: Reached target basic.target - Basic System. Apr 21 02:47:28.367487 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 21 02:47:28.367513 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 02:47:28.370370 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 02:47:28.374680 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 02:47:28.380389 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 02:47:28.387964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 02:47:28.401383 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 02:47:28.404508 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 02:47:28.409056 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 21 02:47:28.415088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 02:47:28.421423 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 02:47:28.429679 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 02:47:28.435344 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 02:47:28.443936 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 02:47:28.449324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 02:47:28.454388 jq[1567]: false Apr 21 02:47:28.454709 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 02:47:28.455195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 02:47:28.455714 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 21 02:47:28.460337 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 02:47:28.477595 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 02:47:28.478841 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Apr 21 02:47:28.478847 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Apr 21 02:47:28.485256 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 02:47:28.485439 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 02:47:28.485614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 02:47:28.485765 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 02:47:28.504638 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Apr 21 02:47:28.504638 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 21 02:47:28.504638 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Apr 21 02:47:28.503951 oslogin_cache_refresh[1569]: Failure getting users, quitting Apr 21 02:47:28.507726 jq[1582]: true Apr 21 02:47:28.504037 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 21 02:47:28.504076 oslogin_cache_refresh[1569]: Refreshing group entry cache Apr 21 02:47:28.515341 oslogin_cache_refresh[1569]: Failure getting groups, quitting Apr 21 02:47:28.516422 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Apr 21 02:47:28.518233 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Apr 21 02:47:28.518233 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 21 02:47:28.515349 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 21 02:47:28.517941 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 21 02:47:28.529309 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 02:47:28.531632 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 21 02:47:28.534196 jq[1592]: true Apr 21 02:47:28.550278 update_engine[1580]: I20260421 02:47:28.546790 1580 main.cc:92] Flatcar Update Engine starting Apr 21 02:47:28.565552 extend-filesystems[1568]: Found /dev/vda6 Apr 21 02:47:28.570380 (ntainerd)[1601]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 02:47:28.583630 extend-filesystems[1568]: Found /dev/vda9 Apr 21 02:47:28.589166 tar[1600]: linux-amd64/LICENSE Apr 21 02:47:28.589166 tar[1600]: linux-amd64/helm Apr 21 02:47:28.593753 extend-filesystems[1568]: Checking size of /dev/vda9 Apr 21 02:47:28.613687 dbus-daemon[1565]: [system] SELinux support is enabled Apr 21 02:47:28.614260 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 21 02:47:28.618174 extend-filesystems[1568]: Resized partition /dev/vda9 Apr 21 02:47:28.619930 extend-filesystems[1628]: resize2fs 1.47.3 (8-Jul-2025) Apr 21 02:47:28.629879 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 02:47:28.623915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 21 02:47:28.623934 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 02:47:28.628360 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 02:47:28.628488 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 02:47:28.659842 systemd[1]: Started update-engine.service - Update Engine. Apr 21 02:47:28.665447 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 02:47:28.688309 update_engine[1580]: I20260421 02:47:28.665719 1580 update_check_scheduler.cc:74] Next update check in 10m2s Apr 21 02:47:28.684585 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button) Apr 21 02:47:28.684596 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 02:47:28.687450 systemd-logind[1574]: New seat seat0. Apr 21 02:47:28.689299 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 02:47:28.727974 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 02:47:28.746891 extend-filesystems[1628]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 02:47:28.746891 extend-filesystems[1628]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 02:47:28.746891 extend-filesystems[1628]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 02:47:28.767235 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Apr 21 02:47:28.767383 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Apr 21 02:47:28.747094 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 02:47:28.747316 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 02:47:28.770440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 02:47:28.774837 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 02:47:28.785077 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 21 02:47:28.810905 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 02:47:29.220383 kernel: hrtimer: interrupt took 8751263 ns Apr 21 02:47:29.329497 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 02:47:29.374430 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 02:47:29.382437 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 02:47:29.413703 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 02:47:29.413919 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 02:47:29.419362 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 02:47:29.614636 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 02:47:29.621807 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 02:47:29.626225 containerd[1601]: time="2026-04-21T02:47:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 21 02:47:29.627173 containerd[1601]: time="2026-04-21T02:47:29.626889029Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 21 02:47:29.627189 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 02:47:29.632476 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 21 02:47:29.663747 containerd[1601]: time="2026-04-21T02:47:29.663620872Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="151.788µs" Apr 21 02:47:29.663926 containerd[1601]: time="2026-04-21T02:47:29.663912982Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 21 02:47:29.664036 containerd[1601]: time="2026-04-21T02:47:29.663995516Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 21 02:47:29.664405 containerd[1601]: time="2026-04-21T02:47:29.664389686Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 21 02:47:29.664468 containerd[1601]: time="2026-04-21T02:47:29.664461093Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 21 02:47:29.664613 containerd[1601]: time="2026-04-21T02:47:29.664604143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 21 02:47:29.664814 containerd[1601]: time="2026-04-21T02:47:29.664801467Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 21 02:47:29.664847 containerd[1601]: time="2026-04-21T02:47:29.664840743Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 21 02:47:29.665270 containerd[1601]: time="2026-04-21T02:47:29.665250823Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 21 02:47:29.665327 containerd[1601]: time="2026-04-21T02:47:29.665318565Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 21 02:47:29.665445 containerd[1601]: time="2026-04-21T02:47:29.665434715Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 21 02:47:29.665474 containerd[1601]: time="2026-04-21T02:47:29.665468787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 21 02:47:29.665713 containerd[1601]: time="2026-04-21T02:47:29.665702885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 21 02:47:29.666116 containerd[1601]: time="2026-04-21T02:47:29.666100780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 21 02:47:29.666393 containerd[1601]: time="2026-04-21T02:47:29.666345920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 21 02:47:29.666412 containerd[1601]: time="2026-04-21T02:47:29.666396017Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 21 02:47:29.666825 containerd[1601]: time="2026-04-21T02:47:29.666787153Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 21 02:47:29.667739 containerd[1601]: time="2026-04-21T02:47:29.667682374Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 21 02:47:29.667954 containerd[1601]: time="2026-04-21T02:47:29.667909630Z" level=info msg="metadata content store policy set" policy=shared Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.673799890Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.673949513Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.673962244Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.673971068Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.673980330Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.673989898Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674049270Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674058931Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674067654Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674074523Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674081087Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674090539Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674335841Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 21 02:47:29.675193 containerd[1601]: time="2026-04-21T02:47:29.674547774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674586498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674632737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674645406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674687869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674701665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674777248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674791683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674804295Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 21 02:47:29.675549 containerd[1601]: time="2026-04-21T02:47:29.674815140Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 21 02:47:29.675716 containerd[1601]: 
time="2026-04-21T02:47:29.675072405Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 21 02:47:29.675807 containerd[1601]: time="2026-04-21T02:47:29.675792565Z" level=info msg="Start snapshots syncer" Apr 21 02:47:29.675945 containerd[1601]: time="2026-04-21T02:47:29.675930294Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 21 02:47:29.677216 containerd[1601]: time="2026-04-21T02:47:29.677087326Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 21 02:47:29.677846 containerd[1601]: time="2026-04-21T02:47:29.677828918Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 21 02:47:29.678205 containerd[1601]: time="2026-04-21T02:47:29.678189615Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 21 02:47:29.678450 containerd[1601]: time="2026-04-21T02:47:29.678404616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 21 02:47:29.678510 containerd[1601]: time="2026-04-21T02:47:29.678498859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 21 02:47:29.678553 containerd[1601]: time="2026-04-21T02:47:29.678544810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 21 02:47:29.678598 containerd[1601]: time="2026-04-21T02:47:29.678588947Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 21 02:47:29.678685 containerd[1601]: time="2026-04-21T02:47:29.678672679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 21 02:47:29.678739 containerd[1601]: time="2026-04-21T02:47:29.678730390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 21 02:47:29.678789 containerd[1601]: time="2026-04-21T02:47:29.678779688Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 21 02:47:29.678921 containerd[1601]: time="2026-04-21T02:47:29.678909308Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 21 02:47:29.678969 containerd[1601]: time="2026-04-21T02:47:29.678959024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 21 02:47:29.679053 containerd[1601]: time="2026-04-21T02:47:29.679042909Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 21 02:47:29.679332 containerd[1601]: time="2026-04-21T02:47:29.679316946Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 02:47:29.679390 containerd[1601]: time="2026-04-21T02:47:29.679377765Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 02:47:29.679433 containerd[1601]: time="2026-04-21T02:47:29.679424201Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 02:47:29.679476 containerd[1601]: time="2026-04-21T02:47:29.679465419Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 02:47:29.679517 containerd[1601]: time="2026-04-21T02:47:29.679507479Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 21 02:47:29.679556 containerd[1601]: time="2026-04-21T02:47:29.679547944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 21 02:47:29.685208 containerd[1601]: time="2026-04-21T02:47:29.683926876Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 21 02:47:29.685208 containerd[1601]: time="2026-04-21T02:47:29.684381451Z" level=info msg="runtime interface created" Apr 21 02:47:29.685208 containerd[1601]: 
time="2026-04-21T02:47:29.684440830Z" level=info msg="created NRI interface" Apr 21 02:47:29.685208 containerd[1601]: time="2026-04-21T02:47:29.684568543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 21 02:47:29.685208 containerd[1601]: time="2026-04-21T02:47:29.684666319Z" level=info msg="Connect containerd service" Apr 21 02:47:29.685208 containerd[1601]: time="2026-04-21T02:47:29.684713423Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 02:47:29.689096 containerd[1601]: time="2026-04-21T02:47:29.688943874Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 02:47:29.877070 systemd-networkd[1498]: eth0: Gained IPv6LL Apr 21 02:47:29.885318 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 02:47:29.890288 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 02:47:29.896391 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 02:47:29.902343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:47:29.915487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 02:47:29.969932 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 02:47:29.970308 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 02:47:29.974964 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 02:47:30.022817 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 02:47:30.161603 tar[1600]: linux-amd64/README.md Apr 21 02:47:30.237507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 21 02:47:30.292383 containerd[1601]: time="2026-04-21T02:47:30.291892213Z" level=info msg="Start subscribing containerd event" Apr 21 02:47:30.292764 containerd[1601]: time="2026-04-21T02:47:30.292500694Z" level=info msg="Start recovering state" Apr 21 02:47:30.293413 containerd[1601]: time="2026-04-21T02:47:30.293363967Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 02:47:30.293562 containerd[1601]: time="2026-04-21T02:47:30.293532601Z" level=info msg="Start event monitor" Apr 21 02:47:30.293661 containerd[1601]: time="2026-04-21T02:47:30.293612024Z" level=info msg="Start cni network conf syncer for default" Apr 21 02:47:30.293754 containerd[1601]: time="2026-04-21T02:47:30.293684871Z" level=info msg="Start streaming server" Apr 21 02:47:30.294181 containerd[1601]: time="2026-04-21T02:47:30.293796362Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 21 02:47:30.294181 containerd[1601]: time="2026-04-21T02:47:30.293828079Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 02:47:30.294181 containerd[1601]: time="2026-04-21T02:47:30.293933183Z" level=info msg="runtime interface starting up..." Apr 21 02:47:30.294181 containerd[1601]: time="2026-04-21T02:47:30.294001139Z" level=info msg="starting plugins..." Apr 21 02:47:30.294359 containerd[1601]: time="2026-04-21T02:47:30.294348966Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 21 02:47:30.295179 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 02:47:30.299284 containerd[1601]: time="2026-04-21T02:47:30.298563967Z" level=info msg="containerd successfully booted in 0.673741s" Apr 21 02:47:31.370911 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 02:47:31.408967 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:38598.service - OpenSSH per-connection server daemon (10.0.0.1:38598). 
Apr 21 02:47:31.549065 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 38598 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:31.551453 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:31.563627 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 02:47:31.569727 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 02:47:31.597598 systemd-logind[1574]: New session 1 of user core. Apr 21 02:47:31.608058 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 02:47:31.617276 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 02:47:31.640502 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 02:47:31.645253 systemd-logind[1574]: New session c1 of user core. Apr 21 02:47:31.881831 systemd[1707]: Queued start job for default target default.target. Apr 21 02:47:31.897845 systemd[1707]: Created slice app.slice - User Application Slice. Apr 21 02:47:31.897909 systemd[1707]: Reached target paths.target - Paths. Apr 21 02:47:31.897940 systemd[1707]: Reached target timers.target - Timers. Apr 21 02:47:31.899702 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 02:47:31.986568 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 02:47:31.986969 systemd[1707]: Reached target sockets.target - Sockets. Apr 21 02:47:31.988099 systemd[1707]: Reached target basic.target - Basic System. Apr 21 02:47:31.988244 systemd[1707]: Reached target default.target - Main User Target. Apr 21 02:47:31.988794 systemd[1707]: Startup finished in 334ms. Apr 21 02:47:31.991477 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 02:47:32.006857 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 21 02:47:32.049322 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:38614.service - OpenSSH per-connection server daemon (10.0.0.1:38614). Apr 21 02:47:32.122799 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 38614 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:32.124191 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:32.143092 systemd-logind[1574]: New session 2 of user core. Apr 21 02:47:32.163772 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 02:47:32.199916 sshd[1721]: Connection closed by 10.0.0.1 port 38614 Apr 21 02:47:32.202651 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:32.212868 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:38614.service: Deactivated successfully. Apr 21 02:47:32.214819 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 02:47:32.215616 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Apr 21 02:47:32.218446 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:38616.service - OpenSSH per-connection server daemon (10.0.0.1:38616). Apr 21 02:47:32.225302 systemd-logind[1574]: Removed session 2. Apr 21 02:47:32.319196 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 38616 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:32.320899 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:32.329421 systemd-logind[1574]: New session 3 of user core. Apr 21 02:47:32.340720 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 02:47:32.358960 sshd[1730]: Connection closed by 10.0.0.1 port 38616 Apr 21 02:47:32.359298 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:32.362291 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:38616.service: Deactivated successfully. 
Apr 21 02:47:32.363942 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 02:47:32.364756 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Apr 21 02:47:32.365887 systemd-logind[1574]: Removed session 3. Apr 21 02:47:33.568954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:47:33.573234 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 02:47:33.577406 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 02:47:33.577461 systemd[1]: Startup finished in 4.011s (kernel) + 7.580s (initrd) + 9.240s (userspace) = 20.832s. Apr 21 02:47:34.386791 kubelet[1740]: E0421 02:47:34.386639 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 02:47:34.389533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 02:47:34.389714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 02:47:34.390284 systemd[1]: kubelet.service: Consumed 3.930s CPU time, 255.7M memory peak. Apr 21 02:47:42.382018 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:47326.service - OpenSSH per-connection server daemon (10.0.0.1:47326). Apr 21 02:47:42.433801 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 47326 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:42.434908 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:42.441710 systemd-logind[1574]: New session 4 of user core. Apr 21 02:47:42.456364 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 21 02:47:42.473853 sshd[1756]: Connection closed by 10.0.0.1 port 47326 Apr 21 02:47:42.474791 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:42.485190 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:47326.service: Deactivated successfully. Apr 21 02:47:42.486447 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 02:47:42.487222 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Apr 21 02:47:42.489038 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:47328.service - OpenSSH per-connection server daemon (10.0.0.1:47328). Apr 21 02:47:42.489877 systemd-logind[1574]: Removed session 4. Apr 21 02:47:42.535309 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:42.536781 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:42.541398 systemd-logind[1574]: New session 5 of user core. Apr 21 02:47:42.554873 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 02:47:42.566532 sshd[1765]: Connection closed by 10.0.0.1 port 47328 Apr 21 02:47:42.566952 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:42.578999 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:47328.service: Deactivated successfully. Apr 21 02:47:42.580363 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 02:47:42.582219 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Apr 21 02:47:42.585930 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:47332.service - OpenSSH per-connection server daemon (10.0.0.1:47332). Apr 21 02:47:42.587103 systemd-logind[1574]: Removed session 5. 
Apr 21 02:47:42.641643 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 47332 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:42.642627 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:42.652646 systemd-logind[1574]: New session 6 of user core. Apr 21 02:47:42.663333 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 02:47:42.677046 sshd[1774]: Connection closed by 10.0.0.1 port 47332 Apr 21 02:47:42.677531 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:42.684766 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:47332.service: Deactivated successfully. Apr 21 02:47:42.686617 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 02:47:42.687340 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Apr 21 02:47:42.689024 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:47346.service - OpenSSH per-connection server daemon (10.0.0.1:47346). Apr 21 02:47:42.689847 systemd-logind[1574]: Removed session 6. Apr 21 02:47:42.742463 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 47346 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:42.743742 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:42.748016 systemd-logind[1574]: New session 7 of user core. Apr 21 02:47:42.757318 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 21 02:47:42.772754 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 02:47:42.772969 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:47:42.796210 sudo[1785]: pam_unix(sudo:session): session closed for user root Apr 21 02:47:42.798672 sshd[1784]: Connection closed by 10.0.0.1 port 47346 Apr 21 02:47:42.799297 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:42.819365 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:47346.service: Deactivated successfully. Apr 21 02:47:42.820748 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 02:47:42.821586 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Apr 21 02:47:42.823505 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:47356.service - OpenSSH per-connection server daemon (10.0.0.1:47356). Apr 21 02:47:42.824367 systemd-logind[1574]: Removed session 7. Apr 21 02:47:42.878634 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 47356 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:42.880753 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:42.892937 systemd-logind[1574]: New session 8 of user core. Apr 21 02:47:42.907612 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 21 02:47:42.923035 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 02:47:42.923420 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:47:42.929748 sudo[1796]: pam_unix(sudo:session): session closed for user root Apr 21 02:47:42.935486 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 21 02:47:42.935692 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:47:42.945727 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 02:47:42.987570 augenrules[1818]: No rules Apr 21 02:47:42.988419 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 02:47:42.988613 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 02:47:42.989957 sudo[1795]: pam_unix(sudo:session): session closed for user root Apr 21 02:47:42.992720 sshd[1794]: Connection closed by 10.0.0.1 port 47356 Apr 21 02:47:42.994317 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Apr 21 02:47:43.003402 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:47356.service: Deactivated successfully. Apr 21 02:47:43.004623 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 02:47:43.005307 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Apr 21 02:47:43.006978 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:47364.service - OpenSSH per-connection server daemon (10.0.0.1:47364). Apr 21 02:47:43.007741 systemd-logind[1574]: Removed session 8. Apr 21 02:47:43.114518 sshd[1827]: Accepted publickey for core from 10.0.0.1 port 47364 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:47:43.116109 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:47:43.120681 systemd-logind[1574]: New session 9 of user core. 
Apr 21 02:47:43.131380 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 02:47:43.142648 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 02:47:43.142931 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 02:47:44.523855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 02:47:44.525783 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 02:47:44.527003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 02:47:44.541517 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 02:47:45.196382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 02:47:45.200321 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 02:47:45.415658 kubelet[1865]: E0421 02:47:45.415595 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 02:47:45.419446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 02:47:45.419570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 02:47:45.419843 systemd[1]: kubelet.service: Consumed 829ms CPU time, 111.2M memory peak. 
Apr 21 02:47:45.433744 dockerd[1851]: time="2026-04-21T02:47:45.432792308Z" level=info msg="Starting up"
Apr 21 02:47:45.440374 dockerd[1851]: time="2026-04-21T02:47:45.440277427Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 21 02:47:45.487791 dockerd[1851]: time="2026-04-21T02:47:45.487493799Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 21 02:47:45.757254 dockerd[1851]: time="2026-04-21T02:47:45.756857483Z" level=info msg="Loading containers: start."
Apr 21 02:47:45.780191 kernel: Initializing XFRM netlink socket
Apr 21 02:47:46.205440 systemd-networkd[1498]: docker0: Link UP
Apr 21 02:47:46.233627 dockerd[1851]: time="2026-04-21T02:47:46.233235473Z" level=info msg="Loading containers: done."
Apr 21 02:47:46.261178 dockerd[1851]: time="2026-04-21T02:47:46.260993879Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 02:47:46.261452 dockerd[1851]: time="2026-04-21T02:47:46.261424190Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 21 02:47:46.261637 dockerd[1851]: time="2026-04-21T02:47:46.261591084Z" level=info msg="Initializing buildkit"
Apr 21 02:47:46.315388 dockerd[1851]: time="2026-04-21T02:47:46.315245525Z" level=info msg="Completed buildkit initialization"
Apr 21 02:47:46.326183 dockerd[1851]: time="2026-04-21T02:47:46.326065455Z" level=info msg="Daemon has completed initialization"
Apr 21 02:47:46.326522 dockerd[1851]: time="2026-04-21T02:47:46.326312688Z" level=info msg="API listen on /run/docker.sock"
Apr 21 02:47:46.326912 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 02:47:47.970421 containerd[1601]: time="2026-04-21T02:47:47.970018412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 21 02:47:48.563990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176266108.mount: Deactivated successfully.
Apr 21 02:47:50.049819 containerd[1601]: time="2026-04-21T02:47:50.049698637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:50.050805 containerd[1601]: time="2026-04-21T02:47:50.050069627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861"
Apr 21 02:47:50.052233 containerd[1601]: time="2026-04-21T02:47:50.052162611Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:50.054819 containerd[1601]: time="2026-04-21T02:47:50.054763410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:50.055887 containerd[1601]: time="2026-04-21T02:47:50.055841385Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.085426806s"
Apr 21 02:47:50.055954 containerd[1601]: time="2026-04-21T02:47:50.055927129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 21 02:47:50.058282 containerd[1601]: time="2026-04-21T02:47:50.057994180Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 21 02:47:51.649810 containerd[1601]: time="2026-04-21T02:47:51.649713414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:51.650516 containerd[1601]: time="2026-04-21T02:47:51.650247081Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591"
Apr 21 02:47:51.651323 containerd[1601]: time="2026-04-21T02:47:51.651232745Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:51.654231 containerd[1601]: time="2026-04-21T02:47:51.654170352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:51.655445 containerd[1601]: time="2026-04-21T02:47:51.655400485Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.597382696s"
Apr 21 02:47:51.655445 containerd[1601]: time="2026-04-21T02:47:51.655442073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 21 02:47:51.658935 containerd[1601]: time="2026-04-21T02:47:51.658805640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 21 02:47:52.929521 containerd[1601]: time="2026-04-21T02:47:52.929399903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:52.930374 containerd[1601]: time="2026-04-21T02:47:52.930342977Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222"
Apr 21 02:47:52.932283 containerd[1601]: time="2026-04-21T02:47:52.932233819Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:52.940394 containerd[1601]: time="2026-04-21T02:47:52.940318384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:52.941394 containerd[1601]: time="2026-04-21T02:47:52.941360813Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.282398428s"
Apr 21 02:47:52.941430 containerd[1601]: time="2026-04-21T02:47:52.941403304Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 21 02:47:52.943233 containerd[1601]: time="2026-04-21T02:47:52.943214228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 21 02:47:54.073266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312561814.mount: Deactivated successfully.
Apr 21 02:47:54.838019 containerd[1601]: time="2026-04-21T02:47:54.837867697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:54.838776 containerd[1601]: time="2026-04-21T02:47:54.838343846Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819"
Apr 21 02:47:54.839431 containerd[1601]: time="2026-04-21T02:47:54.839379817Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:54.841165 containerd[1601]: time="2026-04-21T02:47:54.841079111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:54.841663 containerd[1601]: time="2026-04-21T02:47:54.841614926Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.898314475s"
Apr 21 02:47:54.841710 containerd[1601]: time="2026-04-21T02:47:54.841659388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 21 02:47:54.843252 containerd[1601]: time="2026-04-21T02:47:54.843223796Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 21 02:47:55.319739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317321491.mount: Deactivated successfully.
Apr 21 02:47:55.672041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 02:47:55.673872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 02:47:55.831297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 02:47:55.840710 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 02:47:56.213957 kubelet[2208]: E0421 02:47:56.213902 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 02:47:56.216850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 02:47:56.216965 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 02:47:56.217699 systemd[1]: kubelet.service: Consumed 546ms CPU time, 112.1M memory peak.
Apr 21 02:47:56.965658 containerd[1601]: time="2026-04-21T02:47:56.965578988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:56.966445 containerd[1601]: time="2026-04-21T02:47:56.966218074Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980"
Apr 21 02:47:56.967255 containerd[1601]: time="2026-04-21T02:47:56.967197761Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:56.969409 containerd[1601]: time="2026-04-21T02:47:56.969373051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:56.970392 containerd[1601]: time="2026-04-21T02:47:56.970338351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.127078756s"
Apr 21 02:47:56.970392 containerd[1601]: time="2026-04-21T02:47:56.970379175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 21 02:47:56.972082 containerd[1601]: time="2026-04-21T02:47:56.971913544Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 02:47:57.384979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828909364.mount: Deactivated successfully.
Apr 21 02:47:57.393816 containerd[1601]: time="2026-04-21T02:47:57.393743170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:57.394009 containerd[1601]: time="2026-04-21T02:47:57.393986404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 21 02:47:57.395186 containerd[1601]: time="2026-04-21T02:47:57.395081288Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:57.396736 containerd[1601]: time="2026-04-21T02:47:57.396678560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:57.397529 containerd[1601]: time="2026-04-21T02:47:57.397472291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 425.536989ms"
Apr 21 02:47:57.397529 containerd[1601]: time="2026-04-21T02:47:57.397515441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 02:47:57.399192 containerd[1601]: time="2026-04-21T02:47:57.398953484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 21 02:47:57.915903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122919571.mount: Deactivated successfully.
Apr 21 02:47:58.970773 containerd[1601]: time="2026-04-21T02:47:58.970638674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:58.971388 containerd[1601]: time="2026-04-21T02:47:58.971045481Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979"
Apr 21 02:47:58.972041 containerd[1601]: time="2026-04-21T02:47:58.971988184Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:58.975387 containerd[1601]: time="2026-04-21T02:47:58.975304319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 02:47:58.976022 containerd[1601]: time="2026-04-21T02:47:58.975932043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.576956756s"
Apr 21 02:47:58.976022 containerd[1601]: time="2026-04-21T02:47:58.975969399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 21 02:48:00.558066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 02:48:00.558294 systemd[1]: kubelet.service: Consumed 546ms CPU time, 112.1M memory peak.
Apr 21 02:48:00.562055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 02:48:00.593428 systemd[1]: Reload requested from client PID 2328 ('systemctl') (unit session-9.scope)...
Apr 21 02:48:00.593463 systemd[1]: Reloading...
Apr 21 02:48:00.719244 zram_generator::config[2371]: No configuration found.
Apr 21 02:48:00.903060 systemd[1]: Reloading finished in 309 ms.
Apr 21 02:48:00.962512 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 02:48:00.962591 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 02:48:00.962795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 02:48:00.962825 systemd[1]: kubelet.service: Consumed 117ms CPU time, 98.4M memory peak.
Apr 21 02:48:00.964415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 02:48:01.098096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 02:48:01.114937 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 02:48:01.197955 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 02:48:01.266731 kubelet[2419]: I0421 02:48:01.266667 2419 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 02:48:01.266731 kubelet[2419]: I0421 02:48:01.266718 2419 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 02:48:01.266731 kubelet[2419]: I0421 02:48:01.266733 2419 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 02:48:01.266731 kubelet[2419]: I0421 02:48:01.266738 2419 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 02:48:01.267109 kubelet[2419]: I0421 02:48:01.267073 2419 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 02:48:01.308653 kubelet[2419]: E0421 02:48:01.308598 2419 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 02:48:01.315459 kubelet[2419]: I0421 02:48:01.315379 2419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 02:48:01.321927 kubelet[2419]: I0421 02:48:01.321902 2419 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 21 02:48:01.330365 kubelet[2419]: I0421 02:48:01.330254 2419 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 02:48:01.331980 kubelet[2419]: I0421 02:48:01.331857 2419 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 02:48:01.332320 kubelet[2419]: I0421 02:48:01.331964 2419 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 02:48:01.332644 kubelet[2419]: I0421 02:48:01.332358 2419 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 02:48:01.332644 kubelet[2419]: I0421 02:48:01.332367 2419 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 02:48:01.332644 kubelet[2419]: I0421 02:48:01.332513 2419 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 02:48:01.336739 kubelet[2419]: I0421 02:48:01.336655 2419 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 02:48:01.337349 kubelet[2419]: I0421 02:48:01.337289 2419 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 02:48:01.337349 kubelet[2419]: I0421 02:48:01.337328 2419 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 02:48:01.337491 kubelet[2419]: I0421 02:48:01.337471 2419 kubelet.go:394] "Adding apiserver pod source"
Apr 21 02:48:01.337550 kubelet[2419]: I0421 02:48:01.337531 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 02:48:01.343820 kubelet[2419]: I0421 02:48:01.343797 2419 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 21 02:48:01.355398 kubelet[2419]: I0421 02:48:01.355083 2419 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 02:48:01.355398 kubelet[2419]: I0421 02:48:01.355368 2419 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 02:48:01.355795 kubelet[2419]: W0421 02:48:01.355746 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 02:48:01.360472 kubelet[2419]: I0421 02:48:01.359826 2419 server.go:1257] "Started kubelet"
Apr 21 02:48:01.360472 kubelet[2419]: I0421 02:48:01.360267 2419 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 02:48:01.360472 kubelet[2419]: I0421 02:48:01.360398 2419 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 02:48:01.367858 kubelet[2419]: I0421 02:48:01.366938 2419 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 02:48:01.367858 kubelet[2419]: I0421 02:48:01.367333 2419 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 02:48:01.372485 kubelet[2419]: I0421 02:48:01.372272 2419 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 02:48:01.372784 kubelet[2419]: E0421 02:48:01.371719 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a83f539dc93fdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 02:48:01.359708127 +0000 UTC m=+0.238986410,LastTimestamp:2026-04-21 02:48:01.359708127 +0000 UTC m=+0.238986410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 21 02:48:01.374106 kubelet[2419]: I0421 02:48:01.374091 2419 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 02:48:01.374523 kubelet[2419]: I0421 02:48:01.374453 2419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 02:48:01.376051 kubelet[2419]: I0421 02:48:01.376036 2419 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 02:48:01.376479 kubelet[2419]: E0421 02:48:01.376465 2419 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 02:48:01.376767 kubelet[2419]: I0421 02:48:01.376755 2419 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 02:48:01.376926 kubelet[2419]: I0421 02:48:01.376918 2419 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 02:48:01.378281 kubelet[2419]: E0421 02:48:01.378227 2419 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms"
Apr 21 02:48:01.378420 kubelet[2419]: E0421 02:48:01.378378 2419 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 02:48:01.380585 kubelet[2419]: I0421 02:48:01.380534 2419 factory.go:223] Registration of the containerd container factory successfully
Apr 21 02:48:01.380585 kubelet[2419]: I0421 02:48:01.380576 2419 factory.go:223] Registration of the systemd container factory successfully
Apr 21 02:48:01.380803 kubelet[2419]: I0421 02:48:01.380778 2419 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 02:48:01.399004 kubelet[2419]: I0421 02:48:01.398778 2419 cpu_manager.go:225] "Starting" policy="none"
Apr 21 02:48:01.399004 kubelet[2419]: I0421 02:48:01.398788 2419 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 02:48:01.399004 kubelet[2419]: I0421 02:48:01.398801 2419 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 02:48:01.400114 kubelet[2419]: I0421 02:48:01.400047 2419 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 02:48:01.402377 kubelet[2419]: I0421 02:48:01.402317 2419 policy_none.go:50] "Start"
Apr 21 02:48:01.402450 kubelet[2419]: I0421 02:48:01.402367 2419 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 02:48:01.402450 kubelet[2419]: I0421 02:48:01.402418 2419 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 02:48:01.404463 kubelet[2419]: I0421 02:48:01.404428 2419 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 02:48:01.404527 kubelet[2419]: I0421 02:48:01.404485 2419 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 02:48:01.404527 kubelet[2419]: I0421 02:48:01.404503 2419 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 02:48:01.404639 kubelet[2419]: I0421 02:48:01.404624 2419 policy_none.go:44] "Start"
Apr 21 02:48:01.404655 kubelet[2419]: E0421 02:48:01.404631 2419 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 02:48:01.414080 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 02:48:01.428012 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 02:48:01.430611 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 02:48:01.445893 kubelet[2419]: E0421 02:48:01.445789 2419 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 02:48:01.446237 kubelet[2419]: I0421 02:48:01.446215 2419 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 02:48:01.446574 kubelet[2419]: I0421 02:48:01.446253 2419 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 02:48:01.447228 kubelet[2419]: I0421 02:48:01.446676 2419 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 02:48:01.450850 kubelet[2419]: E0421 02:48:01.450752 2419 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 02:48:01.451833 kubelet[2419]: E0421 02:48:01.450989 2419 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 21 02:48:01.520347 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 21 02:48:01.536483 kubelet[2419]: E0421 02:48:01.536426 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:01.539044 systemd[1]: Created slice kubepods-burstable-pod97910e31781b3913a25d1ee86a9ed2b0.slice - libcontainer container kubepods-burstable-pod97910e31781b3913a25d1ee86a9ed2b0.slice.
Apr 21 02:48:01.540298 kubelet[2419]: E0421 02:48:01.540269 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:01.542066 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 21 02:48:01.543320 kubelet[2419]: E0421 02:48:01.543297 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:01.554466 kubelet[2419]: I0421 02:48:01.554417 2419 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 02:48:01.555025 kubelet[2419]: E0421 02:48:01.554988 2419 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost"
Apr 21 02:48:01.577851 kubelet[2419]: I0421 02:48:01.577816 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 02:48:01.577851 kubelet[2419]: I0421 02:48:01.577854 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97910e31781b3913a25d1ee86a9ed2b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97910e31781b3913a25d1ee86a9ed2b0\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:01.577851 kubelet[2419]: I0421 02:48:01.577870 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97910e31781b3913a25d1ee86a9ed2b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97910e31781b3913a25d1ee86a9ed2b0\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:01.577851 kubelet[2419]: I0421 02:48:01.577889 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:01.578237 kubelet[2419]: I0421 02:48:01.577904 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:01.578237 kubelet[2419]: I0421 02:48:01.577967 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:01.578237 kubelet[2419]: I0421 02:48:01.577981 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97910e31781b3913a25d1ee86a9ed2b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97910e31781b3913a25d1ee86a9ed2b0\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:01.578237 kubelet[2419]: I0421 02:48:01.578049 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:01.578237 kubelet[2419]: I0421 02:48:01.578096 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:01.578646 kubelet[2419]: E0421 02:48:01.578609 2419 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms"
Apr 21 02:48:01.763399 kubelet[2419]: I0421 02:48:01.763176 2419 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 02:48:01.763696 kubelet[2419]: E0421 02:48:01.763651 2419 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost"
Apr 21 02:48:01.844831 kubelet[2419]: E0421 02:48:01.844751 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:01.847890 kubelet[2419]: E0421 02:48:01.847825 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:01.849205 containerd[1601]: time="2026-04-21T02:48:01.848950328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}"
Apr 21 02:48:01.849205 containerd[1601]: time="2026-04-21T02:48:01.849086380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97910e31781b3913a25d1ee86a9ed2b0,Namespace:kube-system,Attempt:0,}"
Apr 21 02:48:01.849507 kubelet[2419]: E0421 02:48:01.849416 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:01.849805 containerd[1601]: time="2026-04-21T02:48:01.849772799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}"
Apr 21 02:48:01.981414 kubelet[2419]: E0421 02:48:01.980752 2419 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms"
Apr 21 02:48:02.174902 kubelet[2419]: I0421 02:48:02.174679 2419 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 02:48:02.175267 kubelet[2419]: E0421 02:48:02.175234 2419 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost"
Apr 21 02:48:02.422080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216087293.mount: Deactivated successfully.
Apr 21 02:48:02.432975 containerd[1601]: time="2026-04-21T02:48:02.432823777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 02:48:02.433631 containerd[1601]: time="2026-04-21T02:48:02.433555213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 21 02:48:02.437876 containerd[1601]: time="2026-04-21T02:48:02.437827023Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 02:48:02.439177 containerd[1601]: time="2026-04-21T02:48:02.439063655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 02:48:02.439570 containerd[1601]: time="2026-04-21T02:48:02.439526108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 02:48:02.441000 containerd[1601]: time="2026-04-21T02:48:02.440698794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 21 02:48:02.441909 containerd[1601]: time="2026-04-21T02:48:02.441877046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 21 02:48:02.442903 containerd[1601]: time="2026-04-21T02:48:02.442852769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 02:48:02.443407 containerd[1601]: time="2026-04-21T02:48:02.443350393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.548908ms"
Apr 21 02:48:02.448259 containerd[1601]: time="2026-04-21T02:48:02.448065267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.033738ms"
Apr 21 02:48:02.448498 containerd[1601]: time="2026-04-21T02:48:02.448436201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 595.195744ms"
Apr 21 02:48:02.487352 containerd[1601]: time="2026-04-21T02:48:02.487252257Z" level=info msg="connecting to shim d18b3b552bbc7c238fc3ab92cf6c4bfb3cbb21141d49bb676a3ef0da24fe0a5c" address="unix:///run/containerd/s/e25122b2bf5e90a1cadc68a3294073edc6f07171769e10928f7c04a7b270aa5a" namespace=k8s.io protocol=ttrpc version=3
Apr 21 02:48:02.487949 containerd[1601]: time="2026-04-21T02:48:02.487798057Z" level=info msg="connecting to shim d4a50177ee0b14310b8db7b6b30a92cdbda6679eb2e255d86694e5988527e700" address="unix:///run/containerd/s/40dd46c385cc8c3959a27a58011cd968b269777894fc810ee415ad9bb0ee6295" namespace=k8s.io protocol=ttrpc version=3
Apr 21 02:48:02.496419 containerd[1601]: time="2026-04-21T02:48:02.496389379Z" level=info msg="connecting to shim 7f5448b097f1ca4f96b5cf7388e7c13ec1e39dcd1b3478b1e430e9ffdd85b27b" address="unix:///run/containerd/s/ac49de4f13307f455da409bcfd51def244f6509d5ee94123ebb455f20aff8470" namespace=k8s.io protocol=ttrpc version=3
Apr 21 02:48:02.519327 systemd[1]: Started cri-containerd-d4a50177ee0b14310b8db7b6b30a92cdbda6679eb2e255d86694e5988527e700.scope - libcontainer container d4a50177ee0b14310b8db7b6b30a92cdbda6679eb2e255d86694e5988527e700.
Apr 21 02:48:02.522634 systemd[1]: Started cri-containerd-7f5448b097f1ca4f96b5cf7388e7c13ec1e39dcd1b3478b1e430e9ffdd85b27b.scope - libcontainer container 7f5448b097f1ca4f96b5cf7388e7c13ec1e39dcd1b3478b1e430e9ffdd85b27b.
Apr 21 02:48:02.523930 systemd[1]: Started cri-containerd-d18b3b552bbc7c238fc3ab92cf6c4bfb3cbb21141d49bb676a3ef0da24fe0a5c.scope - libcontainer container d18b3b552bbc7c238fc3ab92cf6c4bfb3cbb21141d49bb676a3ef0da24fe0a5c.
Apr 21 02:48:02.573954 containerd[1601]: time="2026-04-21T02:48:02.573896504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a50177ee0b14310b8db7b6b30a92cdbda6679eb2e255d86694e5988527e700\""
Apr 21 02:48:02.576097 kubelet[2419]: E0421 02:48:02.576045 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:02.586943 containerd[1601]: time="2026-04-21T02:48:02.586752278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d18b3b552bbc7c238fc3ab92cf6c4bfb3cbb21141d49bb676a3ef0da24fe0a5c\""
Apr 21 02:48:02.588773 kubelet[2419]: E0421 02:48:02.588483 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:02.593661 containerd[1601]: time="2026-04-21T02:48:02.593540059Z" level=info msg="CreateContainer within sandbox \"d4a50177ee0b14310b8db7b6b30a92cdbda6679eb2e255d86694e5988527e700\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 02:48:02.597906 containerd[1601]: time="2026-04-21T02:48:02.597820115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97910e31781b3913a25d1ee86a9ed2b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f5448b097f1ca4f96b5cf7388e7c13ec1e39dcd1b3478b1e430e9ffdd85b27b\""
Apr 21 02:48:02.599050 kubelet[2419]: E0421 02:48:02.598989 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:02.603192 containerd[1601]: time="2026-04-21T02:48:02.602805759Z" level=info msg="CreateContainer within sandbox \"d18b3b552bbc7c238fc3ab92cf6c4bfb3cbb21141d49bb676a3ef0da24fe0a5c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 02:48:02.609906 containerd[1601]: time="2026-04-21T02:48:02.609825468Z" level=info msg="CreateContainer within sandbox \"7f5448b097f1ca4f96b5cf7388e7c13ec1e39dcd1b3478b1e430e9ffdd85b27b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 02:48:02.613818 containerd[1601]: time="2026-04-21T02:48:02.613790756Z" level=info msg="Container 7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:48:02.620862 containerd[1601]: time="2026-04-21T02:48:02.620777755Z" level=info msg="Container 0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:48:02.623789 containerd[1601]: time="2026-04-21T02:48:02.623404163Z" level=info msg="Container 81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:48:02.626889 containerd[1601]: time="2026-04-21T02:48:02.626811084Z" level=info msg="CreateContainer within sandbox \"d4a50177ee0b14310b8db7b6b30a92cdbda6679eb2e255d86694e5988527e700\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865\""
Apr 21 02:48:02.630728 containerd[1601]: time="2026-04-21T02:48:02.630635773Z" level=info msg="CreateContainer within sandbox \"d18b3b552bbc7c238fc3ab92cf6c4bfb3cbb21141d49bb676a3ef0da24fe0a5c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084\""
Apr 21 02:48:02.631940 containerd[1601]: time="2026-04-21T02:48:02.631662650Z" level=info msg="StartContainer for \"0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084\""
Apr 21 02:48:02.631940 containerd[1601]: time="2026-04-21T02:48:02.631747758Z" level=info msg="StartContainer for \"7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865\""
Apr 21 02:48:02.634877 containerd[1601]: time="2026-04-21T02:48:02.634801889Z" level=info msg="connecting to shim 0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084" address="unix:///run/containerd/s/e25122b2bf5e90a1cadc68a3294073edc6f07171769e10928f7c04a7b270aa5a" protocol=ttrpc version=3
Apr 21 02:48:02.635358 containerd[1601]: time="2026-04-21T02:48:02.635087548Z" level=info msg="connecting to shim 7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865" address="unix:///run/containerd/s/40dd46c385cc8c3959a27a58011cd968b269777894fc810ee415ad9bb0ee6295" protocol=ttrpc version=3
Apr 21 02:48:02.637856 containerd[1601]: time="2026-04-21T02:48:02.637792095Z" level=info msg="CreateContainer within sandbox \"7f5448b097f1ca4f96b5cf7388e7c13ec1e39dcd1b3478b1e430e9ffdd85b27b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712\""
Apr 21 02:48:02.639494 containerd[1601]: time="2026-04-21T02:48:02.639464185Z" level=info msg="StartContainer for \"81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712\""
Apr 21 02:48:02.640323 containerd[1601]: time="2026-04-21T02:48:02.640293083Z" level=info msg="connecting to shim 81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712" address="unix:///run/containerd/s/ac49de4f13307f455da409bcfd51def244f6509d5ee94123ebb455f20aff8470" protocol=ttrpc version=3
Apr 21 02:48:02.652857 systemd[1]: Started cri-containerd-0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084.scope - libcontainer container 0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084.
Apr 21 02:48:02.656641 systemd[1]: Started cri-containerd-7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865.scope - libcontainer container 7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865.
Apr 21 02:48:02.666299 systemd[1]: Started cri-containerd-81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712.scope - libcontainer container 81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712.
Apr 21 02:48:02.726078 containerd[1601]: time="2026-04-21T02:48:02.725977754Z" level=info msg="StartContainer for \"81e2e38a0e32306b9457f58fa9a43ccdcb6c529f9a65856a32b0fb103e659712\" returns successfully"
Apr 21 02:48:02.726238 containerd[1601]: time="2026-04-21T02:48:02.726116738Z" level=info msg="StartContainer for \"7e9e69665d5a4048d60c96062446b3707d16198a468237a38bf7e19309e74865\" returns successfully"
Apr 21 02:48:02.726238 containerd[1601]: time="2026-04-21T02:48:02.726226532Z" level=info msg="StartContainer for \"0221308977cfb6da99427f3a789a554cdf0838963695a9b376d21e522bdef084\" returns successfully"
Apr 21 02:48:02.784546 kubelet[2419]: E0421 02:48:02.784425 2419 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s"
Apr 21 02:48:02.981765 kubelet[2419]: I0421 02:48:02.981384 2419 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 02:48:03.642677 kubelet[2419]: E0421 02:48:03.642590 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:03.643229 kubelet[2419]: E0421 02:48:03.642827 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:03.722636 kubelet[2419]: E0421 02:48:03.704752 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:03.731851 kubelet[2419]: E0421 02:48:03.731688 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:03.745535 kubelet[2419]: E0421 02:48:03.745400 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:03.746253 kubelet[2419]: E0421 02:48:03.746194 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:04.843990 kubelet[2419]: E0421 02:48:04.843935 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:04.844534 kubelet[2419]: E0421 02:48:04.844271 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:04.844978 kubelet[2419]: E0421 02:48:04.844928 2419 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 02:48:04.845068 kubelet[2419]: E0421 02:48:04.845040 2419 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:04.951231 kubelet[2419]: E0421 02:48:04.950784 2419 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 21 02:48:05.021374 kubelet[2419]: I0421 02:48:05.021244 2419 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 02:48:05.081187 kubelet[2419]: I0421 02:48:05.080523 2419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:05.091957 kubelet[2419]: E0421 02:48:05.091841 2419 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:05.091957 kubelet[2419]: I0421 02:48:05.091885 2419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 02:48:05.093599 kubelet[2419]: E0421 02:48:05.093536 2419 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 21 02:48:05.093662 kubelet[2419]: I0421 02:48:05.093631 2419 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:05.095647 kubelet[2419]: E0421 02:48:05.095541 2419 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:05.389345 kubelet[2419]: I0421 02:48:05.389301 2419 apiserver.go:52] "Watching apiserver"
Apr 21 02:48:05.477693 kubelet[2419]: I0421 02:48:05.477641 2419 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 02:48:07.571043 systemd[1]: Reload requested from client PID 2711 ('systemctl') (unit session-9.scope)...
Apr 21 02:48:07.571071 systemd[1]: Reloading...
Apr 21 02:48:07.716252 zram_generator::config[2752]: No configuration found.
Apr 21 02:48:07.947307 systemd[1]: Reloading finished in 375 ms.
Apr 21 02:48:07.975914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 02:48:08.001920 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 02:48:08.002223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 02:48:08.002279 systemd[1]: kubelet.service: Consumed 1.776s CPU time, 127.8M memory peak.
Apr 21 02:48:08.003819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 02:48:08.199461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 02:48:08.208413 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 02:48:08.299664 kubelet[2799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 02:48:08.315015 kubelet[2799]: I0421 02:48:08.312267 2799 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 02:48:08.315015 kubelet[2799]: I0421 02:48:08.312389 2799 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 02:48:08.315015 kubelet[2799]: I0421 02:48:08.312405 2799 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 02:48:08.315015 kubelet[2799]: I0421 02:48:08.312410 2799 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 02:48:08.315015 kubelet[2799]: I0421 02:48:08.312791 2799 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 02:48:08.315015 kubelet[2799]: I0421 02:48:08.314697 2799 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 02:48:08.320920 kubelet[2799]: I0421 02:48:08.320863 2799 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 02:48:08.330576 kubelet[2799]: I0421 02:48:08.330538 2799 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 21 02:48:08.341695 kubelet[2799]: I0421 02:48:08.341514 2799 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 02:48:08.342390 kubelet[2799]: I0421 02:48:08.342197 2799 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 02:48:08.342764 kubelet[2799]: I0421 02:48:08.342300 2799 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 02:48:08.342764 kubelet[2799]: I0421 02:48:08.342707 2799 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 02:48:08.342764 kubelet[2799]: I0421 02:48:08.342739 2799 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 02:48:08.343280 kubelet[2799]: I0421 02:48:08.342791 2799 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 02:48:08.343368 kubelet[2799]: I0421 02:48:08.343337 2799 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 02:48:08.343641 kubelet[2799]: I0421 02:48:08.343608 2799 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 02:48:08.343819 kubelet[2799]: I0421 02:48:08.343648 2799 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 02:48:08.343819 kubelet[2799]: I0421 02:48:08.343701 2799 kubelet.go:394] "Adding apiserver pod source"
Apr 21 02:48:08.343819 kubelet[2799]: I0421 02:48:08.343709 2799 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 02:48:08.349217 kubelet[2799]: I0421 02:48:08.348948 2799 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 21 02:48:08.359049 kubelet[2799]: I0421 02:48:08.358804 2799 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 02:48:08.359323 kubelet[2799]: I0421 02:48:08.359308 2799 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 02:48:08.376161 kubelet[2799]: I0421 02:48:08.374675 2799 server.go:1257] "Started kubelet"
Apr 21 02:48:08.376161 kubelet[2799]: I0421 02:48:08.375075 2799 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 02:48:08.376161 kubelet[2799]: I0421 02:48:08.375268 2799 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 02:48:08.376161 kubelet[2799]: I0421 02:48:08.375353 2799 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 02:48:08.376573 kubelet[2799]: I0421 02:48:08.376563 2799 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 02:48:08.376759 kubelet[2799]: I0421 02:48:08.376713 2799 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 02:48:08.378642 kubelet[2799]: I0421 02:48:08.378392 2799 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 02:48:08.379939 kubelet[2799]: I0421 02:48:08.379862 2799 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 02:48:08.379991 kubelet[2799]: I0421 02:48:08.379981 2799 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 02:48:08.380718 kubelet[2799]: I0421 02:48:08.380669 2799 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 02:48:08.380804 kubelet[2799]: I0421 02:48:08.380783 2799 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 02:48:08.383210 kubelet[2799]: I0421 02:48:08.383052 2799 factory.go:223] Registration of the systemd container factory successfully
Apr 21 02:48:08.383210 kubelet[2799]: I0421 02:48:08.383187 2799 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 02:48:08.395823 kubelet[2799]: I0421 02:48:08.395541 2799 factory.go:223] Registration of the containerd container factory successfully
Apr 21 02:48:08.400337 kubelet[2799]: E0421 02:48:08.400232 2799 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 02:48:08.412667 kubelet[2799]: I0421 02:48:08.412607 2799 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 02:48:08.414162 kubelet[2799]: I0421 02:48:08.413775 2799 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 02:48:08.414162 kubelet[2799]: I0421 02:48:08.413905 2799 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 02:48:08.414162 kubelet[2799]: I0421 02:48:08.413925 2799 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 02:48:08.414917 kubelet[2799]: E0421 02:48:08.414874 2799 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 02:48:08.499656 kubelet[2799]: I0421 02:48:08.499056 2799 cpu_manager.go:225] "Starting" policy="none"
Apr 21 02:48:08.499656 kubelet[2799]: I0421 02:48:08.499212 2799 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 02:48:08.499656 kubelet[2799]: I0421 02:48:08.499293 2799 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 02:48:08.501009 kubelet[2799]: I0421 02:48:08.500810 2799 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 21 02:48:08.501009 kubelet[2799]: I0421 02:48:08.500846 2799 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 21 02:48:08.501009 kubelet[2799]: I0421 02:48:08.500867 2799 policy_none.go:50] "Start"
Apr 21 02:48:08.501009 kubelet[2799]: I0421 02:48:08.500875 2799 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 02:48:08.501009 kubelet[2799]: I0421 02:48:08.500889 2799 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 02:48:08.501547 kubelet[2799]: I0421 02:48:08.501065 2799 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 02:48:08.501547 kubelet[2799]: I0421 02:48:08.501152 2799 policy_none.go:44] "Start"
Apr 21 02:48:08.506283 kubelet[2799]: E0421 02:48:08.506226 2799 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 02:48:08.506475 kubelet[2799]: I0421 02:48:08.506439 2799 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 02:48:08.506498 kubelet[2799]: I0421 02:48:08.506466 2799 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 02:48:08.506888 kubelet[2799]: I0421 02:48:08.506809 2799 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 02:48:08.510889 kubelet[2799]: E0421 02:48:08.510040 2799 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 02:48:08.516329 kubelet[2799]: I0421 02:48:08.516296 2799 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 02:48:08.519163 kubelet[2799]: I0421 02:48:08.517446 2799 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:08.519163 kubelet[2799]: I0421 02:48:08.517536 2799 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:08.563231 sudo[2841]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 21 02:48:08.563539 sudo[2841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 21 02:48:08.621912 kubelet[2799]: I0421 02:48:08.621856 2799 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 02:48:08.635070 kubelet[2799]: I0421 02:48:08.634973 2799 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 21 02:48:08.635377 kubelet[2799]: I0421 02:48:08.635328 2799 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 02:48:08.686835 kubelet[2799]: I0421 02:48:08.686741 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97910e31781b3913a25d1ee86a9ed2b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97910e31781b3913a25d1ee86a9ed2b0\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:08.687322 kubelet[2799]: I0421 02:48:08.686855 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97910e31781b3913a25d1ee86a9ed2b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97910e31781b3913a25d1ee86a9ed2b0\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 02:48:08.687322 kubelet[2799]: I0421 02:48:08.686940 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:08.687322 kubelet[2799]: I0421 02:48:08.686966 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 02:48:08.687322 kubelet[2799]: I0421 02:48:08.687187 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID:
\"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:48:08.687322 kubelet[2799]: I0421 02:48:08.687228 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 02:48:08.687475 kubelet[2799]: I0421 02:48:08.687251 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97910e31781b3913a25d1ee86a9ed2b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97910e31781b3913a25d1ee86a9ed2b0\") " pod="kube-system/kube-apiserver-localhost" Apr 21 02:48:08.687475 kubelet[2799]: I0421 02:48:08.687276 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:48:08.687475 kubelet[2799]: I0421 02:48:08.687354 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 02:48:08.831047 kubelet[2799]: E0421 02:48:08.830504 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:08.831047 kubelet[2799]: E0421 02:48:08.830526 2799 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:08.831767 kubelet[2799]: E0421 02:48:08.831433 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:09.056425 sudo[2841]: pam_unix(sudo:session): session closed for user root Apr 21 02:48:09.348682 kubelet[2799]: I0421 02:48:09.348605 2799 apiserver.go:52] "Watching apiserver" Apr 21 02:48:09.382772 kubelet[2799]: I0421 02:48:09.382536 2799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 02:48:09.451859 kubelet[2799]: I0421 02:48:09.451574 2799 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 02:48:09.451859 kubelet[2799]: I0421 02:48:09.451736 2799 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 02:48:09.452626 kubelet[2799]: E0421 02:48:09.452304 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:09.461880 kubelet[2799]: E0421 02:48:09.461833 2799 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 02:48:09.463292 kubelet[2799]: E0421 02:48:09.463227 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:09.468215 kubelet[2799]: E0421 02:48:09.467197 2799 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 02:48:09.468215 kubelet[2799]: E0421 
02:48:09.467316 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:09.565437 kubelet[2799]: I0421 02:48:09.565031 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.564995566 podStartE2EDuration="1.564995566s" podCreationTimestamp="2026-04-21 02:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:09.564412185 +0000 UTC m=+1.348949841" watchObservedRunningTime="2026-04-21 02:48:09.564995566 +0000 UTC m=+1.349533222" Apr 21 02:48:09.585860 kubelet[2799]: I0421 02:48:09.585318 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.585182031 podStartE2EDuration="1.585182031s" podCreationTimestamp="2026-04-21 02:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:09.58511394 +0000 UTC m=+1.369651592" watchObservedRunningTime="2026-04-21 02:48:09.585182031 +0000 UTC m=+1.369719679" Apr 21 02:48:09.605689 kubelet[2799]: I0421 02:48:09.605232 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.604989444 podStartE2EDuration="1.604989444s" podCreationTimestamp="2026-04-21 02:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:09.603459243 +0000 UTC m=+1.387996893" watchObservedRunningTime="2026-04-21 02:48:09.604989444 +0000 UTC m=+1.389527091" Apr 21 02:48:10.463557 kubelet[2799]: E0421 02:48:10.463389 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:10.463557 kubelet[2799]: E0421 02:48:10.463534 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:11.057363 sudo[1831]: pam_unix(sudo:session): session closed for user root Apr 21 02:48:11.062892 sshd[1830]: Connection closed by 10.0.0.1 port 47364 Apr 21 02:48:11.065775 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Apr 21 02:48:11.072397 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:47364.service: Deactivated successfully. Apr 21 02:48:11.079044 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 02:48:11.079332 systemd[1]: session-9.scope: Consumed 7.452s CPU time, 271.1M memory peak. Apr 21 02:48:11.081299 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Apr 21 02:48:11.086316 systemd-logind[1574]: Removed session 9. 
Apr 21 02:48:11.133869 kubelet[2799]: E0421 02:48:11.133745 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:11.472312 kubelet[2799]: E0421 02:48:11.472155 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:11.472312 kubelet[2799]: E0421 02:48:11.472237 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:12.732595 kubelet[2799]: I0421 02:48:12.732514 2799 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 02:48:12.736493 containerd[1601]: time="2026-04-21T02:48:12.736386555Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 02:48:12.738727 kubelet[2799]: I0421 02:48:12.738565 2799 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 02:48:13.641486 systemd[1]: Created slice kubepods-besteffort-pod1c482a53_40d4_4c75_bedf_e2f1b0a02cff.slice - libcontainer container kubepods-besteffort-pod1c482a53_40d4_4c75_bedf_e2f1b0a02cff.slice. 
Apr 21 02:48:13.729061 kubelet[2799]: I0421 02:48:13.728777 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c482a53-40d4-4c75-bedf-e2f1b0a02cff-xtables-lock\") pod \"kube-proxy-vz54j\" (UID: \"1c482a53-40d4-4c75-bedf-e2f1b0a02cff\") " pod="kube-system/kube-proxy-vz54j" Apr 21 02:48:13.729854 kubelet[2799]: I0421 02:48:13.729231 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c482a53-40d4-4c75-bedf-e2f1b0a02cff-lib-modules\") pod \"kube-proxy-vz54j\" (UID: \"1c482a53-40d4-4c75-bedf-e2f1b0a02cff\") " pod="kube-system/kube-proxy-vz54j" Apr 21 02:48:13.729854 kubelet[2799]: I0421 02:48:13.729377 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c482a53-40d4-4c75-bedf-e2f1b0a02cff-kube-proxy\") pod \"kube-proxy-vz54j\" (UID: \"1c482a53-40d4-4c75-bedf-e2f1b0a02cff\") " pod="kube-system/kube-proxy-vz54j" Apr 21 02:48:13.729854 kubelet[2799]: I0421 02:48:13.729439 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jlnv\" (UniqueName: \"kubernetes.io/projected/1c482a53-40d4-4c75-bedf-e2f1b0a02cff-kube-api-access-2jlnv\") pod \"kube-proxy-vz54j\" (UID: \"1c482a53-40d4-4c75-bedf-e2f1b0a02cff\") " pod="kube-system/kube-proxy-vz54j" Apr 21 02:48:13.750209 update_engine[1580]: I20260421 02:48:13.749270 1580 update_attempter.cc:509] Updating boot flags... 
Apr 21 02:48:13.949364 kubelet[2799]: I0421 02:48:13.944641 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hostproc\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949364 kubelet[2799]: I0421 02:48:13.944668 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-config-path\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949364 kubelet[2799]: I0421 02:48:13.944682 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-kernel\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949364 kubelet[2799]: I0421 02:48:13.944694 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hubble-tls\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949364 kubelet[2799]: I0421 02:48:13.944705 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-run\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949364 kubelet[2799]: I0421 02:48:13.944717 2799 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-cgroup\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949745 kubelet[2799]: I0421 02:48:13.944726 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cni-path\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949745 kubelet[2799]: I0421 02:48:13.944736 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-bpf-maps\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949745 kubelet[2799]: I0421 02:48:13.944746 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-etc-cni-netd\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949745 kubelet[2799]: I0421 02:48:13.944756 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-lib-modules\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949745 kubelet[2799]: I0421 02:48:13.944803 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-xtables-lock\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949745 kubelet[2799]: I0421 02:48:13.944813 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-clustermesh-secrets\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949835 kubelet[2799]: I0421 02:48:13.944823 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-net\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:13.949835 kubelet[2799]: I0421 02:48:13.944835 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnn9\" (UniqueName: \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-kube-api-access-2dnn9\") pod \"cilium-4nf6m\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " pod="kube-system/cilium-4nf6m" Apr 21 02:48:14.002894 kubelet[2799]: E0421 02:48:14.002798 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.033204 containerd[1601]: time="2026-04-21T02:48:14.023817288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vz54j,Uid:1c482a53-40d4-4c75-bedf-e2f1b0a02cff,Namespace:kube-system,Attempt:0,}" Apr 21 02:48:14.046637 kubelet[2799]: I0421 02:48:14.046171 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzm7f\" (UniqueName: 
\"kubernetes.io/projected/1e48e55f-293b-4ca6-84cb-eabcc248637d-kube-api-access-fzm7f\") pod \"cilium-operator-78cf5644cb-kjwq8\" (UID: \"1e48e55f-293b-4ca6-84cb-eabcc248637d\") " pod="kube-system/cilium-operator-78cf5644cb-kjwq8" Apr 21 02:48:14.047579 kubelet[2799]: I0421 02:48:14.047454 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e48e55f-293b-4ca6-84cb-eabcc248637d-cilium-config-path\") pod \"cilium-operator-78cf5644cb-kjwq8\" (UID: \"1e48e55f-293b-4ca6-84cb-eabcc248637d\") " pod="kube-system/cilium-operator-78cf5644cb-kjwq8" Apr 21 02:48:14.055467 systemd[1]: Created slice kubepods-burstable-pod25d6db75_1c26_49ab_a7c7_ec7a8230d88a.slice - libcontainer container kubepods-burstable-pod25d6db75_1c26_49ab_a7c7_ec7a8230d88a.slice. Apr 21 02:48:14.153197 systemd[1]: Created slice kubepods-besteffort-pod1e48e55f_293b_4ca6_84cb_eabcc248637d.slice - libcontainer container kubepods-besteffort-pod1e48e55f_293b_4ca6_84cb_eabcc248637d.slice. Apr 21 02:48:14.188305 containerd[1601]: time="2026-04-21T02:48:14.188163782Z" level=info msg="connecting to shim a7e8120103bc1c15ba4f46f748b8f57e74e144fedc3ce0ec800a7635319130c3" address="unix:///run/containerd/s/c1b468803f5ec5295fab2c6c40087fe13c6d2d9d6f73eb34f585c25c414cf033" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:48:14.269813 kubelet[2799]: E0421 02:48:14.269186 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.363283 systemd[1]: Started cri-containerd-a7e8120103bc1c15ba4f46f748b8f57e74e144fedc3ce0ec800a7635319130c3.scope - libcontainer container a7e8120103bc1c15ba4f46f748b8f57e74e144fedc3ce0ec800a7635319130c3. 
Apr 21 02:48:14.415452 kubelet[2799]: E0421 02:48:14.415372 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.419108 containerd[1601]: time="2026-04-21T02:48:14.419020826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4nf6m,Uid:25d6db75-1c26-49ab-a7c7-ec7a8230d88a,Namespace:kube-system,Attempt:0,}" Apr 21 02:48:14.470956 containerd[1601]: time="2026-04-21T02:48:14.470830248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vz54j,Uid:1c482a53-40d4-4c75-bedf-e2f1b0a02cff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7e8120103bc1c15ba4f46f748b8f57e74e144fedc3ce0ec800a7635319130c3\"" Apr 21 02:48:14.475998 kubelet[2799]: E0421 02:48:14.475911 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.476476 kubelet[2799]: E0421 02:48:14.476422 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.479181 containerd[1601]: time="2026-04-21T02:48:14.478648670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-kjwq8,Uid:1e48e55f-293b-4ca6-84cb-eabcc248637d,Namespace:kube-system,Attempt:0,}" Apr 21 02:48:14.503272 containerd[1601]: time="2026-04-21T02:48:14.501531624Z" level=info msg="CreateContainer within sandbox \"a7e8120103bc1c15ba4f46f748b8f57e74e144fedc3ce0ec800a7635319130c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 02:48:14.506734 containerd[1601]: time="2026-04-21T02:48:14.506586470Z" level=info msg="connecting to shim bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0" 
address="unix:///run/containerd/s/de710411cfb8e5a2c307736e29d767293959f6dd68bf5ea6d8c129b8bff13382" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:48:14.570477 containerd[1601]: time="2026-04-21T02:48:14.569859337Z" level=info msg="connecting to shim 7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4" address="unix:///run/containerd/s/60e624f3b715ee0a81e7d1046b5a97c8e8957cf7df214786e054af7ae65d3de5" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:48:14.598245 containerd[1601]: time="2026-04-21T02:48:14.597875175Z" level=info msg="Container b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:14.611453 systemd[1]: Started cri-containerd-bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0.scope - libcontainer container bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0. Apr 21 02:48:14.626407 containerd[1601]: time="2026-04-21T02:48:14.626368741Z" level=info msg="CreateContainer within sandbox \"a7e8120103bc1c15ba4f46f748b8f57e74e144fedc3ce0ec800a7635319130c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe\"" Apr 21 02:48:14.633708 containerd[1601]: time="2026-04-21T02:48:14.633545558Z" level=info msg="StartContainer for \"b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe\"" Apr 21 02:48:14.654247 containerd[1601]: time="2026-04-21T02:48:14.654025819Z" level=info msg="connecting to shim b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe" address="unix:///run/containerd/s/c1b468803f5ec5295fab2c6c40087fe13c6d2d9d6f73eb34f585c25c414cf033" protocol=ttrpc version=3 Apr 21 02:48:14.658019 systemd[1]: Started cri-containerd-7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4.scope - libcontainer container 7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4. 
Apr 21 02:48:14.711268 systemd[1]: Started cri-containerd-b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe.scope - libcontainer container b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe. Apr 21 02:48:14.719743 containerd[1601]: time="2026-04-21T02:48:14.719614091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4nf6m,Uid:25d6db75-1c26-49ab-a7c7-ec7a8230d88a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\"" Apr 21 02:48:14.720553 kubelet[2799]: E0421 02:48:14.720457 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.734202 containerd[1601]: time="2026-04-21T02:48:14.733601170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 21 02:48:14.761490 containerd[1601]: time="2026-04-21T02:48:14.761233478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-kjwq8,Uid:1e48e55f-293b-4ca6-84cb-eabcc248637d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\"" Apr 21 02:48:14.766912 kubelet[2799]: E0421 02:48:14.766849 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:14.826874 containerd[1601]: time="2026-04-21T02:48:14.826641217Z" level=info msg="StartContainer for \"b2eb71a514ce3f500af243dd77122c89ac5dce8d31d3cf45c4381b6e2f37fdbe\" returns successfully" Apr 21 02:48:15.526826 kubelet[2799]: E0421 02:48:15.526719 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Apr 21 02:48:15.554819 kubelet[2799]: I0421 02:48:15.554730 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-vz54j" podStartSLOduration=2.554628278 podStartE2EDuration="2.554628278s" podCreationTimestamp="2026-04-21 02:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:15.554579658 +0000 UTC m=+7.339117307" watchObservedRunningTime="2026-04-21 02:48:15.554628278 +0000 UTC m=+7.339165933" Apr 21 02:48:16.535533 kubelet[2799]: E0421 02:48:16.535489 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:17.621560 kubelet[2799]: E0421 02:48:17.621445 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:19.897856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511746297.mount: Deactivated successfully. 
Apr 21 02:48:21.147251 kubelet[2799]: E0421 02:48:21.146705 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:22.078505 containerd[1601]: time="2026-04-21T02:48:22.078356326Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:48:22.078505 containerd[1601]: time="2026-04-21T02:48:22.078471304Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 21 02:48:22.081014 containerd[1601]: time="2026-04-21T02:48:22.080950879Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:48:22.083775 containerd[1601]: time="2026-04-21T02:48:22.083720091Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.350089236s" Apr 21 02:48:22.083775 containerd[1601]: time="2026-04-21T02:48:22.083767031Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 21 02:48:22.105600 containerd[1601]: time="2026-04-21T02:48:22.104668892Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 21 02:48:22.125527 containerd[1601]: time="2026-04-21T02:48:22.125192700Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 02:48:22.146535 containerd[1601]: time="2026-04-21T02:48:22.146357427Z" level=info msg="Container 8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:22.170807 containerd[1601]: time="2026-04-21T02:48:22.170232917Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\"" Apr 21 02:48:22.181099 containerd[1601]: time="2026-04-21T02:48:22.180952337Z" level=info msg="StartContainer for \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\"" Apr 21 02:48:22.202249 containerd[1601]: time="2026-04-21T02:48:22.201617047Z" level=info msg="connecting to shim 8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c" address="unix:///run/containerd/s/de710411cfb8e5a2c307736e29d767293959f6dd68bf5ea6d8c129b8bff13382" protocol=ttrpc version=3 Apr 21 02:48:22.328063 systemd[1]: Started cri-containerd-8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c.scope - libcontainer container 8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c. Apr 21 02:48:22.471383 systemd[1]: cri-containerd-8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c.scope: Deactivated successfully. Apr 21 02:48:22.472330 systemd[1]: cri-containerd-8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c.scope: Consumed 65ms CPU time, 6.6M memory peak, 4K read from disk, 2M written to disk. 
Apr 21 02:48:22.472674 containerd[1601]: time="2026-04-21T02:48:22.472625822Z" level=info msg="received container exit event container_id:\"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\" id:\"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\" pid:3250 exited_at:{seconds:1776739702 nanos:471420704}" Apr 21 02:48:22.473027 containerd[1601]: time="2026-04-21T02:48:22.472977910Z" level=info msg="StartContainer for \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\" returns successfully" Apr 21 02:48:22.565965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c-rootfs.mount: Deactivated successfully. Apr 21 02:48:22.626386 kubelet[2799]: E0421 02:48:22.626338 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:23.459966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538174709.mount: Deactivated successfully. Apr 21 02:48:23.639330 kubelet[2799]: E0421 02:48:23.639224 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:23.655967 containerd[1601]: time="2026-04-21T02:48:23.655731241Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 02:48:23.673192 containerd[1601]: time="2026-04-21T02:48:23.671891872Z" level=info msg="Container ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:23.676622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022608009.mount: Deactivated successfully. 
Apr 21 02:48:23.696545 containerd[1601]: time="2026-04-21T02:48:23.696424405Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\"" Apr 21 02:48:23.697531 containerd[1601]: time="2026-04-21T02:48:23.697469259Z" level=info msg="StartContainer for \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\"" Apr 21 02:48:23.698906 containerd[1601]: time="2026-04-21T02:48:23.698843656Z" level=info msg="connecting to shim ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9" address="unix:///run/containerd/s/de710411cfb8e5a2c307736e29d767293959f6dd68bf5ea6d8c129b8bff13382" protocol=ttrpc version=3 Apr 21 02:48:23.729320 systemd[1]: Started cri-containerd-ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9.scope - libcontainer container ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9. Apr 21 02:48:23.802217 containerd[1601]: time="2026-04-21T02:48:23.801861255Z" level=info msg="StartContainer for \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\" returns successfully" Apr 21 02:48:23.809012 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 02:48:23.809465 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:48:23.809844 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:48:23.811560 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 02:48:23.824501 systemd[1]: cri-containerd-ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9.scope: Deactivated successfully. 
Apr 21 02:48:23.833049 containerd[1601]: time="2026-04-21T02:48:23.830937446Z" level=info msg="received container exit event container_id:\"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\" id:\"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\" pid:3305 exited_at:{seconds:1776739703 nanos:824374375}" Apr 21 02:48:23.891772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 02:48:24.196342 containerd[1601]: time="2026-04-21T02:48:24.195807700Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:48:24.197983 containerd[1601]: time="2026-04-21T02:48:24.197802186Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 21 02:48:24.203506 containerd[1601]: time="2026-04-21T02:48:24.203394063Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 02:48:24.205691 containerd[1601]: time="2026-04-21T02:48:24.205594634Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.100839438s" Apr 21 02:48:24.205744 containerd[1601]: time="2026-04-21T02:48:24.205697102Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 21 02:48:24.213230 containerd[1601]: time="2026-04-21T02:48:24.213039033Z" level=info msg="CreateContainer within sandbox \"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 21 02:48:24.230802 containerd[1601]: time="2026-04-21T02:48:24.230711454Z" level=info msg="Container 327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:24.245329 containerd[1601]: time="2026-04-21T02:48:24.245264401Z" level=info msg="CreateContainer within sandbox \"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\"" Apr 21 02:48:24.248222 containerd[1601]: time="2026-04-21T02:48:24.248196064Z" level=info msg="StartContainer for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\"" Apr 21 02:48:24.251440 containerd[1601]: time="2026-04-21T02:48:24.251310867Z" level=info msg="connecting to shim 327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747" address="unix:///run/containerd/s/60e624f3b715ee0a81e7d1046b5a97c8e8957cf7df214786e054af7ae65d3de5" protocol=ttrpc version=3 Apr 21 02:48:24.278394 kubelet[2799]: E0421 02:48:24.278274 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:24.285290 systemd[1]: Started cri-containerd-327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747.scope - libcontainer container 327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747. 
Apr 21 02:48:24.352389 containerd[1601]: time="2026-04-21T02:48:24.352043949Z" level=info msg="StartContainer for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" returns successfully" Apr 21 02:48:24.452947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9-rootfs.mount: Deactivated successfully. Apr 21 02:48:24.682903 kubelet[2799]: E0421 02:48:24.682652 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:24.732186 kubelet[2799]: E0421 02:48:24.731886 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:24.746238 containerd[1601]: time="2026-04-21T02:48:24.745526164Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 02:48:24.789680 kubelet[2799]: I0421 02:48:24.789207 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-kjwq8" podStartSLOduration=2.34850731 podStartE2EDuration="11.782859568s" podCreationTimestamp="2026-04-21 02:48:13 +0000 UTC" firstStartedPulling="2026-04-21 02:48:14.773943001 +0000 UTC m=+6.558480644" lastFinishedPulling="2026-04-21 02:48:24.208295259 +0000 UTC m=+15.992832902" observedRunningTime="2026-04-21 02:48:24.718673942 +0000 UTC m=+16.503211594" watchObservedRunningTime="2026-04-21 02:48:24.782859568 +0000 UTC m=+16.567397218" Apr 21 02:48:24.803675 containerd[1601]: time="2026-04-21T02:48:24.802363945Z" level=info msg="Container 9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:24.828243 
containerd[1601]: time="2026-04-21T02:48:24.827208688Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\"" Apr 21 02:48:24.833699 containerd[1601]: time="2026-04-21T02:48:24.833261384Z" level=info msg="StartContainer for \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\"" Apr 21 02:48:24.840947 containerd[1601]: time="2026-04-21T02:48:24.840910777Z" level=info msg="connecting to shim 9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089" address="unix:///run/containerd/s/de710411cfb8e5a2c307736e29d767293959f6dd68bf5ea6d8c129b8bff13382" protocol=ttrpc version=3 Apr 21 02:48:24.920525 systemd[1]: Started cri-containerd-9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089.scope - libcontainer container 9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089. Apr 21 02:48:25.149610 containerd[1601]: time="2026-04-21T02:48:25.149584161Z" level=info msg="StartContainer for \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\" returns successfully" Apr 21 02:48:25.155279 systemd[1]: cri-containerd-9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089.scope: Deactivated successfully. Apr 21 02:48:25.158572 containerd[1601]: time="2026-04-21T02:48:25.158546614Z" level=info msg="received container exit event container_id:\"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\" id:\"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\" pid:3396 exited_at:{seconds:1776739705 nanos:156711731}" Apr 21 02:48:25.453986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089-rootfs.mount: Deactivated successfully. 
Apr 21 02:48:25.736743 kubelet[2799]: E0421 02:48:25.736020 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:25.736743 kubelet[2799]: E0421 02:48:25.736292 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:25.755949 containerd[1601]: time="2026-04-21T02:48:25.755200144Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 02:48:25.799766 containerd[1601]: time="2026-04-21T02:48:25.799626894Z" level=info msg="Container 57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:25.810460 containerd[1601]: time="2026-04-21T02:48:25.810387398Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\"" Apr 21 02:48:25.811286 containerd[1601]: time="2026-04-21T02:48:25.811251348Z" level=info msg="StartContainer for \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\"" Apr 21 02:48:25.812240 containerd[1601]: time="2026-04-21T02:48:25.812221491Z" level=info msg="connecting to shim 57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846" address="unix:///run/containerd/s/de710411cfb8e5a2c307736e29d767293959f6dd68bf5ea6d8c129b8bff13382" protocol=ttrpc version=3 Apr 21 02:48:25.838858 systemd[1]: Started cri-containerd-57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846.scope - libcontainer container 
57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846. Apr 21 02:48:25.882038 systemd[1]: cri-containerd-57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846.scope: Deactivated successfully. Apr 21 02:48:25.883489 containerd[1601]: time="2026-04-21T02:48:25.883235121Z" level=info msg="received container exit event container_id:\"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\" id:\"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\" pid:3436 exited_at:{seconds:1776739705 nanos:882452438}" Apr 21 02:48:25.895987 containerd[1601]: time="2026-04-21T02:48:25.895859525Z" level=info msg="StartContainer for \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\" returns successfully" Apr 21 02:48:25.928819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846-rootfs.mount: Deactivated successfully. Apr 21 02:48:26.773867 kubelet[2799]: E0421 02:48:26.773805 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:26.808785 containerd[1601]: time="2026-04-21T02:48:26.808515406Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 02:48:26.854653 containerd[1601]: time="2026-04-21T02:48:26.854411092Z" level=info msg="Container 427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:26.868439 containerd[1601]: time="2026-04-21T02:48:26.868371787Z" level=info msg="CreateContainer within sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\"" 
Apr 21 02:48:26.869945 containerd[1601]: time="2026-04-21T02:48:26.869810555Z" level=info msg="StartContainer for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\"" Apr 21 02:48:26.871376 containerd[1601]: time="2026-04-21T02:48:26.871077926Z" level=info msg="connecting to shim 427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69" address="unix:///run/containerd/s/de710411cfb8e5a2c307736e29d767293959f6dd68bf5ea6d8c129b8bff13382" protocol=ttrpc version=3 Apr 21 02:48:26.932624 systemd[1]: Started cri-containerd-427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69.scope - libcontainer container 427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69. Apr 21 02:48:27.087754 containerd[1601]: time="2026-04-21T02:48:27.087509107Z" level=info msg="StartContainer for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" returns successfully" Apr 21 02:48:27.315508 kubelet[2799]: I0421 02:48:27.315461 2799 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 02:48:27.499172 systemd[1]: Created slice kubepods-burstable-pod53662b21_3e07_40f7_9de5_f0b98824fe66.slice - libcontainer container kubepods-burstable-pod53662b21_3e07_40f7_9de5_f0b98824fe66.slice. Apr 21 02:48:27.509453 systemd[1]: Created slice kubepods-burstable-pod1fa9b352_d07e_4ad7_a3dd_2123c86decc4.slice - libcontainer container kubepods-burstable-pod1fa9b352_d07e_4ad7_a3dd_2123c86decc4.slice. 
Apr 21 02:48:27.590519 kubelet[2799]: I0421 02:48:27.590299 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2h85\" (UniqueName: \"kubernetes.io/projected/53662b21-3e07-40f7-9de5-f0b98824fe66-kube-api-access-d2h85\") pod \"coredns-7d764666f9-sffkp\" (UID: \"53662b21-3e07-40f7-9de5-f0b98824fe66\") " pod="kube-system/coredns-7d764666f9-sffkp" Apr 21 02:48:27.590760 kubelet[2799]: I0421 02:48:27.590472 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sskz4\" (UniqueName: \"kubernetes.io/projected/1fa9b352-d07e-4ad7-a3dd-2123c86decc4-kube-api-access-sskz4\") pod \"coredns-7d764666f9-92ncp\" (UID: \"1fa9b352-d07e-4ad7-a3dd-2123c86decc4\") " pod="kube-system/coredns-7d764666f9-92ncp" Apr 21 02:48:27.590782 kubelet[2799]: I0421 02:48:27.590767 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53662b21-3e07-40f7-9de5-f0b98824fe66-config-volume\") pod \"coredns-7d764666f9-sffkp\" (UID: \"53662b21-3e07-40f7-9de5-f0b98824fe66\") " pod="kube-system/coredns-7d764666f9-sffkp" Apr 21 02:48:27.590800 kubelet[2799]: I0421 02:48:27.590791 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fa9b352-d07e-4ad7-a3dd-2123c86decc4-config-volume\") pod \"coredns-7d764666f9-92ncp\" (UID: \"1fa9b352-d07e-4ad7-a3dd-2123c86decc4\") " pod="kube-system/coredns-7d764666f9-92ncp" Apr 21 02:48:27.650256 kubelet[2799]: E0421 02:48:27.649848 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:27.837034 kubelet[2799]: E0421 02:48:27.836878 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:27.848549 kubelet[2799]: E0421 02:48:27.848450 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:27.865235 containerd[1601]: time="2026-04-21T02:48:27.864829249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-sffkp,Uid:53662b21-3e07-40f7-9de5-f0b98824fe66,Namespace:kube-system,Attempt:0,}" Apr 21 02:48:27.866325 containerd[1601]: time="2026-04-21T02:48:27.864979418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-92ncp,Uid:1fa9b352-d07e-4ad7-a3dd-2123c86decc4,Namespace:kube-system,Attempt:0,}" Apr 21 02:48:27.889453 kubelet[2799]: E0421 02:48:27.889275 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:28.907827 kubelet[2799]: E0421 02:48:28.907728 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:29.490493 systemd-networkd[1498]: cilium_host: Link UP Apr 21 02:48:29.491038 systemd-networkd[1498]: cilium_net: Link UP Apr 21 02:48:29.491292 systemd-networkd[1498]: cilium_net: Gained carrier Apr 21 02:48:29.491382 systemd-networkd[1498]: cilium_host: Gained carrier Apr 21 02:48:29.680285 systemd-networkd[1498]: cilium_vxlan: Link UP Apr 21 02:48:29.680826 systemd-networkd[1498]: cilium_vxlan: Gained carrier Apr 21 02:48:29.742398 systemd-networkd[1498]: cilium_net: Gained IPv6LL Apr 21 02:48:29.914040 kubelet[2799]: E0421 02:48:29.913881 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:30.232199 kernel: NET: Registered PF_ALG protocol family Apr 21 02:48:30.448287 systemd-networkd[1498]: cilium_host: Gained IPv6LL Apr 21 02:48:30.961548 systemd-networkd[1498]: cilium_vxlan: Gained IPv6LL Apr 21 02:48:31.507578 systemd-networkd[1498]: lxc_health: Link UP Apr 21 02:48:31.519718 systemd-networkd[1498]: lxc_health: Gained carrier Apr 21 02:48:32.009256 kernel: eth0: renamed from tmp9c245 Apr 21 02:48:32.021992 kernel: eth0: renamed from tmp5bf18 Apr 21 02:48:32.028222 systemd-networkd[1498]: lxcf8f137afd79b: Link UP Apr 21 02:48:32.029618 systemd-networkd[1498]: lxcf8f137afd79b: Gained carrier Apr 21 02:48:32.029836 systemd-networkd[1498]: lxc8fff9c50d253: Link UP Apr 21 02:48:32.040343 systemd-networkd[1498]: lxc8fff9c50d253: Gained carrier Apr 21 02:48:32.432191 kubelet[2799]: E0421 02:48:32.431839 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:32.547303 kubelet[2799]: I0421 02:48:32.546708 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-4nf6m" podStartSLOduration=7.4955966929999995 podStartE2EDuration="19.546570404s" podCreationTimestamp="2026-04-21 02:48:13 +0000 UTC" firstStartedPulling="2026-04-21 02:48:14.728731292 +0000 UTC m=+6.513268936" lastFinishedPulling="2026-04-21 02:48:26.779705002 +0000 UTC m=+18.564242647" observedRunningTime="2026-04-21 02:48:27.954955574 +0000 UTC m=+19.739493227" watchObservedRunningTime="2026-04-21 02:48:32.546570404 +0000 UTC m=+24.331108048" Apr 21 02:48:33.263631 systemd-networkd[1498]: lxc_health: Gained IPv6LL Apr 21 02:48:33.775530 systemd-networkd[1498]: lxc8fff9c50d253: Gained IPv6LL Apr 21 02:48:33.838425 systemd-networkd[1498]: lxcf8f137afd79b: Gained IPv6LL Apr 21 02:48:37.292575 containerd[1601]: time="2026-04-21T02:48:37.292472061Z" level=info msg="connecting to 
shim 9c245838999ec19802397723a076265a7e956d8bf151aa74fbdc423821f2eb62" address="unix:///run/containerd/s/8000411712203ffe823fdf08c97e77f730e7f80131c93072a97d90b2c980a2f7" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:48:37.294600 containerd[1601]: time="2026-04-21T02:48:37.292968581Z" level=info msg="connecting to shim 5bf1823e36efcbc46f4fd0ca61458c93614eada3faabac021679821cc99d17c1" address="unix:///run/containerd/s/e6a14e9b591b34db4bd1e38450758c5d088223d034e3f78e72cbcf7c33b86be3" namespace=k8s.io protocol=ttrpc version=3 Apr 21 02:48:37.320319 systemd[1]: Started cri-containerd-9c245838999ec19802397723a076265a7e956d8bf151aa74fbdc423821f2eb62.scope - libcontainer container 9c245838999ec19802397723a076265a7e956d8bf151aa74fbdc423821f2eb62. Apr 21 02:48:37.323271 systemd[1]: Started cri-containerd-5bf1823e36efcbc46f4fd0ca61458c93614eada3faabac021679821cc99d17c1.scope - libcontainer container 5bf1823e36efcbc46f4fd0ca61458c93614eada3faabac021679821cc99d17c1. Apr 21 02:48:37.338983 systemd-resolved[1420]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 02:48:37.342312 systemd-resolved[1420]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 02:48:37.407421 containerd[1601]: time="2026-04-21T02:48:37.407368130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-sffkp,Uid:53662b21-3e07-40f7-9de5-f0b98824fe66,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c245838999ec19802397723a076265a7e956d8bf151aa74fbdc423821f2eb62\"" Apr 21 02:48:37.407823 containerd[1601]: time="2026-04-21T02:48:37.407598697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-92ncp,Uid:1fa9b352-d07e-4ad7-a3dd-2123c86decc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bf1823e36efcbc46f4fd0ca61458c93614eada3faabac021679821cc99d17c1\"" Apr 21 02:48:37.410062 kubelet[2799]: E0421 02:48:37.409818 2799 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:37.410062 kubelet[2799]: E0421 02:48:37.409916 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:37.421869 containerd[1601]: time="2026-04-21T02:48:37.421679848Z" level=info msg="CreateContainer within sandbox \"9c245838999ec19802397723a076265a7e956d8bf151aa74fbdc423821f2eb62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 02:48:37.421869 containerd[1601]: time="2026-04-21T02:48:37.421685900Z" level=info msg="CreateContainer within sandbox \"5bf1823e36efcbc46f4fd0ca61458c93614eada3faabac021679821cc99d17c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 02:48:37.432542 containerd[1601]: time="2026-04-21T02:48:37.432501209Z" level=info msg="Container 60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:37.435698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957080622.mount: Deactivated successfully. 
Apr 21 02:48:37.455265 containerd[1601]: time="2026-04-21T02:48:37.455084704Z" level=info msg="CreateContainer within sandbox \"9c245838999ec19802397723a076265a7e956d8bf151aa74fbdc423821f2eb62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d\"" Apr 21 02:48:37.456756 containerd[1601]: time="2026-04-21T02:48:37.456616948Z" level=info msg="Container efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea: CDI devices from CRI Config.CDIDevices: []" Apr 21 02:48:37.456916 containerd[1601]: time="2026-04-21T02:48:37.456897922Z" level=info msg="StartContainer for \"60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d\"" Apr 21 02:48:37.458379 containerd[1601]: time="2026-04-21T02:48:37.458337116Z" level=info msg="connecting to shim 60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d" address="unix:///run/containerd/s/8000411712203ffe823fdf08c97e77f730e7f80131c93072a97d90b2c980a2f7" protocol=ttrpc version=3 Apr 21 02:48:37.467783 containerd[1601]: time="2026-04-21T02:48:37.467649077Z" level=info msg="CreateContainer within sandbox \"5bf1823e36efcbc46f4fd0ca61458c93614eada3faabac021679821cc99d17c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea\"" Apr 21 02:48:37.470031 containerd[1601]: time="2026-04-21T02:48:37.470013672Z" level=info msg="StartContainer for \"efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea\"" Apr 21 02:48:37.471097 containerd[1601]: time="2026-04-21T02:48:37.471079171Z" level=info msg="connecting to shim efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea" address="unix:///run/containerd/s/e6a14e9b591b34db4bd1e38450758c5d088223d034e3f78e72cbcf7c33b86be3" protocol=ttrpc version=3 Apr 21 02:48:37.483808 systemd[1]: Started cri-containerd-60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d.scope - 
libcontainer container 60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d. Apr 21 02:48:37.495324 systemd[1]: Started cri-containerd-efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea.scope - libcontainer container efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea. Apr 21 02:48:37.548034 containerd[1601]: time="2026-04-21T02:48:37.547686649Z" level=info msg="StartContainer for \"60d76142291a6bc29bf630b8ab6e19604601d499724953378b766600cd37c92d\" returns successfully" Apr 21 02:48:37.560960 containerd[1601]: time="2026-04-21T02:48:37.560922236Z" level=info msg="StartContainer for \"efb8189edb3b9c9e315375723b5a446d9d41837b1937ed20c58749bec9f33dea\" returns successfully" Apr 21 02:48:38.025807 kubelet[2799]: E0421 02:48:38.025245 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:38.026874 kubelet[2799]: E0421 02:48:38.026782 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:38.051937 kubelet[2799]: I0421 02:48:38.051625 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-sffkp" podStartSLOduration=25.051587076 podStartE2EDuration="25.051587076s" podCreationTimestamp="2026-04-21 02:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:38.05084717 +0000 UTC m=+29.835384818" watchObservedRunningTime="2026-04-21 02:48:38.051587076 +0000 UTC m=+29.836124732" Apr 21 02:48:38.087769 kubelet[2799]: I0421 02:48:38.087531 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-92ncp" podStartSLOduration=25.087445361 podStartE2EDuration="25.087445361s" 
podCreationTimestamp="2026-04-21 02:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:48:38.08500188 +0000 UTC m=+29.869539535" watchObservedRunningTime="2026-04-21 02:48:38.087445361 +0000 UTC m=+29.871983015" Apr 21 02:48:39.032699 kubelet[2799]: E0421 02:48:39.032397 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:40.045185 kubelet[2799]: E0421 02:48:40.044911 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:41.017531 kubelet[2799]: I0421 02:48:41.016408 2799 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 02:48:41.022368 kubelet[2799]: E0421 02:48:41.021416 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:41.050644 kubelet[2799]: E0421 02:48:41.050347 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 02:48:41.639117 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:40242.service - OpenSSH per-connection server daemon (10.0.0.1:40242). Apr 21 02:48:41.718565 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 40242 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:48:41.719504 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:48:41.728816 systemd-logind[1574]: New session 10 of user core. Apr 21 02:48:41.741364 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 21 02:48:41.832276 sshd[4152]: Connection closed by 10.0.0.1 port 40242
Apr 21 02:48:41.832536 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:41.835884 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:40242.service: Deactivated successfully.
Apr 21 02:48:41.837480 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 02:48:41.841741 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit.
Apr 21 02:48:41.843287 systemd-logind[1574]: Removed session 10.
Apr 21 02:48:46.856573 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:59778.service - OpenSSH per-connection server daemon (10.0.0.1:59778).
Apr 21 02:48:46.929730 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 59778 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:48:46.931882 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:48:46.939608 systemd-logind[1574]: New session 11 of user core.
Apr 21 02:48:46.949357 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 02:48:47.037424 sshd[4173]: Connection closed by 10.0.0.1 port 59778
Apr 21 02:48:47.037926 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:47.041427 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:59778.service: Deactivated successfully.
Apr 21 02:48:47.042987 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 02:48:47.043917 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit.
Apr 21 02:48:47.045558 systemd-logind[1574]: Removed session 11.
Apr 21 02:48:48.031743 kubelet[2799]: E0421 02:48:48.031453 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:48.132855 kubelet[2799]: E0421 02:48:48.132763 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:48:52.057534 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:59788.service - OpenSSH per-connection server daemon (10.0.0.1:59788).
Apr 21 02:48:52.115806 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 59788 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:48:52.117310 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:48:52.124684 systemd-logind[1574]: New session 12 of user core.
Apr 21 02:48:52.131342 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 02:48:52.238970 sshd[4194]: Connection closed by 10.0.0.1 port 59788
Apr 21 02:48:52.239659 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:52.243475 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:59788.service: Deactivated successfully.
Apr 21 02:48:52.245496 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 02:48:52.246368 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit.
Apr 21 02:48:52.247561 systemd-logind[1574]: Removed session 12.
Apr 21 02:48:57.266256 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:33086.service - OpenSSH per-connection server daemon (10.0.0.1:33086).
Apr 21 02:48:57.333874 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 33086 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:48:57.335036 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:48:57.341464 systemd-logind[1574]: New session 13 of user core.
Apr 21 02:48:57.353726 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 02:48:57.450849 sshd[4212]: Connection closed by 10.0.0.1 port 33086
Apr 21 02:48:57.452040 sshd-session[4209]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:57.461082 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:33086.service: Deactivated successfully.
Apr 21 02:48:57.462661 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 02:48:57.463910 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit.
Apr 21 02:48:57.465609 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:33092.service - OpenSSH per-connection server daemon (10.0.0.1:33092).
Apr 21 02:48:57.467260 systemd-logind[1574]: Removed session 13.
Apr 21 02:48:57.520072 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:48:57.523274 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:48:57.536632 systemd-logind[1574]: New session 14 of user core.
Apr 21 02:48:57.549503 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 02:48:57.705296 sshd[4230]: Connection closed by 10.0.0.1 port 33092
Apr 21 02:48:57.706583 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:57.730884 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:33092.service: Deactivated successfully.
Apr 21 02:48:57.736298 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 02:48:57.737662 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit.
Apr 21 02:48:57.753568 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:33108.service - OpenSSH per-connection server daemon (10.0.0.1:33108).
Apr 21 02:48:57.758925 systemd-logind[1574]: Removed session 14.
Apr 21 02:48:57.820937 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 33108 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:48:57.823339 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:48:57.830313 systemd-logind[1574]: New session 15 of user core.
Apr 21 02:48:57.835382 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 02:48:57.926526 sshd[4244]: Connection closed by 10.0.0.1 port 33108
Apr 21 02:48:57.927092 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Apr 21 02:48:57.930734 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:33108.service: Deactivated successfully.
Apr 21 02:48:57.932512 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 02:48:57.933486 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit.
Apr 21 02:48:57.934849 systemd-logind[1574]: Removed session 15.
Apr 21 02:49:02.951967 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:33110.service - OpenSSH per-connection server daemon (10.0.0.1:33110).
Apr 21 02:49:03.058816 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 33110 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:03.061261 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:03.076912 systemd-logind[1574]: New session 16 of user core.
Apr 21 02:49:03.088521 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 02:49:03.191718 sshd[4262]: Connection closed by 10.0.0.1 port 33110
Apr 21 02:49:03.192282 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:03.203539 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:33110.service: Deactivated successfully.
Apr 21 02:49:03.211024 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 02:49:03.213621 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit.
Apr 21 02:49:03.214872 systemd-logind[1574]: Removed session 16.
Apr 21 02:49:08.213657 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:41026.service - OpenSSH per-connection server daemon (10.0.0.1:41026).
Apr 21 02:49:08.282468 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 41026 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:08.284209 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:08.290634 systemd-logind[1574]: New session 17 of user core.
Apr 21 02:49:08.295323 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 02:49:08.400530 sshd[4282]: Connection closed by 10.0.0.1 port 41026
Apr 21 02:49:08.400924 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:08.416374 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:41026.service: Deactivated successfully.
Apr 21 02:49:08.419992 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 02:49:08.421392 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
Apr 21 02:49:08.423869 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040).
Apr 21 02:49:08.425818 systemd-logind[1574]: Removed session 17.
Apr 21 02:49:08.489361 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:08.491270 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:08.509698 systemd-logind[1574]: New session 18 of user core.
Apr 21 02:49:08.517346 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 02:49:08.742418 sshd[4301]: Connection closed by 10.0.0.1 port 41040
Apr 21 02:49:08.743818 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:08.759712 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:41040.service: Deactivated successfully.
Apr 21 02:49:08.765326 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 02:49:08.766785 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit.
Apr 21 02:49:08.770569 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:41042.service - OpenSSH per-connection server daemon (10.0.0.1:41042).
Apr 21 02:49:08.771916 systemd-logind[1574]: Removed session 18.
Apr 21 02:49:08.839371 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 41042 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:08.840904 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:08.847446 systemd-logind[1574]: New session 19 of user core.
Apr 21 02:49:08.857435 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 02:49:09.553410 sshd[4316]: Connection closed by 10.0.0.1 port 41042
Apr 21 02:49:09.555477 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:09.563850 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:41042.service: Deactivated successfully.
Apr 21 02:49:09.566087 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 02:49:09.575880 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Apr 21 02:49:09.584425 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:41054.service - OpenSSH per-connection server daemon (10.0.0.1:41054).
Apr 21 02:49:09.586575 systemd-logind[1574]: Removed session 19.
Apr 21 02:49:09.649033 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 41054 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:09.651279 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:09.658813 systemd-logind[1574]: New session 20 of user core.
Apr 21 02:49:09.678451 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 02:49:10.081894 sshd[4337]: Connection closed by 10.0.0.1 port 41054
Apr 21 02:49:10.081886 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:10.094701 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:41054.service: Deactivated successfully.
Apr 21 02:49:10.096864 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 02:49:10.098389 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Apr 21 02:49:10.100962 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:41066.service - OpenSSH per-connection server daemon (10.0.0.1:41066).
Apr 21 02:49:10.103210 systemd-logind[1574]: Removed session 20.
Apr 21 02:49:10.194102 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 41066 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:10.195413 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:10.205000 systemd-logind[1574]: New session 21 of user core.
Apr 21 02:49:10.216340 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 02:49:10.298935 sshd[4352]: Connection closed by 10.0.0.1 port 41066
Apr 21 02:49:10.301002 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:10.309781 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:41066.service: Deactivated successfully.
Apr 21 02:49:10.317641 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 02:49:10.318914 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Apr 21 02:49:10.320572 systemd-logind[1574]: Removed session 21.
Apr 21 02:49:15.322726 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:48278.service - OpenSSH per-connection server daemon (10.0.0.1:48278).
Apr 21 02:49:15.382654 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 48278 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:15.383748 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:15.389294 systemd-logind[1574]: New session 22 of user core.
Apr 21 02:49:15.395386 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 02:49:15.482039 sshd[4373]: Connection closed by 10.0.0.1 port 48278
Apr 21 02:49:15.482508 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:15.486198 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:48278.service: Deactivated successfully.
Apr 21 02:49:15.487777 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 02:49:15.488930 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Apr 21 02:49:15.490477 systemd-logind[1574]: Removed session 22.
Apr 21 02:49:20.511038 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:48284.service - OpenSSH per-connection server daemon (10.0.0.1:48284).
Apr 21 02:49:20.562409 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 48284 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:20.563376 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:20.567718 systemd-logind[1574]: New session 23 of user core.
Apr 21 02:49:20.576350 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 02:49:20.675000 sshd[4391]: Connection closed by 10.0.0.1 port 48284
Apr 21 02:49:20.675462 sshd-session[4388]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:20.681700 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:48284.service: Deactivated successfully.
Apr 21 02:49:20.684884 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 02:49:20.686470 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Apr 21 02:49:20.687873 systemd-logind[1574]: Removed session 23.
Apr 21 02:49:22.418724 kubelet[2799]: E0421 02:49:22.418371 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:23.419836 kubelet[2799]: E0421 02:49:23.419557 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:25.691654 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:51654.service - OpenSSH per-connection server daemon (10.0.0.1:51654).
Apr 21 02:49:25.753105 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 51654 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:25.755269 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:25.769271 systemd-logind[1574]: New session 24 of user core.
Apr 21 02:49:25.779759 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 02:49:25.887899 sshd[4407]: Connection closed by 10.0.0.1 port 51654
Apr 21 02:49:25.889067 sshd-session[4404]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:25.897341 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:51654.service: Deactivated successfully.
Apr 21 02:49:25.898801 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 02:49:25.899691 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit.
Apr 21 02:49:25.901955 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658).
Apr 21 02:49:25.903956 systemd-logind[1574]: Removed session 24.
Apr 21 02:49:25.950377 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:25.951362 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:25.955700 systemd-logind[1574]: New session 25 of user core.
Apr 21 02:49:25.970195 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 21 02:49:27.422822 kubelet[2799]: E0421 02:49:27.422481 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:27.523097 containerd[1601]: time="2026-04-21T02:49:27.522509319Z" level=info msg="StopContainer for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" with timeout 30 (s)"
Apr 21 02:49:27.532778 containerd[1601]: time="2026-04-21T02:49:27.532718909Z" level=info msg="Stop container \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" with signal terminated"
Apr 21 02:49:27.568642 systemd[1]: cri-containerd-327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747.scope: Deactivated successfully.
Apr 21 02:49:27.569533 systemd[1]: cri-containerd-327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747.scope: Consumed 1.357s CPU time, 29.2M memory peak, 618K read from disk, 4K written to disk.
Apr 21 02:49:27.577104 containerd[1601]: time="2026-04-21T02:49:27.576628534Z" level=info msg="received container exit event container_id:\"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" id:\"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" pid:3362 exited_at:{seconds:1776739767 nanos:569816196}"
Apr 21 02:49:27.597056 containerd[1601]: time="2026-04-21T02:49:27.596979860Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 02:49:27.599932 containerd[1601]: time="2026-04-21T02:49:27.599834034Z" level=info msg="StopContainer for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" with timeout 2 (s)"
Apr 21 02:49:27.604702 containerd[1601]: time="2026-04-21T02:49:27.604235261Z" level=info msg="Stop container \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" with signal terminated"
Apr 21 02:49:27.653789 systemd-networkd[1498]: lxc_health: Link DOWN
Apr 21 02:49:27.653796 systemd-networkd[1498]: lxc_health: Lost carrier
Apr 21 02:49:27.759419 systemd[1]: cri-containerd-427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69.scope: Deactivated successfully.
Apr 21 02:49:27.769421 systemd[1]: cri-containerd-427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69.scope: Consumed 11.864s CPU time, 128M memory peak, 380K read from disk, 13.3M written to disk.
Apr 21 02:49:27.847975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747-rootfs.mount: Deactivated successfully.
Apr 21 02:49:27.876899 containerd[1601]: time="2026-04-21T02:49:27.876249003Z" level=info msg="received container exit event container_id:\"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" id:\"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" pid:3473 exited_at:{seconds:1776739767 nanos:870776144}"
Apr 21 02:49:27.895720 containerd[1601]: time="2026-04-21T02:49:27.895516870Z" level=info msg="StopContainer for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" returns successfully"
Apr 21 02:49:27.898775 containerd[1601]: time="2026-04-21T02:49:27.898592484Z" level=info msg="StopPodSandbox for \"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\""
Apr 21 02:49:27.899917 containerd[1601]: time="2026-04-21T02:49:27.899894769Z" level=info msg="Container to stop \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:49:27.915796 systemd[1]: cri-containerd-7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4.scope: Deactivated successfully.
Apr 21 02:49:27.928340 containerd[1601]: time="2026-04-21T02:49:27.928247915Z" level=info msg="received sandbox exit event container_id:\"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" id:\"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" exit_status:137 exited_at:{seconds:1776739767 nanos:920990233}" monitor_name=podsandbox
Apr 21 02:49:27.964335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69-rootfs.mount: Deactivated successfully.
Apr 21 02:49:27.977626 containerd[1601]: time="2026-04-21T02:49:27.977454829Z" level=info msg="StopContainer for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" returns successfully"
Apr 21 02:49:27.981773 containerd[1601]: time="2026-04-21T02:49:27.981710780Z" level=info msg="StopPodSandbox for \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\""
Apr 21 02:49:27.981947 containerd[1601]: time="2026-04-21T02:49:27.981840042Z" level=info msg="Container to stop \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:49:27.981947 containerd[1601]: time="2026-04-21T02:49:27.981894021Z" level=info msg="Container to stop \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:49:27.981947 containerd[1601]: time="2026-04-21T02:49:27.981900684Z" level=info msg="Container to stop \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:49:27.981947 containerd[1601]: time="2026-04-21T02:49:27.981906745Z" level=info msg="Container to stop \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:49:27.981947 containerd[1601]: time="2026-04-21T02:49:27.981913383Z" level=info msg="Container to stop \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 02:49:28.008519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4-rootfs.mount: Deactivated successfully.
Apr 21 02:49:28.012229 containerd[1601]: time="2026-04-21T02:49:28.012003106Z" level=info msg="shim disconnected" id=7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4 namespace=k8s.io
Apr 21 02:49:28.012229 containerd[1601]: time="2026-04-21T02:49:28.012029618Z" level=warning msg="cleaning up after shim disconnected" id=7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4 namespace=k8s.io
Apr 21 02:49:28.012869 systemd[1]: cri-containerd-bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0.scope: Deactivated successfully.
Apr 21 02:49:28.026899 containerd[1601]: time="2026-04-21T02:49:28.012071070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 02:49:28.027049 containerd[1601]: time="2026-04-21T02:49:28.015589768Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe"
Apr 21 02:49:28.030327 containerd[1601]: time="2026-04-21T02:49:28.016065486Z" level=info msg="received sandbox exit event container_id:\"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" id:\"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" exit_status:137 exited_at:{seconds:1776739768 nanos:15338403}" monitor_name=podsandbox
Apr 21 02:49:28.060805 containerd[1601]: time="2026-04-21T02:49:28.060502459Z" level=info msg="TearDown network for sandbox \"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" successfully"
Apr 21 02:49:28.060805 containerd[1601]: time="2026-04-21T02:49:28.060561553Z" level=info msg="StopPodSandbox for \"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" returns successfully"
Apr 21 02:49:28.060868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4-shm.mount: Deactivated successfully.
Apr 21 02:49:28.067108 containerd[1601]: time="2026-04-21T02:49:28.066941756Z" level=info msg="received sandbox container exit event sandbox_id:\"7d6db2effae05fa0b0c0855af785ffa8c0d2c1b992217bdbc1e679abdf3765a4\" exit_status:137 exited_at:{seconds:1776739767 nanos:920990233}" monitor_name=criService
Apr 21 02:49:28.069504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0-rootfs.mount: Deactivated successfully.
Apr 21 02:49:28.081885 containerd[1601]: time="2026-04-21T02:49:28.081643482Z" level=info msg="shim disconnected" id=bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0 namespace=k8s.io
Apr 21 02:49:28.081885 containerd[1601]: time="2026-04-21T02:49:28.081729302Z" level=warning msg="cleaning up after shim disconnected" id=bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0 namespace=k8s.io
Apr 21 02:49:28.081885 containerd[1601]: time="2026-04-21T02:49:28.081735486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 02:49:28.129223 containerd[1601]: time="2026-04-21T02:49:28.128489073Z" level=info msg="received sandbox container exit event sandbox_id:\"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" exit_status:137 exited_at:{seconds:1776739768 nanos:15338403}" monitor_name=criService
Apr 21 02:49:28.129223 containerd[1601]: time="2026-04-21T02:49:28.128819533Z" level=info msg="TearDown network for sandbox \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" successfully"
Apr 21 02:49:28.129223 containerd[1601]: time="2026-04-21T02:49:28.128839868Z" level=info msg="StopPodSandbox for \"bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0\" returns successfully"
Apr 21 02:49:28.250822 kubelet[2799]: I0421 02:49:28.250445 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1e48e55f-293b-4ca6-84cb-eabcc248637d-kube-api-access-fzm7f\" (UniqueName: \"kubernetes.io/projected/1e48e55f-293b-4ca6-84cb-eabcc248637d-kube-api-access-fzm7f\") pod \"1e48e55f-293b-4ca6-84cb-eabcc248637d\" (UID: \"1e48e55f-293b-4ca6-84cb-eabcc248637d\") "
Apr 21 02:49:28.250822 kubelet[2799]: I0421 02:49:28.250761 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-clustermesh-secrets\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.252191 kubelet[2799]: I0421 02:49:28.251028 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hostproc\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hostproc\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.252191 kubelet[2799]: I0421 02:49:28.251106 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cni-path\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cni-path\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.252191 kubelet[2799]: I0421 02:49:28.251209 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-etc-cni-netd\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.252191 kubelet[2799]: I0421 02:49:28.251254 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1e48e55f-293b-4ca6-84cb-eabcc248637d-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e48e55f-293b-4ca6-84cb-eabcc248637d-cilium-config-path\") pod \"1e48e55f-293b-4ca6-84cb-eabcc248637d\" (UID: \"1e48e55f-293b-4ca6-84cb-eabcc248637d\") "
Apr 21 02:49:28.252191 kubelet[2799]: I0421 02:49:28.251271 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-bpf-maps\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.252329 kubelet[2799]: I0421 02:49:28.251469 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-bpf-maps" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:49:28.252329 kubelet[2799]: I0421 02:49:28.251586 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hostproc" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:49:28.252329 kubelet[2799]: I0421 02:49:28.251596 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cni-path" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:49:28.252329 kubelet[2799]: I0421 02:49:28.251604 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-etc-cni-netd" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:49:28.253342 kubelet[2799]: I0421 02:49:28.253242 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e48e55f-293b-4ca6-84cb-eabcc248637d-cilium-config-path" pod "1e48e55f-293b-4ca6-84cb-eabcc248637d" (UID: "1e48e55f-293b-4ca6-84cb-eabcc248637d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 02:49:28.259394 kubelet[2799]: I0421 02:49:28.259103 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-clustermesh-secrets" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 02:49:28.260411 kubelet[2799]: I0421 02:49:28.259298 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e48e55f-293b-4ca6-84cb-eabcc248637d-kube-api-access-fzm7f" pod "1e48e55f-293b-4ca6-84cb-eabcc248637d" (UID: "1e48e55f-293b-4ca6-84cb-eabcc248637d"). InnerVolumeSpecName "kube-api-access-fzm7f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 02:49:28.352797 kubelet[2799]: I0421 02:49:28.352216 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-config-path\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.352797 kubelet[2799]: I0421 02:49:28.352342 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-run\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-run\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.352797 kubelet[2799]: I0421 02:49:28.352356 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-net\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.352797 kubelet[2799]: I0421 02:49:28.352377 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-kernel\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.352797 kubelet[2799]: I0421 02:49:28.352397 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hubble-tls\" (UniqueName: \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hubble-tls\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.354008 kubelet[2799]: I0421 02:49:28.352409 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-xtables-lock\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.354008 kubelet[2799]: I0421 02:49:28.352425 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-kube-api-access-2dnn9\" (UniqueName: \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-kube-api-access-2dnn9\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") "
Apr 21 02:49:28.354488 kubelet[2799]: I0421 02:49:28.354232 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-kernel" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 02:49:28.354488 kubelet[2799]: I0421 02:49:28.354267 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-run" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 02:49:28.354488 kubelet[2799]: I0421 02:49:28.354278 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-net" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 02:49:28.354488 kubelet[2799]: I0421 02:49:28.354447 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-xtables-lock" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 02:49:28.354488 kubelet[2799]: I0421 02:49:28.354483 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-cgroup\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " Apr 21 02:49:28.354606 kubelet[2799]: I0421 02:49:28.354499 2799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-lib-modules\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-lib-modules\") pod \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\" (UID: \"25d6db75-1c26-49ab-a7c7-ec7a8230d88a\") " Apr 21 02:49:28.354624 kubelet[2799]: I0421 02:49:28.354605 2799 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354624 kubelet[2799]: I0421 02:49:28.354613 
2799 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354624 kubelet[2799]: I0421 02:49:28.354619 2799 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354625 2799 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzm7f\" (UniqueName: \"kubernetes.io/projected/1e48e55f-293b-4ca6-84cb-eabcc248637d-kube-api-access-fzm7f\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354631 2799 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354637 2799 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354642 2799 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354647 2799 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354653 2799 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1e48e55f-293b-4ca6-84cb-eabcc248637d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354659 2799 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.354740 kubelet[2799]: I0421 02:49:28.354663 2799 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.355017 kubelet[2799]: I0421 02:49:28.354676 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-lib-modules" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 02:49:28.355017 kubelet[2799]: I0421 02:49:28.354686 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-cgroup" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 02:49:28.356202 kubelet[2799]: I0421 02:49:28.355979 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-config-path" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 02:49:28.358026 kubelet[2799]: I0421 02:49:28.357909 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-kube-api-access-2dnn9" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "kube-api-access-2dnn9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 02:49:28.358905 kubelet[2799]: I0421 02:49:28.358868 2799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hubble-tls" pod "25d6db75-1c26-49ab-a7c7-ec7a8230d88a" (UID: "25d6db75-1c26-49ab-a7c7-ec7a8230d88a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 02:49:28.428371 systemd[1]: Removed slice kubepods-besteffort-pod1e48e55f_293b_4ca6_84cb_eabcc248637d.slice - libcontainer container kubepods-besteffort-pod1e48e55f_293b_4ca6_84cb_eabcc248637d.slice. Apr 21 02:49:28.428515 systemd[1]: kubepods-besteffort-pod1e48e55f_293b_4ca6_84cb_eabcc248637d.slice: Consumed 1.397s CPU time, 29.5M memory peak, 618K read from disk, 4K written to disk. Apr 21 02:49:28.429746 systemd[1]: Removed slice kubepods-burstable-pod25d6db75_1c26_49ab_a7c7_ec7a8230d88a.slice - libcontainer container kubepods-burstable-pod25d6db75_1c26_49ab_a7c7_ec7a8230d88a.slice. Apr 21 02:49:28.429814 systemd[1]: kubepods-burstable-pod25d6db75_1c26_49ab_a7c7_ec7a8230d88a.slice: Consumed 12.067s CPU time, 128.3M memory peak, 396K read from disk, 15.4M written to disk. 
Apr 21 02:49:28.455341 kubelet[2799]: I0421 02:49:28.455284 2799 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.455673 kubelet[2799]: I0421 02:49:28.455422 2799 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dnn9\" (UniqueName: \"kubernetes.io/projected/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-kube-api-access-2dnn9\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.455673 kubelet[2799]: I0421 02:49:28.455447 2799 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.455673 kubelet[2799]: I0421 02:49:28.455456 2799 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.455673 kubelet[2799]: I0421 02:49:28.455463 2799 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25d6db75-1c26-49ab-a7c7-ec7a8230d88a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 02:49:28.693224 kubelet[2799]: E0421 02:49:28.692340 2799 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 02:49:28.845548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbfa269497be7f9f257a7901c2c0499b11fbbda45d72bda3c06de68d88a0ade0-shm.mount: Deactivated successfully. 
Apr 21 02:49:28.845725 systemd[1]: var-lib-kubelet-pods-1e48e55f\x2d293b\x2d4ca6\x2d84cb\x2deabcc248637d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzm7f.mount: Deactivated successfully. Apr 21 02:49:28.845771 systemd[1]: var-lib-kubelet-pods-25d6db75\x2d1c26\x2d49ab\x2da7c7\x2dec7a8230d88a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dnn9.mount: Deactivated successfully. Apr 21 02:49:28.845817 systemd[1]: var-lib-kubelet-pods-25d6db75\x2d1c26\x2d49ab\x2da7c7\x2dec7a8230d88a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 02:49:28.845899 systemd[1]: var-lib-kubelet-pods-25d6db75\x2d1c26\x2d49ab\x2da7c7\x2dec7a8230d88a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 21 02:49:28.910395 kubelet[2799]: I0421 02:49:28.910237 2799 scope.go:122] "RemoveContainer" containerID="327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747" Apr 21 02:49:28.921268 containerd[1601]: time="2026-04-21T02:49:28.917855269Z" level=info msg="RemoveContainer for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\"" Apr 21 02:49:28.949957 containerd[1601]: time="2026-04-21T02:49:28.949850267Z" level=info msg="RemoveContainer for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" returns successfully" Apr 21 02:49:28.951653 kubelet[2799]: I0421 02:49:28.951465 2799 scope.go:122] "RemoveContainer" containerID="327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747" Apr 21 02:49:28.952584 containerd[1601]: time="2026-04-21T02:49:28.952337462Z" level=error msg="ContainerStatus for \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\": not found" Apr 21 02:49:28.954745 kubelet[2799]: E0421 02:49:28.953115 2799 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\": not found" containerID="327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747" Apr 21 02:49:28.954875 kubelet[2799]: I0421 02:49:28.954752 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747"} err="failed to get container status \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\": rpc error: code = NotFound desc = an error occurred when try to find container \"327bbf37e3c7bede5f6e616569381b3549bbe9541fafc4fad15ab7b1f4f10747\": not found" Apr 21 02:49:28.954875 kubelet[2799]: I0421 02:49:28.954842 2799 scope.go:122] "RemoveContainer" containerID="427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69" Apr 21 02:49:28.967321 containerd[1601]: time="2026-04-21T02:49:28.965328666Z" level=info msg="RemoveContainer for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\"" Apr 21 02:49:28.987207 containerd[1601]: time="2026-04-21T02:49:28.986953895Z" level=info msg="RemoveContainer for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" returns successfully" Apr 21 02:49:28.995996 kubelet[2799]: I0421 02:49:28.990097 2799 scope.go:122] "RemoveContainer" containerID="57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846" Apr 21 02:49:29.005854 containerd[1601]: time="2026-04-21T02:49:29.005305345Z" level=info msg="RemoveContainer for \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\"" Apr 21 02:49:29.032516 containerd[1601]: time="2026-04-21T02:49:29.032330224Z" level=info msg="RemoveContainer for \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\" returns successfully" Apr 21 02:49:29.034539 kubelet[2799]: I0421 02:49:29.034433 2799 scope.go:122] "RemoveContainer" 
containerID="9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089" Apr 21 02:49:29.042794 containerd[1601]: time="2026-04-21T02:49:29.042594249Z" level=info msg="RemoveContainer for \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\"" Apr 21 02:49:29.054012 containerd[1601]: time="2026-04-21T02:49:29.053769737Z" level=info msg="RemoveContainer for \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\" returns successfully" Apr 21 02:49:29.056621 kubelet[2799]: I0421 02:49:29.056565 2799 scope.go:122] "RemoveContainer" containerID="ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9" Apr 21 02:49:29.059476 containerd[1601]: time="2026-04-21T02:49:29.059453245Z" level=info msg="RemoveContainer for \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\"" Apr 21 02:49:29.074943 containerd[1601]: time="2026-04-21T02:49:29.074661694Z" level=info msg="RemoveContainer for \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\" returns successfully" Apr 21 02:49:29.076222 kubelet[2799]: I0421 02:49:29.076099 2799 scope.go:122] "RemoveContainer" containerID="8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c" Apr 21 02:49:29.078953 containerd[1601]: time="2026-04-21T02:49:29.078884016Z" level=info msg="RemoveContainer for \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\"" Apr 21 02:49:29.086920 containerd[1601]: time="2026-04-21T02:49:29.086715074Z" level=info msg="RemoveContainer for \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\" returns successfully" Apr 21 02:49:29.089474 kubelet[2799]: I0421 02:49:29.089380 2799 scope.go:122] "RemoveContainer" containerID="427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69" Apr 21 02:49:29.090269 containerd[1601]: time="2026-04-21T02:49:29.090026816Z" level=error msg="ContainerStatus for \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\": not found" Apr 21 02:49:29.090729 kubelet[2799]: E0421 02:49:29.090561 2799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\": not found" containerID="427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69" Apr 21 02:49:29.090800 kubelet[2799]: I0421 02:49:29.090737 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69"} err="failed to get container status \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\": rpc error: code = NotFound desc = an error occurred when try to find container \"427131b1d6556efc25a82534c8b170049ce304ca7416756b3993930685616f69\": not found" Apr 21 02:49:29.090819 kubelet[2799]: I0421 02:49:29.090805 2799 scope.go:122] "RemoveContainer" containerID="57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846" Apr 21 02:49:29.091294 containerd[1601]: time="2026-04-21T02:49:29.091268739Z" level=error msg="ContainerStatus for \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\": not found" Apr 21 02:49:29.091609 kubelet[2799]: E0421 02:49:29.091573 2799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\": not found" containerID="57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846" Apr 21 02:49:29.091640 kubelet[2799]: I0421 02:49:29.091615 2799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846"} err="failed to get container status \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\": rpc error: code = NotFound desc = an error occurred when try to find container \"57c2aaf387d27a118b1f3fd981bca2a3b7397c649d3316aa3aed7cf7e40c0846\": not found" Apr 21 02:49:29.091640 kubelet[2799]: I0421 02:49:29.091631 2799 scope.go:122] "RemoveContainer" containerID="9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089" Apr 21 02:49:29.091927 containerd[1601]: time="2026-04-21T02:49:29.091829675Z" level=error msg="ContainerStatus for \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\": not found" Apr 21 02:49:29.092278 kubelet[2799]: E0421 02:49:29.092201 2799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\": not found" containerID="9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089" Apr 21 02:49:29.092278 kubelet[2799]: I0421 02:49:29.092261 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089"} err="failed to get container status \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a5211943ba1c01524dce71893dbfe77a634bfe40d3ad4cd52e0177cb0b46089\": not found" Apr 21 02:49:29.092278 kubelet[2799]: I0421 02:49:29.092273 2799 scope.go:122] "RemoveContainer" 
containerID="ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9" Apr 21 02:49:29.092727 containerd[1601]: time="2026-04-21T02:49:29.092686075Z" level=error msg="ContainerStatus for \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\": not found" Apr 21 02:49:29.093042 kubelet[2799]: E0421 02:49:29.093010 2799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\": not found" containerID="ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9" Apr 21 02:49:29.093218 kubelet[2799]: I0421 02:49:29.093049 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9"} err="failed to get container status \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffcb6da1f5c7e351b8c038b242b16231639ffce26c9bcc1cb3fc08fc250548b9\": not found" Apr 21 02:49:29.093218 kubelet[2799]: I0421 02:49:29.093216 2799 scope.go:122] "RemoveContainer" containerID="8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c" Apr 21 02:49:29.093471 containerd[1601]: time="2026-04-21T02:49:29.093431800Z" level=error msg="ContainerStatus for \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\": not found" Apr 21 02:49:29.093668 kubelet[2799]: E0421 02:49:29.093629 2799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\": not found" containerID="8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c" Apr 21 02:49:29.093712 kubelet[2799]: I0421 02:49:29.093670 2799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c"} err="failed to get container status \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a80e5906b79c2584845c411a72afaadf5c300c034f8dec677e5adf86931072c\": not found" Apr 21 02:49:29.430010 sshd[4424]: Connection closed by 10.0.0.1 port 51658 Apr 21 02:49:29.430955 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Apr 21 02:49:29.442258 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:51658.service: Deactivated successfully. Apr 21 02:49:29.443738 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 02:49:29.444004 systemd[1]: session-25.scope: Consumed 1.045s CPU time, 27.6M memory peak. Apr 21 02:49:29.444733 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit. Apr 21 02:49:29.447084 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:51666.service - OpenSSH per-connection server daemon (10.0.0.1:51666). Apr 21 02:49:29.448268 systemd-logind[1574]: Removed session 25. Apr 21 02:49:29.517311 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 51666 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:49:29.519252 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:49:29.526217 systemd-logind[1574]: New session 26 of user core. Apr 21 02:49:29.538340 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 21 02:49:30.064300 sshd[4574]: Connection closed by 10.0.0.1 port 51666 Apr 21 02:49:30.067618 sshd-session[4571]: pam_unix(sshd:session): session closed for user core Apr 21 02:49:30.086915 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:51668.service - OpenSSH per-connection server daemon (10.0.0.1:51668). Apr 21 02:49:30.087800 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:51666.service: Deactivated successfully. Apr 21 02:49:30.089991 systemd[1]: session-26.scope: Deactivated successfully. Apr 21 02:49:30.092279 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit. Apr 21 02:49:30.094456 systemd-logind[1574]: Removed session 26. Apr 21 02:49:30.131000 systemd[1]: Created slice kubepods-burstable-poddf9fe63f_2c79_4a9c_9317_73bf8923c659.slice - libcontainer container kubepods-burstable-poddf9fe63f_2c79_4a9c_9317_73bf8923c659.slice. Apr 21 02:49:30.205675 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 51668 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc Apr 21 02:49:30.208665 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 02:49:30.223042 systemd-logind[1574]: New session 27 of user core. Apr 21 02:49:30.238552 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 21 02:49:30.253590 sshd[4589]: Connection closed by 10.0.0.1 port 51668 Apr 21 02:49:30.254058 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Apr 21 02:49:30.267924 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:51668.service: Deactivated successfully. Apr 21 02:49:30.270111 systemd[1]: session-27.scope: Deactivated successfully. Apr 21 02:49:30.274113 systemd-logind[1574]: Session 27 logged out. Waiting for processes to exit. Apr 21 02:49:30.278748 systemd[1]: Started sshd@27-10.0.0.38:22-10.0.0.1:51682.service - OpenSSH per-connection server daemon (10.0.0.1:51682). Apr 21 02:49:30.279727 systemd-logind[1574]: Removed session 27. 
Apr 21 02:49:30.291775 kubelet[2799]: I0421 02:49:30.291050 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-cilium-cgroup\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.291775 kubelet[2799]: I0421 02:49:30.291210 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-xtables-lock\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.291775 kubelet[2799]: I0421 02:49:30.291554 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df9fe63f-2c79-4a9c-9317-73bf8923c659-cilium-config-path\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.291775 kubelet[2799]: I0421 02:49:30.291686 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df9fe63f-2c79-4a9c-9317-73bf8923c659-hubble-tls\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.293886 kubelet[2799]: I0421 02:49:30.291744 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-cni-path\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.293886 kubelet[2799]: I0421 02:49:30.292032 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df9fe63f-2c79-4a9c-9317-73bf8923c659-cilium-ipsec-secrets\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.293886 kubelet[2799]: I0421 02:49:30.292074 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-etc-cni-netd\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.293886 kubelet[2799]: I0421 02:49:30.292089 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqmd4\" (UniqueName: \"kubernetes.io/projected/df9fe63f-2c79-4a9c-9317-73bf8923c659-kube-api-access-sqmd4\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.293886 kubelet[2799]: I0421 02:49:30.292212 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-lib-modules\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.293886 kubelet[2799]: I0421 02:49:30.292332 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-host-proc-sys-kernel\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.294001 kubelet[2799]: I0421 02:49:30.292377 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-hostproc\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.294001 kubelet[2799]: I0421 02:49:30.292427 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-host-proc-sys-net\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.294001 kubelet[2799]: I0421 02:49:30.292480 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-cilium-run\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.294001 kubelet[2799]: I0421 02:49:30.292534 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df9fe63f-2c79-4a9c-9317-73bf8923c659-bpf-maps\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.294001 kubelet[2799]: I0421 02:49:30.292548 2799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df9fe63f-2c79-4a9c-9317-73bf8923c659-clustermesh-secrets\") pod \"cilium-7r62d\" (UID: \"df9fe63f-2c79-4a9c-9317-73bf8923c659\") " pod="kube-system/cilium-7r62d"
Apr 21 02:49:30.358616 sshd[4596]: Accepted publickey for core from 10.0.0.1 port 51682 ssh2: RSA SHA256:9KUHGlOeyJC7ThH1mkUqpKUL96f7+Wj12eT7o4yHGwc
Apr 21 02:49:30.360595 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 02:49:30.376011 systemd-logind[1574]: New session 28 of user core.
Apr 21 02:49:30.385651 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 21 02:49:30.425725 kubelet[2799]: E0421 02:49:30.425704 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:30.434290 kubelet[2799]: I0421 02:49:30.434221 2799 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1e48e55f-293b-4ca6-84cb-eabcc248637d" path="/var/lib/kubelet/pods/1e48e55f-293b-4ca6-84cb-eabcc248637d/volumes"
Apr 21 02:49:30.434555 kubelet[2799]: I0421 02:49:30.434512 2799 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="25d6db75-1c26-49ab-a7c7-ec7a8230d88a" path="/var/lib/kubelet/pods/25d6db75-1c26-49ab-a7c7-ec7a8230d88a/volumes"
Apr 21 02:49:30.470284 kubelet[2799]: E0421 02:49:30.470046 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:30.488332 containerd[1601]: time="2026-04-21T02:49:30.487455816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7r62d,Uid:df9fe63f-2c79-4a9c-9317-73bf8923c659,Namespace:kube-system,Attempt:0,}"
Apr 21 02:49:30.554839 containerd[1601]: time="2026-04-21T02:49:30.554478592Z" level=info msg="connecting to shim ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5" address="unix:///run/containerd/s/8675a3fdf0814deda9da66fc2c644ee8bf6116e70b34a67022eead63e760cf30" namespace=k8s.io protocol=ttrpc version=3
Apr 21 02:49:30.620480 systemd[1]: Started cri-containerd-ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5.scope - libcontainer container ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5.
Apr 21 02:49:30.738092 containerd[1601]: time="2026-04-21T02:49:30.737678364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7r62d,Uid:df9fe63f-2c79-4a9c-9317-73bf8923c659,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\""
Apr 21 02:49:30.740204 kubelet[2799]: E0421 02:49:30.740182 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:30.756103 containerd[1601]: time="2026-04-21T02:49:30.755816667Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 02:49:30.790049 containerd[1601]: time="2026-04-21T02:49:30.789473725Z" level=info msg="Container ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:49:30.812983 containerd[1601]: time="2026-04-21T02:49:30.812710750Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753\""
Apr 21 02:49:30.815372 containerd[1601]: time="2026-04-21T02:49:30.814994950Z" level=info msg="StartContainer for \"ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753\""
Apr 21 02:49:30.816830 containerd[1601]: time="2026-04-21T02:49:30.816660456Z" level=info msg="connecting to shim ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753" address="unix:///run/containerd/s/8675a3fdf0814deda9da66fc2c644ee8bf6116e70b34a67022eead63e760cf30" protocol=ttrpc version=3
Apr 21 02:49:30.913476 systemd[1]: Started cri-containerd-ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753.scope - libcontainer container ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753.
Apr 21 02:49:30.950414 kubelet[2799]: I0421 02:49:30.947230 2799 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T02:49:30Z","lastTransitionTime":"2026-04-21T02:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 21 02:49:31.002607 containerd[1601]: time="2026-04-21T02:49:31.002577612Z" level=info msg="StartContainer for \"ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753\" returns successfully"
Apr 21 02:49:31.021064 systemd[1]: cri-containerd-ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753.scope: Deactivated successfully.
Apr 21 02:49:31.024682 containerd[1601]: time="2026-04-21T02:49:31.024654205Z" level=info msg="received container exit event container_id:\"ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753\" id:\"ff2a3deb9a1b9d57ca71f69f714cdea189791e8117eb196d9be44891d1e04753\" pid:4673 exited_at:{seconds:1776739771 nanos:23039734}"
Apr 21 02:49:31.979557 kubelet[2799]: E0421 02:49:31.979356 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:31.992474 containerd[1601]: time="2026-04-21T02:49:31.991717041Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 02:49:32.016985 containerd[1601]: time="2026-04-21T02:49:32.016869081Z" level=info msg="Container 437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:49:32.029231 containerd[1601]: time="2026-04-21T02:49:32.029010582Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4\""
Apr 21 02:49:32.030112 containerd[1601]: time="2026-04-21T02:49:32.030018693Z" level=info msg="StartContainer for \"437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4\""
Apr 21 02:49:32.036359 containerd[1601]: time="2026-04-21T02:49:32.035973509Z" level=info msg="connecting to shim 437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4" address="unix:///run/containerd/s/8675a3fdf0814deda9da66fc2c644ee8bf6116e70b34a67022eead63e760cf30" protocol=ttrpc version=3
Apr 21 02:49:32.078718 systemd[1]: Started cri-containerd-437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4.scope - libcontainer container 437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4.
Apr 21 02:49:32.223777 containerd[1601]: time="2026-04-21T02:49:32.223575865Z" level=info msg="StartContainer for \"437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4\" returns successfully"
Apr 21 02:49:32.235275 systemd[1]: cri-containerd-437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4.scope: Deactivated successfully.
Apr 21 02:49:32.241897 containerd[1601]: time="2026-04-21T02:49:32.241759812Z" level=info msg="received container exit event container_id:\"437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4\" id:\"437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4\" pid:4719 exited_at:{seconds:1776739772 nanos:235827828}"
Apr 21 02:49:32.360440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-437d432141cfcd0124d5e6ce54ec1054ad9ecf4cb7783601423ec6b2b34ebcd4-rootfs.mount: Deactivated successfully.
Apr 21 02:49:32.990197 kubelet[2799]: E0421 02:49:32.989923 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:33.084569 containerd[1601]: time="2026-04-21T02:49:33.082981271Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 02:49:33.340833 containerd[1601]: time="2026-04-21T02:49:33.339887499Z" level=info msg="Container 30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:49:33.451382 containerd[1601]: time="2026-04-21T02:49:33.450912511Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e\""
Apr 21 02:49:33.461208 containerd[1601]: time="2026-04-21T02:49:33.460301584Z" level=info msg="StartContainer for \"30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e\""
Apr 21 02:49:33.473247 containerd[1601]: time="2026-04-21T02:49:33.472773506Z" level=info msg="connecting to shim 30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e" address="unix:///run/containerd/s/8675a3fdf0814deda9da66fc2c644ee8bf6116e70b34a67022eead63e760cf30" protocol=ttrpc version=3
Apr 21 02:49:33.562847 systemd[1]: Started cri-containerd-30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e.scope - libcontainer container 30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e.
Apr 21 02:49:33.740075 kubelet[2799]: E0421 02:49:33.739392 2799 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 02:49:33.755344 systemd[1]: cri-containerd-30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e.scope: Deactivated successfully.
Apr 21 02:49:33.758937 containerd[1601]: time="2026-04-21T02:49:33.758846186Z" level=info msg="StartContainer for \"30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e\" returns successfully"
Apr 21 02:49:33.764373 containerd[1601]: time="2026-04-21T02:49:33.763750848Z" level=info msg="received container exit event container_id:\"30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e\" id:\"30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e\" pid:4763 exited_at:{seconds:1776739773 nanos:759375036}"
Apr 21 02:49:33.968392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e9191316ae40be9ebea7fbc81431a5770cfe3c5ce7f6672e5f89c8d912d04e-rootfs.mount: Deactivated successfully.
Apr 21 02:49:34.022435 kubelet[2799]: E0421 02:49:34.021010 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:34.027379 containerd[1601]: time="2026-04-21T02:49:34.026276726Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 02:49:34.080248 containerd[1601]: time="2026-04-21T02:49:34.078506442Z" level=info msg="Container 85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:49:34.124543 containerd[1601]: time="2026-04-21T02:49:34.124389822Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6\""
Apr 21 02:49:34.129889 containerd[1601]: time="2026-04-21T02:49:34.129466433Z" level=info msg="StartContainer for \"85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6\""
Apr 21 02:49:34.132594 containerd[1601]: time="2026-04-21T02:49:34.132561011Z" level=info msg="connecting to shim 85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6" address="unix:///run/containerd/s/8675a3fdf0814deda9da66fc2c644ee8bf6116e70b34a67022eead63e760cf30" protocol=ttrpc version=3
Apr 21 02:49:34.197397 systemd[1]: Started cri-containerd-85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6.scope - libcontainer container 85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6.
Apr 21 02:49:34.273609 systemd[1]: cri-containerd-85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6.scope: Deactivated successfully.
Apr 21 02:49:34.277781 containerd[1601]: time="2026-04-21T02:49:34.277710220Z" level=info msg="received container exit event container_id:\"85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6\" id:\"85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6\" pid:4804 exited_at:{seconds:1776739774 nanos:273894156}"
Apr 21 02:49:34.296291 containerd[1601]: time="2026-04-21T02:49:34.295475978Z" level=info msg="StartContainer for \"85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6\" returns successfully"
Apr 21 02:49:34.392344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85ee5bdf51266c81ee447c320a56fea8db4e33dbb594a897f70d899256f89cb6-rootfs.mount: Deactivated successfully.
Apr 21 02:49:35.038610 kubelet[2799]: E0421 02:49:35.038283 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:35.053743 containerd[1601]: time="2026-04-21T02:49:35.053189673Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 02:49:35.086458 containerd[1601]: time="2026-04-21T02:49:35.086398474Z" level=info msg="Container 50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba: CDI devices from CRI Config.CDIDevices: []"
Apr 21 02:49:35.106524 containerd[1601]: time="2026-04-21T02:49:35.106103520Z" level=info msg="CreateContainer within sandbox \"ef277b76e7a6b74b4650791eaf56a4bf48ef93ac88abd9c7a7875bb4e47620f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba\""
Apr 21 02:49:35.113719 containerd[1601]: time="2026-04-21T02:49:35.113314608Z" level=info msg="StartContainer for \"50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba\""
Apr 21 02:49:35.115554 containerd[1601]: time="2026-04-21T02:49:35.115443784Z" level=info msg="connecting to shim 50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba" address="unix:///run/containerd/s/8675a3fdf0814deda9da66fc2c644ee8bf6116e70b34a67022eead63e760cf30" protocol=ttrpc version=3
Apr 21 02:49:35.149579 systemd[1]: Started cri-containerd-50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba.scope - libcontainer container 50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba.
Apr 21 02:49:35.354599 containerd[1601]: time="2026-04-21T02:49:35.353952746Z" level=info msg="StartContainer for \"50b8752ee8a7052b385457cef849b3653ec98644959a284c703ab4050f9f85ba\" returns successfully"
Apr 21 02:49:36.065290 kubelet[2799]: E0421 02:49:36.063620 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:36.152632 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Apr 21 02:49:36.157093 kubelet[2799]: I0421 02:49:36.156987 2799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-7r62d" podStartSLOduration=6.156933138 podStartE2EDuration="6.156933138s" podCreationTimestamp="2026-04-21 02:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 02:49:36.155886645 +0000 UTC m=+87.940424294" watchObservedRunningTime="2026-04-21 02:49:36.156933138 +0000 UTC m=+87.941470796"
Apr 21 02:49:37.072858 kubelet[2799]: E0421 02:49:37.072704 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:38.076628 kubelet[2799]: E0421 02:49:38.076448 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:40.905062 systemd-networkd[1498]: lxc_health: Link UP
Apr 21 02:49:40.906892 systemd-networkd[1498]: lxc_health: Gained carrier
Apr 21 02:49:42.468092 kubelet[2799]: E0421 02:49:42.467578 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:42.638414 systemd-networkd[1498]: lxc_health: Gained IPv6LL
Apr 21 02:49:43.239991 kubelet[2799]: E0421 02:49:43.239793 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:44.245053 kubelet[2799]: E0421 02:49:44.244950 2799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 02:49:46.988496 sshd[4600]: Connection closed by 10.0.0.1 port 51682
Apr 21 02:49:46.993717 sshd-session[4596]: pam_unix(sshd:session): session closed for user core
Apr 21 02:49:47.002359 systemd-logind[1574]: Session 28 logged out. Waiting for processes to exit.
Apr 21 02:49:47.002734 systemd[1]: sshd@27-10.0.0.38:22-10.0.0.1:51682.service: Deactivated successfully.
Apr 21 02:49:47.004822 systemd[1]: session-28.scope: Deactivated successfully.
Apr 21 02:49:47.006620 systemd-logind[1574]: Removed session 28.