May 27 03:17:19.986840 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025 May 27 03:17:19.986877 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:17:19.986892 kernel: BIOS-provided physical RAM map: May 27 03:17:19.986901 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 27 03:17:19.986910 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 27 03:17:19.986919 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 27 03:17:19.986929 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 27 03:17:19.986939 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 27 03:17:19.986967 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable May 27 03:17:19.986977 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 27 03:17:19.986999 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable May 27 03:17:19.987018 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 27 03:17:19.987028 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 27 03:17:19.987037 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 27 03:17:19.987052 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 27 03:17:19.987062 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 27 03:17:19.987075 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable May 27 03:17:19.987085 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved May 27 03:17:19.987094 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS May 27 03:17:19.987104 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable May 27 03:17:19.987114 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 27 03:17:19.987123 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 27 03:17:19.987133 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 27 03:17:19.987142 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 03:17:19.987151 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 27 03:17:19.987164 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 03:17:19.987174 kernel: NX (Execute Disable) protection: active May 27 03:17:19.987184 kernel: APIC: Static calls initialized May 27 03:17:19.987194 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable May 27 03:17:19.987204 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable May 27 03:17:19.987214 kernel: extended physical RAM map: May 27 03:17:19.987223 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 27 03:17:19.987233 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 27 03:17:19.987242 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 27 03:17:19.987253 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 27 03:17:19.987263 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 27 03:17:19.987276 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable May 27 03:17:19.987287 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 27 03:17:19.987297 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable May 27 03:17:19.987307 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable May 27 03:17:19.987322 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable May 27 03:17:19.987332 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable May 27 03:17:19.987346 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable May 27 03:17:19.987357 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 27 03:17:19.987367 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 27 03:17:19.987378 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 27 03:17:19.987388 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 27 03:17:19.987399 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 27 03:17:19.987410 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable May 27 03:17:19.987421 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved May 27 03:17:19.987431 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS May 27 03:17:19.987445 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable May 27 03:17:19.987456 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 27 03:17:19.987469 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 27 03:17:19.987481 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 27 03:17:19.987519 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 03:17:19.987530 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 27 03:17:19.987540 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 03:17:19.987554 kernel: efi: EFI v2.7 by EDK II May 27 03:17:19.987565 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 May 27 03:17:19.987575 kernel: random: crng init done May 27 03:17:19.987588 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map May 27 03:17:19.987599 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved May 27 03:17:19.987616 kernel: secureboot: Secure boot disabled May 27 03:17:19.987626 kernel: SMBIOS 2.8 present. 
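The firmware memory map above (the BIOS-e820 entries and the extended setup_data map) is what bounds how much RAM the kernel can use. A minimal sketch of adding up the usable ranges, assuming the console output above has been saved to a hypothetical `boot.log` file; the total should come out at roughly 2,565,800 KiB, consistent with the "Memory: .../2565800K" line further down in this log.

```python
import re

LOG = "boot.log"  # hypothetical: a saved copy of the console output above

# Matches entries such as:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
E820 = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (usable|reserved|ACPI data|ACPI NVS)"
)

usable = 0
with open(LOG) as f:
    for start, end, kind in E820.findall(f.read()):
        if kind == "usable":
            usable += int(end, 16) - int(start, 16) + 1  # e820 ranges are inclusive

print(f"firmware-reported usable RAM: {usable // 1024} KiB ({usable / 2**20:.1f} MiB)")
```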
May 27 03:17:19.987637 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 27 03:17:19.987647 kernel: DMI: Memory slots populated: 1/1 May 27 03:17:19.987658 kernel: Hypervisor detected: KVM May 27 03:17:19.987668 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 03:17:19.987679 kernel: kvm-clock: using sched offset of 5497530746 cycles May 27 03:17:19.987691 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 03:17:19.987702 kernel: tsc: Detected 2794.748 MHz processor May 27 03:17:19.987713 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 03:17:19.987726 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 03:17:19.987736 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 May 27 03:17:19.987747 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 27 03:17:19.987758 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 03:17:19.987769 kernel: Using GB pages for direct mapping May 27 03:17:19.987780 kernel: ACPI: Early table checksum verification disabled May 27 03:17:19.987791 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 27 03:17:19.987802 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 27 03:17:19.987813 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:17:19.987827 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:17:19.987838 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 27 03:17:19.987849 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:17:19.987860 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:17:19.987871 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:17:19.987882 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:17:19.987894 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 27 03:17:19.987905 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 27 03:17:19.987916 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] May 27 03:17:19.987929 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 27 03:17:19.987940 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 27 03:17:19.987960 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 27 03:17:19.987971 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 27 03:17:19.987982 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 27 03:17:19.987993 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 27 03:17:19.988004 kernel: No NUMA configuration found May 27 03:17:19.988015 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] May 27 03:17:19.988026 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] May 27 03:17:19.988040 kernel: Zone ranges: May 27 03:17:19.988051 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 03:17:19.988062 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] May 27 03:17:19.988073 kernel: Normal empty May 27 03:17:19.988083 kernel: Device empty May 27 03:17:19.988095 kernel: Movable zone start for each node May 27 03:17:19.988106 
kernel: Early memory node ranges May 27 03:17:19.988116 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 27 03:17:19.988127 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 27 03:17:19.988141 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 27 03:17:19.988155 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] May 27 03:17:19.988166 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] May 27 03:17:19.988177 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] May 27 03:17:19.988188 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] May 27 03:17:19.988198 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] May 27 03:17:19.988209 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] May 27 03:17:19.988223 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 03:17:19.988234 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 27 03:17:19.988257 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 27 03:17:19.988268 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 03:17:19.988279 kernel: On node 0, zone DMA: 239 pages in unavailable ranges May 27 03:17:19.988291 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges May 27 03:17:19.988305 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 27 03:17:19.988316 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 27 03:17:19.988328 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges May 27 03:17:19.988339 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 03:17:19.988350 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 03:17:19.988365 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 03:17:19.988376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 03:17:19.988388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 03:17:19.988399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 03:17:19.988411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 03:17:19.988422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 03:17:19.988433 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 03:17:19.988444 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 27 03:17:19.988458 kernel: TSC deadline timer available May 27 03:17:19.988470 kernel: CPU topo: Max. logical packages: 1 May 27 03:17:19.988496 kernel: CPU topo: Max. logical dies: 1 May 27 03:17:19.988507 kernel: CPU topo: Max. dies per package: 1 May 27 03:17:19.988518 kernel: CPU topo: Max. threads per core: 1 May 27 03:17:19.988528 kernel: CPU topo: Num. cores per package: 4 May 27 03:17:19.988552 kernel: CPU topo: Num. 
threads per package: 4 May 27 03:17:19.988563 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 27 03:17:19.988574 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 03:17:19.988584 kernel: kvm-guest: KVM setup pv remote TLB flush May 27 03:17:19.988609 kernel: kvm-guest: setup PV sched yield May 27 03:17:19.988622 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 27 03:17:19.988650 kernel: Booting paravirtualized kernel on KVM May 27 03:17:19.988662 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 03:17:19.988673 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 27 03:17:19.988684 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 27 03:17:19.988695 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 27 03:17:19.988707 kernel: pcpu-alloc: [0] 0 1 2 3 May 27 03:17:19.988717 kernel: kvm-guest: PV spinlocks enabled May 27 03:17:19.988733 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 27 03:17:19.988751 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:17:19.988767 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 03:17:19.988778 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 03:17:19.988790 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 03:17:19.988801 kernel: Fallback order for Node 0: 0 May 27 03:17:19.988813 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 May 27 03:17:19.988824 kernel: Policy zone: DMA32 May 27 03:17:19.988839 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 03:17:19.988851 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 27 03:17:19.988862 kernel: ftrace: allocating 40081 entries in 157 pages May 27 03:17:19.988874 kernel: ftrace: allocated 157 pages with 5 groups May 27 03:17:19.988885 kernel: Dynamic Preempt: voluntary May 27 03:17:19.988897 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 03:17:19.988909 kernel: rcu: RCU event tracing is enabled. May 27 03:17:19.988920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 27 03:17:19.988932 kernel: Trampoline variant of Tasks RCU enabled. May 27 03:17:19.988957 kernel: Rude variant of Tasks RCU enabled. May 27 03:17:19.988969 kernel: Tracing variant of Tasks RCU enabled. May 27 03:17:19.988981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 03:17:19.988995 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 27 03:17:19.989007 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:17:19.989019 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:17:19.989030 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
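The logged "Kernel command line:" above carries everything later boot stages rely on (root=LABEL=ROOT, the dm-verity hash for /usr, the serial console); rootflags=rw and mount.usrflags=ro happen to appear twice, which is harmless since the values are identical. A minimal sketch of splitting such a line into bare flags and key=value options; the function name and the "last occurrence wins" rule are this sketch's choices, not a statement about how the kernel resolves duplicates.

```python
def parse_cmdline(cmdline: str):
    """Split a kernel command line into bare flags and key=value options."""
    flags, options = set(), {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            options[key] = value   # later occurrences overwrite earlier ones
        else:
            flags.add(token)
    return flags, options

with open("/proc/cmdline") as f:   # exposes the same string on a running system
    flags, options = parse_cmdline(f.read())

print(options.get("root"), options.get("verity.usrhash"))
```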
May 27 03:17:19.989042 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 27 03:17:19.989053 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 03:17:19.989068 kernel: Console: colour dummy device 80x25 May 27 03:17:19.989080 kernel: printk: legacy console [ttyS0] enabled May 27 03:17:19.989091 kernel: ACPI: Core revision 20240827 May 27 03:17:19.989103 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 27 03:17:19.989114 kernel: APIC: Switch to symmetric I/O mode setup May 27 03:17:19.989126 kernel: x2apic enabled May 27 03:17:19.989138 kernel: APIC: Switched APIC routing to: physical x2apic May 27 03:17:19.989149 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 27 03:17:19.989161 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 27 03:17:19.989175 kernel: kvm-guest: setup PV IPIs May 27 03:17:19.989186 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 27 03:17:19.989198 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 27 03:17:19.989210 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 27 03:17:19.989222 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 27 03:17:19.989233 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 27 03:17:19.989245 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 27 03:17:19.989257 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 03:17:19.989268 kernel: Spectre V2 : Mitigation: Retpolines May 27 03:17:19.989282 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 03:17:19.989294 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 27 03:17:19.989305 kernel: RETBleed: Mitigation: untrained return thunk May 27 03:17:19.989317 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 27 03:17:19.989332 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 27 03:17:19.989344 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 27 03:17:19.989356 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 27 03:17:19.989368 kernel: x86/bugs: return thunk changed May 27 03:17:19.989382 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 27 03:17:19.989394 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 27 03:17:19.989405 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 03:17:19.989417 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 03:17:19.989428 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 03:17:19.989441 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 27 03:17:19.989452 kernel: Freeing SMP alternatives memory: 32K May 27 03:17:19.989463 kernel: pid_max: default: 32768 minimum: 301 May 27 03:17:19.989475 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 03:17:19.989508 kernel: landlock: Up and running. May 27 03:17:19.989520 kernel: SELinux: Initializing. 
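The Spectre V1/V2, RETBleed and Speculative Return Stack Overflow messages above are the kernel's one-time mitigation report for this AMD guest. On a running system the same state is exported under sysfs, which is usually easier to check than scrolling back through dmesg; a small sketch:

```python
import pathlib

# Each file under this directory holds one line, e.g. "Mitigation: Retpolines",
# mirroring the mitigation messages printed at boot.
vulns = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vulns.iterdir()):
    print(f"{entry.name:30s} {entry.read_text().strip()}")
```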
May 27 03:17:19.989531 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 03:17:19.989543 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 03:17:19.989555 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 27 03:17:19.989567 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 27 03:17:19.989578 kernel: ... version: 0 May 27 03:17:19.989589 kernel: ... bit width: 48 May 27 03:17:19.989601 kernel: ... generic registers: 6 May 27 03:17:19.989616 kernel: ... value mask: 0000ffffffffffff May 27 03:17:19.989628 kernel: ... max period: 00007fffffffffff May 27 03:17:19.989640 kernel: ... fixed-purpose events: 0 May 27 03:17:19.989651 kernel: ... event mask: 000000000000003f May 27 03:17:19.989662 kernel: signal: max sigframe size: 1776 May 27 03:17:19.989674 kernel: rcu: Hierarchical SRCU implementation. May 27 03:17:19.989686 kernel: rcu: Max phase no-delay instances is 400. May 27 03:17:19.989700 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 03:17:19.989712 kernel: smp: Bringing up secondary CPUs ... May 27 03:17:19.989726 kernel: smpboot: x86: Booting SMP configuration: May 27 03:17:19.989737 kernel: .... node #0, CPUs: #1 #2 #3 May 27 03:17:19.989748 kernel: smp: Brought up 1 node, 4 CPUs May 27 03:17:19.989759 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 27 03:17:19.989771 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137200K reserved, 0K cma-reserved) May 27 03:17:19.989783 kernel: devtmpfs: initialized May 27 03:17:19.989794 kernel: x86/mm: Memory block size: 128MB May 27 03:17:19.989806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 27 03:17:19.989817 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 27 03:17:19.989831 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) May 27 03:17:19.989842 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 27 03:17:19.989853 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) May 27 03:17:19.989864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 27 03:17:19.989887 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 03:17:19.989899 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 27 03:17:19.989909 kernel: pinctrl core: initialized pinctrl subsystem May 27 03:17:19.989920 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 03:17:19.989941 kernel: audit: initializing netlink subsys (disabled) May 27 03:17:19.989967 kernel: audit: type=2000 audit(1748315837.273:1): state=initialized audit_enabled=0 res=1 May 27 03:17:19.989978 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 03:17:19.989989 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 03:17:19.990000 kernel: cpuidle: using governor menu May 27 03:17:19.990011 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 03:17:19.990022 kernel: dca service started, version 1.12.1 May 27 03:17:19.990033 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] May 27 03:17:19.990049 kernel: PCI: Using 
configuration type 1 for base access May 27 03:17:19.990059 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 27 03:17:19.990072 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 03:17:19.990083 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 27 03:17:19.990094 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 03:17:19.990105 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 03:17:19.990116 kernel: ACPI: Added _OSI(Module Device) May 27 03:17:19.990127 kernel: ACPI: Added _OSI(Processor Device) May 27 03:17:19.990138 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 03:17:19.990150 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 03:17:19.990161 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 03:17:19.990176 kernel: ACPI: Interpreter enabled May 27 03:17:19.990187 kernel: ACPI: PM: (supports S0 S3 S5) May 27 03:17:19.990198 kernel: ACPI: Using IOAPIC for interrupt routing May 27 03:17:19.990210 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 03:17:19.990222 kernel: PCI: Using E820 reservations for host bridge windows May 27 03:17:19.990233 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 27 03:17:19.990245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 03:17:19.990533 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 27 03:17:19.990706 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 27 03:17:19.990866 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 27 03:17:19.990882 kernel: PCI host bridge to bus 0000:00 May 27 03:17:19.991067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 03:17:19.991214 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 03:17:19.991362 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 03:17:19.991523 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 27 03:17:19.991677 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 27 03:17:19.991820 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 27 03:17:19.991983 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 03:17:19.992222 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 27 03:17:19.992412 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 27 03:17:19.992617 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] May 27 03:17:19.992783 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] May 27 03:17:19.992939 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] May 27 03:17:19.993111 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 03:17:19.993293 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 27 03:17:19.993504 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] May 27 03:17:19.993671 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] May 27 03:17:19.993829 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] May 27 03:17:19.994027 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 
03:17:19.994246 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] May 27 03:17:19.994405 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] May 27 03:17:19.994636 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] May 27 03:17:19.994825 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 03:17:19.994997 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] May 27 03:17:19.995157 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] May 27 03:17:19.995322 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] May 27 03:17:19.995479 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] May 27 03:17:19.995681 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 27 03:17:19.995840 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 27 03:17:19.996026 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 27 03:17:19.996186 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] May 27 03:17:19.996348 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] May 27 03:17:19.996564 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 27 03:17:19.996728 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] May 27 03:17:19.996744 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 03:17:19.996756 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 03:17:19.996768 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 03:17:19.996779 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 03:17:19.996791 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 27 03:17:19.996807 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 27 03:17:19.996819 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 27 03:17:19.996830 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 27 03:17:19.996842 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 27 03:17:19.996854 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 27 03:17:19.996865 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 27 03:17:19.996877 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 27 03:17:19.996888 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 27 03:17:19.996900 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 27 03:17:19.996914 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 27 03:17:19.996926 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 27 03:17:19.996937 kernel: iommu: Default domain type: Translated May 27 03:17:19.996959 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 03:17:19.996971 kernel: efivars: Registered efivars operations May 27 03:17:19.996983 kernel: PCI: Using ACPI for IRQ routing May 27 03:17:19.996995 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 03:17:19.997006 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 27 03:17:19.997018 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] May 27 03:17:19.997032 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] May 27 03:17:19.997044 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] May 27 03:17:19.997055 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] May 27 03:17:19.997066 kernel: e820: reserve RAM buffer 
[mem 0x9c8ed000-0x9fffffff] May 27 03:17:19.997078 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] May 27 03:17:19.997089 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] May 27 03:17:19.997249 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 27 03:17:19.997406 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 27 03:17:19.997607 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 03:17:19.997624 kernel: vgaarb: loaded May 27 03:17:19.997637 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 27 03:17:19.997648 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 27 03:17:19.997660 kernel: clocksource: Switched to clocksource kvm-clock May 27 03:17:19.997672 kernel: VFS: Disk quotas dquot_6.6.0 May 27 03:17:19.997684 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 03:17:19.997696 kernel: pnp: PnP ACPI init May 27 03:17:19.997926 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 27 03:17:19.997965 kernel: pnp: PnP ACPI: found 6 devices May 27 03:17:19.997979 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 03:17:19.997991 kernel: NET: Registered PF_INET protocol family May 27 03:17:19.998003 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 27 03:17:19.998015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 27 03:17:19.998027 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 03:17:19.998040 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 03:17:19.998052 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 27 03:17:19.998068 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 27 03:17:19.998080 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 03:17:19.998093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 03:17:19.998105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 03:17:19.998117 kernel: NET: Registered PF_XDP protocol family May 27 03:17:19.998281 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window May 27 03:17:19.998442 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned May 27 03:17:19.998615 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 03:17:19.998765 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 03:17:19.998908 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 03:17:19.999063 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] May 27 03:17:19.999206 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] May 27 03:17:19.999347 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] May 27 03:17:19.999363 kernel: PCI: CLS 0 bytes, default 64 May 27 03:17:19.999376 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 27 03:17:19.999388 kernel: Initialise system trusted keyrings May 27 03:17:19.999399 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 27 03:17:19.999417 kernel: Key type asymmetric registered May 27 03:17:19.999431 kernel: Asymmetric key parser 'x509' registered May 27 03:17:19.999444 kernel: Block layer 
SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 03:17:19.999456 kernel: io scheduler mq-deadline registered May 27 03:17:19.999468 kernel: io scheduler kyber registered May 27 03:17:19.999500 kernel: io scheduler bfq registered May 27 03:17:19.999525 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 03:17:19.999538 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 27 03:17:19.999550 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 27 03:17:19.999562 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 27 03:17:19.999574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 03:17:19.999586 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 03:17:19.999598 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 03:17:19.999610 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 03:17:19.999622 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 03:17:19.999807 kernel: rtc_cmos 00:04: RTC can wake from S4 May 27 03:17:19.999825 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 03:17:19.999998 kernel: rtc_cmos 00:04: registered as rtc0 May 27 03:17:20.000143 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:17:19 UTC (1748315839) May 27 03:17:20.000275 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 27 03:17:20.000289 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 27 03:17:20.000300 kernel: efifb: probing for efifb May 27 03:17:20.000315 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 27 03:17:20.000327 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 27 03:17:20.000337 kernel: efifb: scrolling: redraw May 27 03:17:20.000348 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 27 03:17:20.000359 kernel: Console: switching to colour frame buffer device 160x50 May 27 03:17:20.000370 kernel: fb0: EFI VGA frame buffer device May 27 03:17:20.000381 kernel: pstore: Using crash dump compression: deflate May 27 03:17:20.000392 kernel: pstore: Registered efi_pstore as persistent store backend May 27 03:17:20.000402 kernel: NET: Registered PF_INET6 protocol family May 27 03:17:20.000437 kernel: Segment Routing with IPv6 May 27 03:17:20.000448 kernel: In-situ OAM (IOAM) with IPv6 May 27 03:17:20.000459 kernel: NET: Registered PF_PACKET protocol family May 27 03:17:20.000469 kernel: Key type dns_resolver registered May 27 03:17:20.000480 kernel: IPI shorthand broadcast: enabled May 27 03:17:20.000511 kernel: sched_clock: Marking stable (3912007407, 192540730)->(4163957556, -59409419) May 27 03:17:20.000522 kernel: registered taskstats version 1 May 27 03:17:20.000533 kernel: Loading compiled-in X.509 certificates May 27 03:17:20.000544 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d' May 27 03:17:20.000555 kernel: Demotion targets for Node 0: null May 27 03:17:20.000569 kernel: Key type .fscrypt registered May 27 03:17:20.000580 kernel: Key type fscrypt-provisioning registered May 27 03:17:20.000590 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:17:20.000602 kernel: ima: Allocated hash algorithm: sha1 May 27 03:17:20.000613 kernel: ima: No architecture policies found May 27 03:17:20.000623 kernel: clk: Disabling unused clocks May 27 03:17:20.000634 kernel: Warning: unable to open an initial console. 
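The three "io scheduler ... registered" lines above (mq-deadline, kyber, bfq) only say which schedulers are available; which one a given block device actually uses is visible per device in sysfs, with the active choice shown in brackets. A short sketch:

```python
import pathlib

# Prints e.g. "vda [mq-deadline] kyber bfq none"; the bracketed entry is active.
for sched in sorted(pathlib.Path("/sys/block").glob("*/queue/scheduler")):
    device = sched.parts[3]  # /sys/block/<device>/queue/scheduler
    print(device, sched.read_text().strip())
```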
May 27 03:17:20.000645 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 03:17:20.000659 kernel: Write protecting the kernel read-only data: 24576k May 27 03:17:20.000670 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 03:17:20.000680 kernel: Run /init as init process May 27 03:17:20.000691 kernel: with arguments: May 27 03:17:20.000701 kernel: /init May 27 03:17:20.000712 kernel: with environment: May 27 03:17:20.000723 kernel: HOME=/ May 27 03:17:20.000734 kernel: TERM=linux May 27 03:17:20.000745 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:17:20.000757 systemd[1]: Successfully made /usr/ read-only. May 27 03:17:20.000775 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:17:20.000788 systemd[1]: Detected virtualization kvm. May 27 03:17:20.000799 systemd[1]: Detected architecture x86-64. May 27 03:17:20.000810 systemd[1]: Running in initrd. May 27 03:17:20.000821 systemd[1]: No hostname configured, using default hostname. May 27 03:17:20.000832 systemd[1]: Hostname set to . May 27 03:17:20.000845 systemd[1]: Initializing machine ID from VM UUID. May 27 03:17:20.000856 systemd[1]: Queued start job for default target initrd.target. May 27 03:17:20.000867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:17:20.000878 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:17:20.000893 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:17:20.000905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:17:20.000917 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:17:20.000929 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:17:20.000945 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:17:20.000966 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:17:20.000978 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:17:20.000992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:17:20.001003 systemd[1]: Reached target paths.target - Path Units. May 27 03:17:20.001015 systemd[1]: Reached target slices.target - Slice Units. May 27 03:17:20.001027 systemd[1]: Reached target swap.target - Swaps. May 27 03:17:20.001038 systemd[1]: Reached target timers.target - Timer Units. May 27 03:17:20.001053 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:17:20.001064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:17:20.001076 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:17:20.001088 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 03:17:20.001100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
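The device units systemd says it is expecting above, such as dev-disk-by\x2dlabel-ROOT.device, are just escaped block device paths (see systemd-escape --path for the real tool). A simplified sketch of the escaping rule, covering only the ASCII cases that appear in this log:

```python
# Simplified path escaping: strip surrounding "/", turn remaining "/" into "-",
# keep ASCII alphanumerics plus ":", "_" and ".", and hex-escape everything
# else (so "-" becomes "\x2d"). Real systemd also escapes a leading "." and
# handles non-ASCII bytes; this sketch ignores those corner cases.
def escape_path(path: str) -> str:
    path = path.strip("/") or "/"
    out = []
    for ch in path:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(escape_path("/dev/disk/by-label/ROOT") + ".device")  # dev-disk-by\x2dlabel-ROOT.device
print(escape_path("/dev/mapper/usr") + ".device")          # dev-mapper-usr.device
```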
May 27 03:17:20.001112 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:17:20.001123 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:17:20.001135 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:17:20.001149 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:17:20.001161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:17:20.001173 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:17:20.001186 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:17:20.001198 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:17:20.001210 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:17:20.001222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:17:20.001233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:17:20.001248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:17:20.001261 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:17:20.001273 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:17:20.001285 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:17:20.001326 systemd-journald[220]: Collecting audit messages is disabled. May 27 03:17:20.001358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:17:20.001370 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:17:20.001383 systemd-journald[220]: Journal started May 27 03:17:20.001409 systemd-journald[220]: Runtime Journal (/run/log/journal/20ea784a7750425a80a68979693e062e) is 6M, max 48.5M, 42.4M free. May 27 03:17:19.987056 systemd-modules-load[222]: Inserted module 'overlay' May 27 03:17:20.006517 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:17:20.007461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:17:20.018026 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:17:20.021106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:17:20.027516 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:17:20.030709 systemd-modules-load[222]: Inserted module 'br_netfilter' May 27 03:17:20.032198 kernel: Bridge firewalling registered May 27 03:17:20.032668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:17:20.036685 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:17:20.039384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:17:20.046256 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:17:20.047102 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
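The "Inserted module 'overlay'" / "Inserted module 'br_netfilter'" and "Bridge firewalling registered" lines above come from systemd-modules-load in the initrd. Whether a given module is still loaded at any later point can be read from /proc/modules; a tiny sketch:

```python
# /proc/modules has one "name size instances dependencies state address" record
# per loaded module.
def module_loaded(name: str) -> bool:
    with open("/proc/modules") as f:
        return any(line.split()[0] == name for line in f)

for mod in ("overlay", "br_netfilter"):
    print(mod, "loaded:", module_loaded(mod))
```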
May 27 03:17:20.052593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:17:20.054880 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 03:17:20.071701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:17:20.075554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:17:20.093912 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:17:20.145397 systemd-resolved[265]: Positive Trust Anchors: May 27 03:17:20.145419 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:17:20.145517 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:17:20.148867 systemd-resolved[265]: Defaulting to hostname 'linux'. May 27 03:17:20.151158 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:17:20.157419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:17:20.241526 kernel: SCSI subsystem initialized May 27 03:17:20.270549 kernel: Loading iSCSI transport class v2.0-870. May 27 03:17:20.282518 kernel: iscsi: registered transport (tcp) May 27 03:17:20.305677 kernel: iscsi: registered transport (qla4xxx) May 27 03:17:20.305765 kernel: QLogic iSCSI HBA Driver May 27 03:17:20.330270 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:17:20.356875 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:17:20.357866 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:17:20.427260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:17:20.430650 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:17:20.505542 kernel: raid6: avx2x4 gen() 28839 MB/s May 27 03:17:20.522551 kernel: raid6: avx2x2 gen() 28472 MB/s May 27 03:17:20.539695 kernel: raid6: avx2x1 gen() 24518 MB/s May 27 03:17:20.539786 kernel: raid6: using algorithm avx2x4 gen() 28839 MB/s May 27 03:17:20.557639 kernel: raid6: .... xor() 7119 MB/s, rmw enabled May 27 03:17:20.557733 kernel: raid6: using avx2x2 recovery algorithm May 27 03:17:20.580545 kernel: xor: automatically using best checksumming function avx May 27 03:17:20.758535 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:17:20.767407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:17:20.770555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 27 03:17:20.828629 systemd-udevd[474]: Using default interface naming scheme 'v255'. May 27 03:17:20.834707 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:17:20.836463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:17:20.866526 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation May 27 03:17:20.901449 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:17:20.956236 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:17:21.047906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:17:21.055011 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 03:17:21.128911 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 03:17:21.129964 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 03:17:21.142880 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:17:21.142949 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:17:21.142962 kernel: GPT:9289727 != 19775487 May 27 03:17:21.144226 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:17:21.144254 kernel: GPT:9289727 != 19775487 May 27 03:17:21.145939 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:17:21.145962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:17:21.146541 kernel: libata version 3.00 loaded. May 27 03:17:21.160518 kernel: ahci 0000:00:1f.2: version 3.0 May 27 03:17:21.177000 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 03:17:21.177022 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 03:17:21.177037 kernel: AES CTR mode by8 optimization enabled May 27 03:17:21.181861 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 03:17:21.182110 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 03:17:21.182276 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 03:17:21.181127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:17:21.181362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:17:21.192171 kernel: scsi host0: ahci May 27 03:17:21.191082 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:17:21.194862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:17:21.201365 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
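The GPT warnings above ("Alternate GPT header not at the end of the disk", 9289727 != 19775487) usually just mean a smaller disk image was written onto a larger virtual disk: the backup header still sits where the original image ended. The logged numbers work out as below; the headers are rewritten shortly afterwards (see the "Primary Header is updated" lines from disk-uuid further down).

```python
SECTOR = 512
alt_header_lba = 9_289_727    # where the backup GPT header currently is ("GPT:9289727 ...")
last_lba       = 19_775_487   # last sector of the 19775488-sector virtio disk

print(f"original image size : ~{(alt_header_lba + 1) * SECTOR / 1e9:.2f} GB")
print(f"actual disk size    : ~{(last_lba + 1) * SECTOR / 1e9:.2f} GB  (logged as 10.1 GB)")
```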
May 27 03:17:21.215514 kernel: scsi host1: ahci May 27 03:17:21.222506 kernel: scsi host2: ahci May 27 03:17:21.227513 kernel: scsi host3: ahci May 27 03:17:21.228523 kernel: scsi host4: ahci May 27 03:17:21.230659 kernel: scsi host5: ahci May 27 03:17:21.230840 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 27 03:17:21.230853 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 27 03:17:21.235596 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 27 03:17:21.235656 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 27 03:17:21.235667 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 27 03:17:21.235678 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 27 03:17:21.237534 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 03:17:21.240697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:17:21.260165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:17:21.269262 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 03:17:21.307839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 03:17:21.308216 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 03:17:21.309792 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:17:21.546527 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 03:17:21.546612 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 03:17:21.547525 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 03:17:21.548520 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 03:17:21.549533 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 03:17:21.549624 kernel: ata3.00: applying bridge limits May 27 03:17:21.550528 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 03:17:21.551516 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 03:17:21.552523 kernel: ata3.00: configured for UDMA/100 May 27 03:17:21.554532 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 03:17:21.601520 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 03:17:21.601798 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 03:17:21.627763 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 03:17:21.838237 disk-uuid[636]: Primary Header is updated. May 27 03:17:21.838237 disk-uuid[636]: Secondary Entries is updated. May 27 03:17:21.838237 disk-uuid[636]: Secondary Header is updated. May 27 03:17:21.874513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:17:21.878511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:17:21.968750 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:17:21.995761 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:17:22.000806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:17:22.001323 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
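The "Found device dev-disk-by\x2dlabel-..." messages above fire once udev has created the corresponding symlinks under /dev/disk/by-label. Listing that directory shows the same label-to-device mapping directly; a small sketch:

```python
import pathlib

# Each entry is a symlink such as ROOT -> ../../vda9; resolve() follows it.
by_label = pathlib.Path("/dev/disk/by-label")
for link in sorted(by_label.iterdir()):
    print(f"{link.name:12s} -> {link.resolve()}")
```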
May 27 03:17:22.002842 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:17:22.036623 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:17:22.914518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:17:22.914941 disk-uuid[640]: The operation has completed successfully. May 27 03:17:22.946856 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:17:22.946999 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:17:23.002228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:17:23.029758 sh[665]: Success May 27 03:17:23.067544 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:17:23.067626 kernel: device-mapper: uevent: version 1.0.3 May 27 03:17:23.067645 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:17:23.080521 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 03:17:23.118448 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:17:23.122095 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:17:23.143300 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 03:17:23.150116 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:17:23.150151 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (677) May 27 03:17:23.150567 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:17:23.152558 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:17:23.152588 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:17:23.158535 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:17:23.159430 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:17:23.160968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:17:23.165319 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:17:23.166592 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:17:23.242188 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (708) May 27 03:17:23.242258 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:17:23.242271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:17:23.243823 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:17:23.253547 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:17:23.254382 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:17:23.257948 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 03:17:23.352153 ignition[751]: Ignition 2.21.0 May 27 03:17:23.352175 ignition[751]: Stage: fetch-offline May 27 03:17:23.362981 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 27 03:17:23.352289 ignition[751]: no configs at "/usr/lib/ignition/base.d" May 27 03:17:23.370595 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:17:23.352316 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:17:23.352506 ignition[751]: parsed url from cmdline: "" May 27 03:17:23.352515 ignition[751]: no config URL provided May 27 03:17:23.352522 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:17:23.352535 ignition[751]: no config at "/usr/lib/ignition/user.ign" May 27 03:17:23.352578 ignition[751]: op(1): [started] loading QEMU firmware config module May 27 03:17:23.369653 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 03:17:23.384593 ignition[751]: op(1): [finished] loading QEMU firmware config module May 27 03:17:23.426600 systemd-networkd[855]: lo: Link UP May 27 03:17:23.426677 ignition[751]: parsing config with SHA512: 21282ff9ac065e5ba6599744678f47f8ed90cf78f8c6ca445bd20c56257b07fac31594907e5333ec0c262289f98b5fd0884801bb6bad5c6dd9203cab18326c70 May 27 03:17:23.426615 systemd-networkd[855]: lo: Gained carrier May 27 03:17:23.428412 systemd-networkd[855]: Enumeration completed May 27 03:17:23.428605 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:17:23.428826 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:17:23.433763 ignition[751]: fetch-offline: fetch-offline passed May 27 03:17:23.428831 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:17:23.433916 ignition[751]: Ignition finished successfully May 27 03:17:23.429721 systemd-networkd[855]: eth0: Link UP May 27 03:17:23.429725 systemd-networkd[855]: eth0: Gained carrier May 27 03:17:23.429733 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:17:23.431280 systemd[1]: Reached target network.target - Network. May 27 03:17:23.433182 unknown[751]: fetched base config from "system" May 27 03:17:23.433191 unknown[751]: fetched user config from "qemu" May 27 03:17:23.437782 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:17:23.438483 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:17:23.439465 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:17:23.443283 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:17:23.476615 ignition[861]: Ignition 2.21.0 May 27 03:17:23.476630 ignition[861]: Stage: kargs May 27 03:17:23.476803 ignition[861]: no configs at "/usr/lib/ignition/base.d" May 27 03:17:23.476817 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:17:23.481549 ignition[861]: kargs: kargs passed May 27 03:17:23.481617 ignition[861]: Ignition finished successfully May 27 03:17:23.486848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:17:23.489102 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 27 03:17:23.522399 ignition[870]: Ignition 2.21.0 May 27 03:17:23.522415 ignition[870]: Stage: disks May 27 03:17:23.522596 ignition[870]: no configs at "/usr/lib/ignition/base.d" May 27 03:17:23.522611 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:17:23.524741 ignition[870]: disks: disks passed May 27 03:17:23.525609 ignition[870]: Ignition finished successfully May 27 03:17:23.529944 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:17:23.530917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:17:23.532375 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:17:23.532888 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:17:23.533254 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:17:23.533775 systemd[1]: Reached target basic.target - Basic System. May 27 03:17:23.536813 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 03:17:23.572670 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:17:23.924202 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:17:23.926330 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:17:24.063543 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:17:24.064586 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:17:24.067027 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:17:24.070897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:17:24.106021 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:17:24.121824 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:17:24.124434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:17:24.131175 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (888) May 27 03:17:24.131209 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:17:24.131225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:17:24.131240 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:17:24.124521 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:17:24.135686 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:17:24.137743 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:17:24.141955 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:17:24.204743 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:17:24.219277 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory May 27 03:17:24.223812 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:17:24.229179 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:17:24.335715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:17:24.352919 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
May 27 03:17:24.354864 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:17:24.378407 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:17:24.383923 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:17:24.395854 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:17:24.415880 ignition[1003]: INFO : Ignition 2.21.0 May 27 03:17:24.415880 ignition[1003]: INFO : Stage: mount May 27 03:17:24.415880 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:17:24.415880 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:17:24.424880 ignition[1003]: INFO : mount: mount passed May 27 03:17:24.424880 ignition[1003]: INFO : Ignition finished successfully May 27 03:17:24.419340 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:17:24.421630 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:17:24.448038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:17:24.479513 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1015) May 27 03:17:24.481855 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:17:24.481884 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:17:24.481896 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:17:24.486060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:17:24.521222 ignition[1032]: INFO : Ignition 2.21.0 May 27 03:17:24.521222 ignition[1032]: INFO : Stage: files May 27 03:17:24.526431 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:17:24.526431 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:17:24.526431 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping May 27 03:17:24.526431 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:17:24.526431 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:17:24.534524 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:17:24.534524 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:17:24.534524 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:17:24.534524 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:17:24.534524 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 03:17:24.529978 unknown[1032]: wrote ssh authorized keys file for user: core May 27 03:17:24.572015 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:17:24.710681 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:17:24.710681 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:17:24.714885 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 03:17:25.062912 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 03:17:25.079706 systemd-networkd[855]: eth0: Gained IPv6LL May 27 03:17:25.159308 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:17:25.159308 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:17:25.163329 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:17:25.291521 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:17:25.293748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:17:25.293748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:17:25.408365 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:17:25.408365 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:17:25.413681 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 03:17:26.097355 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 03:17:26.659079 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:17:26.659079 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 03:17:26.846116 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:17:27.387740 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 
03:17:27.387740 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 03:17:27.387740 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 03:17:27.387740 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:17:27.395268 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:17:27.395268 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 03:17:27.395268 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 27 03:17:27.429641 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:17:27.439167 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:17:27.441267 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 27 03:17:27.441267 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 27 03:17:27.441267 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:17:27.441267 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:17:27.441267 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:17:27.441267 ignition[1032]: INFO : files: files passed May 27 03:17:27.441267 ignition[1032]: INFO : Ignition finished successfully May 27 03:17:27.445478 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:17:27.449554 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:17:27.452706 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:17:27.473919 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:17:27.474103 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 03:17:27.477667 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory May 27 03:17:27.483898 initrd-setup-root-after-ignition[1063]: grep: May 27 03:17:27.485305 initrd-setup-root-after-ignition[1067]: grep: May 27 03:17:27.486084 initrd-setup-root-after-ignition[1063]: /sysroot/etc/flatcar/enabled-sysext.conf May 27 03:17:27.485889 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:17:27.489895 initrd-setup-root-after-ignition[1067]: /sysroot/etc/flatcar/enabled-sysext.conf May 27 03:17:27.489895 initrd-setup-root-after-ignition[1063]: : No such file or directory May 27 03:17:27.489354 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:17:27.497330 initrd-setup-root-after-ignition[1067]: : No such file or directory May 27 03:17:27.491212 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
May 27 03:17:27.500872 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:17:27.560978 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:17:27.561151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:17:27.573146 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:17:27.575786 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:17:27.578135 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:17:27.579289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:17:27.614689 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:17:27.617476 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:17:27.645583 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:17:27.646311 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:17:27.647006 systemd[1]: Stopped target timers.target - Timer Units. May 27 03:17:27.647327 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 03:17:27.647479 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:17:27.652864 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:17:27.653214 systemd[1]: Stopped target basic.target - Basic System. May 27 03:17:27.653587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:17:27.654153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:17:27.654449 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 03:17:27.654969 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 03:17:27.655340 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:17:27.655828 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:17:27.658320 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:17:27.675740 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:17:27.676404 systemd[1]: Stopped target swap.target - Swaps. May 27 03:17:27.682281 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:17:27.682509 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 03:17:27.691602 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:17:27.692201 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:17:27.699145 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:17:27.700428 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:17:27.703791 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:17:27.703996 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 03:17:27.707385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:17:27.707537 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:17:27.708330 systemd[1]: Stopped target paths.target - Path Units. 
May 27 03:17:27.708819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:17:27.712622 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:17:27.713910 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:17:27.716042 systemd[1]: Stopped target sockets.target - Socket Units. May 27 03:17:27.718910 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:17:27.719022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:17:27.720880 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:17:27.721002 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:17:27.722550 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:17:27.722706 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:17:27.724294 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:17:27.724425 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:17:27.730532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:17:27.731070 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:17:27.731217 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:17:27.734925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:17:27.736403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:17:27.736630 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:17:27.738430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:17:27.738596 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:17:27.750909 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:17:27.752730 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 03:17:27.780458 ignition[1087]: INFO : Ignition 2.21.0 May 27 03:17:27.780458 ignition[1087]: INFO : Stage: umount May 27 03:17:27.780458 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:17:27.780458 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:17:27.779861 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 03:17:27.789616 ignition[1087]: INFO : umount: umount passed May 27 03:17:27.789616 ignition[1087]: INFO : Ignition finished successfully May 27 03:17:27.785380 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:17:27.785603 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 03:17:27.787836 systemd[1]: Stopped target network.target - Network. May 27 03:17:27.790352 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:17:27.790449 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:17:27.791017 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 03:17:27.791081 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:17:27.791408 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:17:27.791556 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:17:27.792036 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:17:27.792103 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
May 27 03:17:27.792652 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 03:17:27.793124 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:17:27.805548 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:17:27.805704 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:17:27.806316 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:17:27.806434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:17:27.813126 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 03:17:27.813618 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:17:27.813785 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:17:27.842120 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:17:27.843547 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:17:27.844052 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:17:27.844109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:17:27.844356 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:17:27.844413 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:17:27.845970 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:17:27.850711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:17:27.850788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:17:27.851091 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:17:27.851136 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:17:27.866930 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:17:27.866996 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 03:17:27.868443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:17:27.868596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:17:27.872443 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:17:27.874325 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:17:27.874394 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:17:27.898219 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:17:27.898385 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:17:27.903344 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:17:27.903560 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:17:27.916994 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:17:27.917065 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 03:17:27.919083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:17:27.919135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:17:27.919451 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:17:27.919543 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 27 03:17:27.920542 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:17:27.920610 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:17:27.921513 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:17:27.921584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:17:27.936816 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:17:27.937380 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:17:27.937513 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:17:27.943187 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 03:17:27.943296 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:17:27.955575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:17:27.955696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:17:27.960616 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 03:17:27.960697 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 03:17:27.960747 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:17:27.979210 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 03:17:27.979362 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:17:27.980279 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 03:17:27.986026 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:17:28.021330 systemd[1]: Switching root. May 27 03:17:28.067968 systemd-journald[220]: Journal stopped May 27 03:17:29.732395 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 27 03:17:29.733706 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:17:29.733754 kernel: SELinux: policy capability open_perms=1 May 27 03:17:29.733785 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:17:29.733814 kernel: SELinux: policy capability always_check_network=0 May 27 03:17:29.733856 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:17:29.733882 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:17:29.733910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:17:29.733925 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:17:29.733942 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:17:29.733958 systemd[1]: Successfully loaded SELinux policy in 67.132ms. May 27 03:17:29.734017 kernel: audit: type=1403 audit(1748315848.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:17:29.734040 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.242ms. May 27 03:17:29.734065 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:17:29.734085 systemd[1]: Detected virtualization kvm. 
May 27 03:17:29.734108 systemd[1]: Detected architecture x86-64. May 27 03:17:29.734131 systemd[1]: Detected first boot. May 27 03:17:29.734173 systemd[1]: Initializing machine ID from VM UUID. May 27 03:17:29.734195 zram_generator::config[1132]: No configuration found. May 27 03:17:29.734212 kernel: Guest personality initialized and is inactive May 27 03:17:29.734226 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 03:17:29.734240 kernel: Initialized host personality May 27 03:17:29.734254 kernel: NET: Registered PF_VSOCK protocol family May 27 03:17:29.734269 systemd[1]: Populated /etc with preset unit settings. May 27 03:17:29.734286 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:17:29.734300 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:17:29.734318 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:17:29.734332 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:17:29.734347 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:17:29.734363 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:17:29.734380 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:17:29.734403 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:17:29.734418 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 03:17:29.734434 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:17:29.734450 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 03:17:29.734469 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:17:29.734499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:17:29.734516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:17:29.734532 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 03:17:29.734547 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:17:29.734563 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:17:29.734578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:17:29.734597 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 03:17:29.734612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:17:29.734628 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:17:29.734643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 03:17:29.734658 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:17:29.734673 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:17:29.734688 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:17:29.734703 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:17:29.734729 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:17:29.734748 systemd[1]: Reached target slices.target - Slice Units. 
May 27 03:17:29.734764 systemd[1]: Reached target swap.target - Swaps. May 27 03:17:29.734779 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 03:17:29.734793 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:17:29.734808 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:17:29.734824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:17:29.734838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:17:29.734853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:17:29.734868 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:17:29.734884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:17:29.734902 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:17:29.734917 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:17:29.734932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:29.734947 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:17:29.734963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:17:29.734977 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:17:29.734993 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:17:29.735009 systemd[1]: Reached target machines.target - Containers. May 27 03:17:29.735027 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 03:17:29.735043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:17:29.735058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:17:29.735074 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 03:17:29.735089 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:17:29.735104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:17:29.735120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:17:29.735135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 03:17:29.735150 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:17:29.735168 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 03:17:29.735183 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:17:29.735197 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:17:29.735212 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:17:29.735226 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:17:29.735241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 27 03:17:29.735256 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:17:29.735270 kernel: loop: module loaded May 27 03:17:29.735288 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:17:29.735302 kernel: fuse: init (API version 7.41) May 27 03:17:29.735316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:17:29.735331 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:17:29.735346 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:17:29.735365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:17:29.735380 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:17:29.735394 systemd[1]: Stopped verity-setup.service. May 27 03:17:29.735409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:29.735423 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:17:29.735438 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:17:29.735453 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:17:29.735470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:17:29.735997 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:17:29.736025 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:17:29.736042 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:17:29.736105 systemd-journald[1196]: Collecting audit messages is disabled. May 27 03:17:29.736137 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:17:29.736158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:17:29.736174 kernel: ACPI: bus type drm_connector registered May 27 03:17:29.736190 systemd-journald[1196]: Journal started May 27 03:17:29.736219 systemd-journald[1196]: Runtime Journal (/run/log/journal/20ea784a7750425a80a68979693e062e) is 6M, max 48.5M, 42.4M free. May 27 03:17:29.400732 systemd[1]: Queued start job for default target multi-user.target. May 27 03:17:29.422856 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 03:17:29.423839 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 03:17:29.740919 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:17:29.742814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:17:29.743184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:17:29.745195 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:17:29.745517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:17:29.748109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:17:29.748437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:17:29.750748 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:17:29.751149 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:17:29.753110 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 27 03:17:29.753417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:17:29.755937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:17:29.758474 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:17:29.765952 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:17:29.767870 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 03:17:29.792853 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:17:29.796047 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:17:29.798775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:17:29.800272 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:17:29.800400 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:17:29.803193 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:17:29.811626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 03:17:29.813063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:17:29.814979 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:17:29.817782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:17:29.819591 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:17:29.821952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:17:29.823434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:17:29.826682 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:17:29.837260 systemd-journald[1196]: Time spent on flushing to /var/log/journal/20ea784a7750425a80a68979693e062e is 15.914ms for 1068 entries. May 27 03:17:29.837260 systemd-journald[1196]: System Journal (/var/log/journal/20ea784a7750425a80a68979693e062e) is 8M, max 195.6M, 187.6M free. May 27 03:17:30.148738 systemd-journald[1196]: Received client request to flush runtime journal. May 27 03:17:30.148825 kernel: loop0: detected capacity change from 0 to 229808 May 27 03:17:30.148856 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:17:30.148876 kernel: loop1: detected capacity change from 0 to 146240 May 27 03:17:30.148897 kernel: loop2: detected capacity change from 0 to 113872 May 27 03:17:29.840058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 03:17:29.843627 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:17:29.845611 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:17:29.847563 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 03:17:29.849933 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 03:17:29.855566 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 27 03:17:29.892865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:17:30.055385 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:17:30.058475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:17:30.117692 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. May 27 03:17:30.117714 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. May 27 03:17:30.124432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:17:30.140072 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 03:17:30.142663 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 03:17:30.146695 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 03:17:30.160887 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:17:30.171525 kernel: loop3: detected capacity change from 0 to 229808 May 27 03:17:30.196539 kernel: loop4: detected capacity change from 0 to 146240 May 27 03:17:30.204123 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 03:17:30.219828 kernel: loop5: detected capacity change from 0 to 113872 May 27 03:17:30.236192 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 03:17:30.236981 (sd-merge)[1272]: Merged extensions into '/usr'. May 27 03:17:30.262611 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:17:30.262644 systemd[1]: Reloading... May 27 03:17:30.370529 zram_generator::config[1302]: No configuration found. May 27 03:17:30.513056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:17:30.621026 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 03:17:30.621300 systemd[1]: Reloading finished in 357 ms. May 27 03:17:30.686719 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 03:17:30.690749 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 03:17:30.702072 systemd[1]: Starting ensure-sysext.service... May 27 03:17:30.705763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:17:30.722080 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 03:17:30.724395 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... May 27 03:17:30.724419 systemd[1]: Reloading... May 27 03:17:30.746414 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:17:30.746471 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 03:17:30.747007 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:17:30.747350 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
May 27 03:17:30.748610 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:17:30.748965 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. May 27 03:17:30.749039 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. May 27 03:17:30.756090 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:17:30.756105 systemd-tmpfiles[1336]: Skipping /boot May 27 03:17:30.784227 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:17:30.785852 systemd-tmpfiles[1336]: Skipping /boot May 27 03:17:30.835572 zram_generator::config[1364]: No configuration found. May 27 03:17:30.962301 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:17:31.077372 systemd[1]: Reloading finished in 352 ms. May 27 03:17:31.090262 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:17:31.121565 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:17:31.132329 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:17:31.137340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:17:31.144294 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:17:31.149252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:17:31.153213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:17:31.160815 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:17:31.167830 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:31.168110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:17:31.175066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:17:31.184632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:17:31.212574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:17:31.214149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:17:31.214547 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:17:31.221111 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:17:31.222723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:31.225904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:17:31.235212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:17:31.242184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:17:31.248144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 27 03:17:31.248851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:17:31.255085 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:17:31.255416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:17:31.270922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:31.271398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:17:31.274998 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:17:31.279950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:17:31.283898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:17:31.285792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:17:31.285997 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:17:31.288120 augenrules[1439]: No rules May 27 03:17:31.293791 systemd-udevd[1410]: Using default interface naming scheme 'v255'. May 27 03:17:31.294012 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:17:31.295606 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:31.297315 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:17:31.297644 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:17:31.299867 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:17:31.302183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:17:31.304826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:17:31.311082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:17:31.311931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:17:31.316154 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:17:31.316474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:17:31.328542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:31.331824 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:17:31.333609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:17:31.335807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:17:31.343868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:17:31.347996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:17:31.351959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:17:31.352700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 27 03:17:31.352857 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:17:31.353044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:17:31.355068 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 03:17:31.357947 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:17:31.360208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:17:31.360741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:17:31.373082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:17:31.375101 systemd[1]: Finished ensure-sysext.service. May 27 03:17:31.377169 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:17:31.379936 augenrules[1450]: /sbin/augenrules: No change May 27 03:17:31.381104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:17:31.382805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:17:31.385404 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:17:31.385707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:17:31.387812 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:17:31.388156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:17:31.399984 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:17:31.409770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:17:31.409906 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:17:31.413907 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:17:31.415773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:17:31.435378 augenrules[1506]: No rules May 27 03:17:31.436945 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:17:31.440805 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:17:31.481227 systemd-resolved[1407]: Positive Trust Anchors: May 27 03:17:31.481255 systemd-resolved[1407]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:17:31.481290 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:17:31.491243 systemd-resolved[1407]: Defaulting to hostname 'linux'. May 27 03:17:31.494462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:17:31.496840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:17:31.524621 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 03:17:31.601546 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:17:31.622751 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:17:31.626992 systemd-networkd[1502]: lo: Link UP May 27 03:17:31.627012 systemd-networkd[1502]: lo: Gained carrier May 27 03:17:31.629282 systemd-networkd[1502]: Enumeration completed May 27 03:17:31.629740 systemd-networkd[1502]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:17:31.629755 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:17:31.633777 systemd-networkd[1502]: eth0: Link UP May 27 03:17:31.633996 systemd-networkd[1502]: eth0: Gained carrier May 27 03:17:31.634023 systemd-networkd[1502]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:17:31.634514 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 03:17:31.635038 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:17:31.636682 systemd[1]: Reached target network.target - Network. May 27 03:17:31.640147 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:17:31.644525 kernel: ACPI: button: Power Button [PWRF] May 27 03:17:31.644561 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:17:31.656573 systemd-networkd[1502]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:17:31.660284 systemd-timesyncd[1503]: Network configuration changed, trying to establish connection. May 27 03:17:33.311740 systemd-resolved[1407]: Clock change detected. Flushing caches. May 27 03:17:33.311907 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 03:17:33.311962 systemd-timesyncd[1503]: Initial clock synchronization to Tue 2025-05-27 03:17:33.311691 UTC. May 27 03:17:33.312583 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:17:33.314244 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:17:33.316325 systemd[1]: Reached target sysinit.target - System Initialization. 
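Annotation: systemd-networkd notes above that eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name" before taking a DHCPv4 lease. A minimal sketch of pinning that match with an explicit drop-in follows; the file name and the eth0/DHCP settings are illustrative assumptions, not configuration taken from this host.

  # Sketch only: give eth0 its own .network unit so the match no longer
  # depends on the zz-default.network catch-all. File name is arbitrary.
  sudo tee /etc/systemd/network/10-eth0.network >/dev/null <<'EOF'
  [Match]
  Name=eth0

  [Network]
  DHCP=yes
  EOF
  # Pick up the new unit without rebooting.
  sudo networkctl reload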
May 27 03:17:33.317762 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:17:33.319180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:17:33.320685 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:17:33.322200 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:17:33.323810 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:17:33.323858 systemd[1]: Reached target paths.target - Path Units. May 27 03:17:33.325452 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:17:33.327127 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:17:33.329079 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:17:33.331875 systemd[1]: Reached target timers.target - Timer Units. May 27 03:17:33.335601 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:17:33.340480 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:17:33.348366 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:17:33.350747 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:17:33.352663 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:17:33.368277 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 03:17:33.371692 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:17:33.376064 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:17:33.379255 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:17:33.382715 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:17:33.390019 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 27 03:17:33.397072 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:17:33.397478 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:17:33.403499 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:17:33.404962 systemd[1]: Reached target basic.target - Basic System. May 27 03:17:33.406650 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:17:33.406692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:17:33.409740 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:17:33.416571 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:17:33.420626 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:17:33.428111 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:17:33.433415 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 27 03:17:33.434985 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:17:33.439273 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:17:33.443307 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:17:33.446296 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:17:33.449834 jq[1556]: false May 27 03:17:33.454240 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 03:17:33.479284 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:17:33.486776 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing passwd entry cache May 27 03:17:33.467540 oslogin_cache_refresh[1558]: Refreshing passwd entry cache May 27 03:17:33.487409 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:17:33.490194 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:17:33.490910 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:17:33.494226 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:17:33.500050 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:17:33.510016 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:17:33.512306 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:17:33.512720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:17:33.515742 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:17:33.516047 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:17:33.524746 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting users, quitting May 27 03:17:33.524746 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:17:33.524746 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing group entry cache May 27 03:17:33.523903 oslogin_cache_refresh[1558]: Failure getting users, quitting May 27 03:17:33.523932 oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
May 27 03:17:33.524054 oslogin_cache_refresh[1558]: Refreshing group entry cache May 27 03:17:33.529370 extend-filesystems[1557]: Found loop3 May 27 03:17:33.543387 extend-filesystems[1557]: Found loop4 May 27 03:17:33.543387 extend-filesystems[1557]: Found loop5 May 27 03:17:33.543387 extend-filesystems[1557]: Found sr0 May 27 03:17:33.543387 extend-filesystems[1557]: Found vda May 27 03:17:33.543387 extend-filesystems[1557]: Found vda1 May 27 03:17:33.543387 extend-filesystems[1557]: Found vda2 May 27 03:17:33.543387 extend-filesystems[1557]: Found vda3 May 27 03:17:33.543387 extend-filesystems[1557]: Found usr May 27 03:17:33.543387 extend-filesystems[1557]: Found vda4 May 27 03:17:33.543387 extend-filesystems[1557]: Found vda6 May 27 03:17:33.543387 extend-filesystems[1557]: Found vda7 May 27 03:17:33.543387 extend-filesystems[1557]: Found vda9 May 27 03:17:33.543387 extend-filesystems[1557]: Checking size of /dev/vda9 May 27 03:17:33.538375 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:17:33.558677 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting groups, quitting May 27 03:17:33.558677 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:17:33.531723 oslogin_cache_refresh[1558]: Failure getting groups, quitting May 27 03:17:33.558775 update_engine[1567]: I20250527 03:17:33.529399 1567 main.cc:92] Flatcar Update Engine starting May 27 03:17:33.538705 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:17:33.559155 jq[1569]: true May 27 03:17:33.531739 oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:17:33.564218 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:17:33.568324 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:17:33.569846 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:17:33.576620 jq[1587]: true May 27 03:17:33.606353 dbus-daemon[1553]: [system] SELinux support is enabled May 27 03:17:33.627849 update_engine[1567]: I20250527 03:17:33.615891 1567 update_check_scheduler.cc:74] Next update check in 2m25s May 27 03:17:33.606752 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:17:33.630912 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:17:33.630956 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:17:33.632354 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:17:33.632377 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:17:33.634128 tar[1572]: linux-amd64/LICENSE May 27 03:17:33.635403 tar[1572]: linux-amd64/helm May 27 03:17:33.641415 systemd[1]: Started update-engine.service - Update Engine. May 27 03:17:33.646704 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:17:33.699473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
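Annotation: locksmithd starts above with strategy="reboot" and update-engine schedules its next check. On Flatcar the reboot strategy is normally driven by update.conf; the sketch below assumes /etc/flatcar/update.conf is the override path on this image and is intended only to show the mechanism, not a change made on this machine.

  # Sketch: commonly documented values are reboot, etcd-lock, off.
  echo 'REBOOT_STRATEGY=etcd-lock' | sudo tee -a /etc/flatcar/update.conf
  sudo systemctl restart locksmithd.service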
May 27 03:17:33.726896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:17:33.727334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:17:33.731351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:17:33.743082 kernel: kvm_amd: TSC scaling supported May 27 03:17:33.743155 kernel: kvm_amd: Nested Virtualization enabled May 27 03:17:33.743175 kernel: kvm_amd: Nested Paging enabled May 27 03:17:33.744363 kernel: kvm_amd: LBR virtualization supported May 27 03:17:33.744395 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 27 03:17:33.745505 kernel: kvm_amd: Virtual GIF supported May 27 03:17:33.782570 extend-filesystems[1557]: Resized partition /dev/vda9 May 27 03:17:33.884558 extend-filesystems[1622]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:17:33.888582 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:17:33.888605 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:17:33.889001 systemd-logind[1564]: New seat seat0. May 27 03:17:33.891166 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 03:17:33.893703 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:17:33.935010 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:17:33.961154 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:17:33.992914 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:17:34.001059 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 03:17:34.133710 kernel: EDAC MC: Ver: 3.0.0 May 27 03:17:34.133839 bash[1607]: Updated "/home/core/.ssh/authorized_keys" May 27 03:17:34.002776 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:17:34.135060 extend-filesystems[1622]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 03:17:34.135060 extend-filesystems[1622]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 03:17:34.135060 extend-filesystems[1622]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 03:17:34.007052 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:17:34.138823 extend-filesystems[1557]: Resized filesystem in /dev/vda9 May 27 03:17:34.013789 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 03:17:34.029699 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:17:34.030120 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:17:34.050715 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:17:34.066841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:17:34.076717 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:17:34.080172 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:17:34.083773 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:17:34.096363 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:17:34.136825 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:17:34.137210 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
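Annotation: extend-filesystems grew the mounted root ext4 filesystem on /dev/vda9 in place, from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB). A manual equivalent, shown only for orientation and assuming the backing partition has already been enlarged, would look like this:

  # Sketch: on-line grow of a mounted ext4 filesystem.
  lsblk /dev/vda9              # confirm the enlarged partition
  sudo resize2fs /dev/vda9     # grow the filesystem to fill it
  df -h /                      # verify the new capacity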
May 27 03:17:34.150422 containerd[1586]: time="2025-05-27T03:17:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:17:34.151576 containerd[1586]: time="2025-05-27T03:17:34.151515749Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:17:34.171195 containerd[1586]: time="2025-05-27T03:17:34.171040064Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.179µs" May 27 03:17:34.171195 containerd[1586]: time="2025-05-27T03:17:34.171149590Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:17:34.171195 containerd[1586]: time="2025-05-27T03:17:34.171208039Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:17:34.171828 containerd[1586]: time="2025-05-27T03:17:34.171742301Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:17:34.171828 containerd[1586]: time="2025-05-27T03:17:34.171808335Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:17:34.171919 containerd[1586]: time="2025-05-27T03:17:34.171853390Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:17:34.172059 containerd[1586]: time="2025-05-27T03:17:34.172008080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:17:34.172059 containerd[1586]: time="2025-05-27T03:17:34.172050810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:17:34.172563 containerd[1586]: time="2025-05-27T03:17:34.172519509Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:17:34.172563 containerd[1586]: time="2025-05-27T03:17:34.172544947Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:17:34.172563 containerd[1586]: time="2025-05-27T03:17:34.172559073Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:17:34.172674 containerd[1586]: time="2025-05-27T03:17:34.172569653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:17:34.172756 containerd[1586]: time="2025-05-27T03:17:34.172721768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:17:34.173253 containerd[1586]: time="2025-05-27T03:17:34.173201177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:17:34.173310 containerd[1586]: time="2025-05-27T03:17:34.173253505Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:17:34.173310 containerd[1586]: time="2025-05-27T03:17:34.173266129Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:17:34.173398 containerd[1586]: time="2025-05-27T03:17:34.173355747Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:17:34.174031 containerd[1586]: time="2025-05-27T03:17:34.174004504Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:17:34.174162 containerd[1586]: time="2025-05-27T03:17:34.174115983Z" level=info msg="metadata content store policy set" policy=shared May 27 03:17:34.184220 containerd[1586]: time="2025-05-27T03:17:34.184057236Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:17:34.184220 containerd[1586]: time="2025-05-27T03:17:34.184361627Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:17:34.184668 containerd[1586]: time="2025-05-27T03:17:34.184451916Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:17:34.184668 containerd[1586]: time="2025-05-27T03:17:34.184488936Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:17:34.184668 containerd[1586]: time="2025-05-27T03:17:34.184510737Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:17:34.184668 containerd[1586]: time="2025-05-27T03:17:34.184523551Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:17:34.184668 containerd[1586]: time="2025-05-27T03:17:34.184618960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:17:34.184905 containerd[1586]: time="2025-05-27T03:17:34.184679974Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:17:34.184905 containerd[1586]: time="2025-05-27T03:17:34.184710932Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:17:34.184905 containerd[1586]: time="2025-05-27T03:17:34.184737512Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:17:34.184905 containerd[1586]: time="2025-05-27T03:17:34.184754444Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:17:34.184905 containerd[1586]: time="2025-05-27T03:17:34.184843050Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:17:34.186032 containerd[1586]: time="2025-05-27T03:17:34.185920992Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:17:34.186032 containerd[1586]: time="2025-05-27T03:17:34.186028122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:17:34.186244 containerd[1586]: time="2025-05-27T03:17:34.186205756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:17:34.186324 containerd[1586]: time="2025-05-27T03:17:34.186260218Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:17:34.186324 containerd[1586]: time="2025-05-27T03:17:34.186307246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:17:34.186467 containerd[1586]: time="2025-05-27T03:17:34.186353593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:17:34.186467 containerd[1586]: time="2025-05-27T03:17:34.186401863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:17:34.186467 containerd[1586]: time="2025-05-27T03:17:34.186420168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:17:34.186467 containerd[1586]: time="2025-05-27T03:17:34.186441257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:17:34.186467 containerd[1586]: time="2025-05-27T03:17:34.186460974Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:17:34.186467 containerd[1586]: time="2025-05-27T03:17:34.186474179Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:17:34.186751 containerd[1586]: time="2025-05-27T03:17:34.186678602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:17:34.186751 containerd[1586]: time="2025-05-27T03:17:34.186743043Z" level=info msg="Start snapshots syncer" May 27 03:17:34.186994 containerd[1586]: time="2025-05-27T03:17:34.186923111Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:17:34.187681 containerd[1586]: time="2025-05-27T03:17:34.187544466Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:17:34.187913 containerd[1586]: time="2025-05-27T03:17:34.187710417Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:17:34.188062 containerd[1586]: time="2025-05-27T03:17:34.188000261Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:17:34.188283 containerd[1586]: time="2025-05-27T03:17:34.188229080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:17:34.188336 containerd[1586]: time="2025-05-27T03:17:34.188285195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:17:34.188336 containerd[1586]: time="2025-05-27T03:17:34.188307738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:17:34.188417 containerd[1586]: time="2025-05-27T03:17:34.188359765Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:17:34.188446 containerd[1586]: time="2025-05-27T03:17:34.188426741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:17:34.188472 containerd[1586]: time="2025-05-27T03:17:34.188448311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:17:34.188472 containerd[1586]: time="2025-05-27T03:17:34.188462578Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:17:34.188680 containerd[1586]: time="2025-05-27T03:17:34.188542838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:17:34.188680 containerd[1586]: 
time="2025-05-27T03:17:34.188566393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:17:34.188680 containerd[1586]: time="2025-05-27T03:17:34.188640061Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:17:34.188776 containerd[1586]: time="2025-05-27T03:17:34.188700494Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:17:34.188805 containerd[1586]: time="2025-05-27T03:17:34.188772189Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:17:34.188805 containerd[1586]: time="2025-05-27T03:17:34.188789461Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:17:34.188805 containerd[1586]: time="2025-05-27T03:17:34.188802435Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:17:34.188961 containerd[1586]: time="2025-05-27T03:17:34.188814498Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:17:34.188961 containerd[1586]: time="2025-05-27T03:17:34.188838483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:17:34.188961 containerd[1586]: time="2025-05-27T03:17:34.188868910Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:17:34.188961 containerd[1586]: time="2025-05-27T03:17:34.188946105Z" level=info msg="runtime interface created" May 27 03:17:34.188961 containerd[1586]: time="2025-05-27T03:17:34.188954941Z" level=info msg="created NRI interface" May 27 03:17:34.191330 containerd[1586]: time="2025-05-27T03:17:34.191233615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:17:34.191330 containerd[1586]: time="2025-05-27T03:17:34.191308655Z" level=info msg="Connect containerd service" May 27 03:17:34.191557 containerd[1586]: time="2025-05-27T03:17:34.191347318Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:17:34.193306 containerd[1586]: time="2025-05-27T03:17:34.193219900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:17:34.409276 systemd-networkd[1502]: eth0: Gained IPv6LL May 27 03:17:34.411324 containerd[1586]: time="2025-05-27T03:17:34.411272316Z" level=info msg="Start subscribing containerd event" May 27 03:17:34.411547 containerd[1586]: time="2025-05-27T03:17:34.411373055Z" level=info msg="Start recovering state" May 27 03:17:34.411618 containerd[1586]: time="2025-05-27T03:17:34.411590573Z" level=info msg="Start event monitor" May 27 03:17:34.411683 containerd[1586]: time="2025-05-27T03:17:34.411616742Z" level=info msg="Start cni network conf syncer for default" May 27 03:17:34.411683 containerd[1586]: time="2025-05-27T03:17:34.411657107Z" level=info msg="Start streaming server" May 27 03:17:34.411683 containerd[1586]: time="2025-05-27T03:17:34.411670442Z" level=info msg="Registered namespace \"k8s.io\" with NRI" 
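Annotation: the CRI plugin error above ("no network config found in /etc/cni/net.d") is expected on a node where no Kubernetes network add-on has been installed yet; the add-on normally drops its own conflist into that directory. Purely to illustrate the file format containerd is looking for, a minimal bridge/portmap conflist might look like the sketch below; the name, subnet and plugin choice are assumptions, not this cluster's actual pod network.

  # Sketch only: hand-written CNI config of the kind a network add-on
  # would normally install.
  sudo tee /etc/cni/net.d/10-example.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [[{ "subnet": "10.88.0.0/16" }]],
          "routes": [{ "dst": "0.0.0.0/0" }]
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF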
May 27 03:17:34.411683 containerd[1586]: time="2025-05-27T03:17:34.411680130Z" level=info msg="runtime interface starting up..." May 27 03:17:34.411683 containerd[1586]: time="2025-05-27T03:17:34.411687805Z" level=info msg="starting plugins..." May 27 03:17:34.411873 containerd[1586]: time="2025-05-27T03:17:34.411708674Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:17:34.411962 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:17:34.413340 containerd[1586]: time="2025-05-27T03:17:34.412237836Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:17:34.413340 containerd[1586]: time="2025-05-27T03:17:34.412297819Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:17:34.413340 containerd[1586]: time="2025-05-27T03:17:34.412359525Z" level=info msg="containerd successfully booted in 0.262716s" May 27 03:17:34.413721 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:17:34.416171 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:17:34.419146 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:17:34.423177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:17:34.426214 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:17:34.484768 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:17:34.515347 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 03:17:34.515662 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 03:17:34.517579 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:17:34.653145 tar[1572]: linux-amd64/README.md May 27 03:17:34.679767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:17:35.833595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:17:35.847704 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:17:35.853411 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:17:35.855201 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:17:35.858956 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:44686.service - OpenSSH per-connection server daemon (10.0.0.1:44686). May 27 03:17:35.861753 systemd[1]: Startup finished in 4.006s (kernel) + 9.035s (initrd) + 5.522s (userspace) = 18.565s. May 27 03:17:35.933273 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 44686 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:35.936254 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:35.950148 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:17:35.951855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:17:35.960260 systemd-logind[1564]: New session 1 of user core. May 27 03:17:35.981480 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:17:35.985603 systemd[1]: Starting user@500.service - User Manager for UID 500... 
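Annotation: once containerd reports "successfully booted" and the unit is started, the sockets it prints a few lines above can be probed directly. A hedged sketch of a quick health check; crictl is assumed to be installed on the image, which may not be the case here.

  # Sketch: confirm containerd answers on its native and CRI endpoints.
  sudo ctr version
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version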
May 27 03:17:36.007136 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:17:36.010607 systemd-logind[1564]: New session c1 of user core. May 27 03:17:36.201757 systemd[1711]: Queued start job for default target default.target. May 27 03:17:36.210310 systemd[1711]: Created slice app.slice - User Application Slice. May 27 03:17:36.210344 systemd[1711]: Reached target paths.target - Paths. May 27 03:17:36.210401 systemd[1711]: Reached target timers.target - Timers. May 27 03:17:36.212095 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:17:36.224060 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:17:36.224247 systemd[1711]: Reached target sockets.target - Sockets. May 27 03:17:36.224307 systemd[1711]: Reached target basic.target - Basic System. May 27 03:17:36.224367 systemd[1711]: Reached target default.target - Main User Target. May 27 03:17:36.224458 systemd[1711]: Startup finished in 203ms. May 27 03:17:36.224771 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:17:36.226888 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:17:36.296805 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:44700.service - OpenSSH per-connection server daemon (10.0.0.1:44700). May 27 03:17:36.355581 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 44700 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:36.357651 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:36.362932 systemd-logind[1564]: New session 2 of user core. May 27 03:17:36.374201 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:17:36.416891 kubelet[1695]: E0527 03:17:36.416806 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:17:36.421236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:17:36.421429 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:17:36.421887 systemd[1]: kubelet.service: Consumed 1.641s CPU time, 267.5M memory peak. May 27 03:17:36.434060 sshd[1724]: Connection closed by 10.0.0.1 port 44700 May 27 03:17:36.434420 sshd-session[1722]: pam_unix(sshd:session): session closed for user core May 27 03:17:36.446514 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:44700.service: Deactivated successfully. May 27 03:17:36.448537 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:17:36.449364 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. May 27 03:17:36.452423 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:44710.service - OpenSSH per-connection server daemon (10.0.0.1:44710). May 27 03:17:36.453618 systemd-logind[1564]: Removed session 2. May 27 03:17:36.510001 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 44710 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:36.511582 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:36.515874 systemd-logind[1564]: New session 3 of user core. May 27 03:17:36.527198 systemd[1]: Started session-3.scope - Session 3 of User core. 
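Annotation: the kubelet exit above (repeated at every later restart) is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during "kubeadm init" or "kubeadm join", so the unit crash-loops until that happens. For orientation only, the file kubeadm generates starts roughly like the sketch below; the cgroupDriver value is an assumption chosen to match the SystemdCgroup=true runc option visible in the containerd config dump earlier, and the DNS address is the kubeadm default, not necessarily this cluster's.

  # Sketch of the shape of /var/lib/kubelet/config.yaml; normally created
  # by kubeadm, not written by hand.
  cat <<'EOF'
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  clusterDNS:
    - 10.96.0.10        # assumed kubeadm default service DNS address
  clusterDomain: cluster.local
  EOF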
May 27 03:17:36.578575 sshd[1733]: Connection closed by 10.0.0.1 port 44710 May 27 03:17:36.579010 sshd-session[1731]: pam_unix(sshd:session): session closed for user core May 27 03:17:36.596540 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:44710.service: Deactivated successfully. May 27 03:17:36.599393 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:17:36.600394 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. May 27 03:17:36.604726 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:44718.service - OpenSSH per-connection server daemon (10.0.0.1:44718). May 27 03:17:36.605558 systemd-logind[1564]: Removed session 3. May 27 03:17:36.658895 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 44718 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:36.660782 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:36.666151 systemd-logind[1564]: New session 4 of user core. May 27 03:17:36.676133 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:17:36.733122 sshd[1741]: Connection closed by 10.0.0.1 port 44718 May 27 03:17:36.733368 sshd-session[1739]: pam_unix(sshd:session): session closed for user core May 27 03:17:36.751637 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:44718.service: Deactivated successfully. May 27 03:17:36.753902 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:17:36.755185 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. May 27 03:17:36.758945 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:44722.service - OpenSSH per-connection server daemon (10.0.0.1:44722). May 27 03:17:36.760408 systemd-logind[1564]: Removed session 4. May 27 03:17:36.815271 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 44722 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:36.817370 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:36.822548 systemd-logind[1564]: New session 5 of user core. May 27 03:17:36.837158 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 03:17:36.902702 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:17:36.903129 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:17:36.917780 sudo[1750]: pam_unix(sudo:session): session closed for user root May 27 03:17:36.919385 sshd[1749]: Connection closed by 10.0.0.1 port 44722 May 27 03:17:36.919729 sshd-session[1747]: pam_unix(sshd:session): session closed for user core May 27 03:17:36.940227 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:44722.service: Deactivated successfully. May 27 03:17:36.942194 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:17:36.942906 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. May 27 03:17:36.945819 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:44724.service - OpenSSH per-connection server daemon (10.0.0.1:44724). May 27 03:17:36.946619 systemd-logind[1564]: Removed session 5. May 27 03:17:37.006697 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 44724 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:37.008552 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:37.014774 systemd-logind[1564]: New session 6 of user core. 
May 27 03:17:37.024196 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 03:17:37.082715 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:17:37.083111 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:17:37.164960 sudo[1761]: pam_unix(sudo:session): session closed for user root May 27 03:17:37.172472 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:17:37.172796 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:17:37.184105 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:17:37.245364 augenrules[1783]: No rules May 27 03:17:37.246701 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:17:37.247104 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:17:37.248735 sudo[1760]: pam_unix(sudo:session): session closed for user root May 27 03:17:37.250927 sshd[1759]: Connection closed by 10.0.0.1 port 44724 May 27 03:17:37.251675 sshd-session[1756]: pam_unix(sshd:session): session closed for user core May 27 03:17:37.261625 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:44724.service: Deactivated successfully. May 27 03:17:37.263881 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:17:37.264723 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. May 27 03:17:37.268198 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:44736.service - OpenSSH per-connection server daemon (10.0.0.1:44736). May 27 03:17:37.269161 systemd-logind[1564]: Removed session 6. May 27 03:17:37.327384 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 44736 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:37.330287 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:37.337757 systemd-logind[1564]: New session 7 of user core. May 27 03:17:37.347340 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:17:37.405596 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:17:37.406040 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:17:38.240995 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:17:38.256458 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:17:38.748459 dockerd[1815]: time="2025-05-27T03:17:38.748380462Z" level=info msg="Starting up" May 27 03:17:38.749440 dockerd[1815]: time="2025-05-27T03:17:38.749392880Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:17:39.896781 dockerd[1815]: time="2025-05-27T03:17:39.896706553Z" level=info msg="Loading containers: start." May 27 03:17:39.949067 kernel: Initializing XFRM netlink socket May 27 03:17:40.464286 systemd-networkd[1502]: docker0: Link UP May 27 03:17:40.502495 dockerd[1815]: time="2025-05-27T03:17:40.502396949Z" level=info msg="Loading containers: done." 
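Annotation: the sudo commands above remove the shipped rules from /etc/audit/rules.d and restart audit-rules, after which augenrules reports "No rules". As a sketch of how rules would be reinstated through the same mechanism (file name and watch paths are illustrative):

  # Sketch: augenrules concatenates /etc/audit/rules.d/*.rules; add a file
  # and reload, mirroring the systemctl restart seen in the log.
  sudo tee /etc/audit/rules.d/90-example.rules >/dev/null <<'EOF'
  -w /etc/ssh/sshd_config -p wa -k sshd_config
  -w /etc/passwd -p wa -k identity
  EOF
  sudo systemctl restart audit-rules.service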
May 27 03:17:40.550544 dockerd[1815]: time="2025-05-27T03:17:40.550443182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:17:40.550544 dockerd[1815]: time="2025-05-27T03:17:40.550561154Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:17:40.550962 dockerd[1815]: time="2025-05-27T03:17:40.550757292Z" level=info msg="Initializing buildkit" May 27 03:17:40.792690 dockerd[1815]: time="2025-05-27T03:17:40.792464599Z" level=info msg="Completed buildkit initialization" May 27 03:17:40.798067 dockerd[1815]: time="2025-05-27T03:17:40.798018575Z" level=info msg="Daemon has completed initialization" May 27 03:17:40.798183 dockerd[1815]: time="2025-05-27T03:17:40.798101490Z" level=info msg="API listen on /run/docker.sock" May 27 03:17:40.798419 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:17:41.909828 containerd[1586]: time="2025-05-27T03:17:41.909780495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 03:17:44.169987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169600938.mount: Deactivated successfully. May 27 03:17:46.672132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:17:46.674333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:17:47.006640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:17:47.022307 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:17:47.236740 kubelet[2074]: E0527 03:17:47.236641 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:17:47.244332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:17:47.244595 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:17:47.245111 systemd[1]: kubelet.service: Consumed 264ms CPU time, 111.5M memory peak. 
May 27 03:17:47.880791 containerd[1586]: time="2025-05-27T03:17:47.880701303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:47.883853 containerd[1586]: time="2025-05-27T03:17:47.883773354Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 03:17:47.887087 containerd[1586]: time="2025-05-27T03:17:47.887041193Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:47.891291 containerd[1586]: time="2025-05-27T03:17:47.891242101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:47.892353 containerd[1586]: time="2025-05-27T03:17:47.892284335Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 5.982448666s" May 27 03:17:47.892353 containerd[1586]: time="2025-05-27T03:17:47.892343386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 03:17:47.893033 containerd[1586]: time="2025-05-27T03:17:47.893002572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 03:17:50.498570 containerd[1586]: time="2025-05-27T03:17:50.498477888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:50.564952 containerd[1586]: time="2025-05-27T03:17:50.564846994Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 03:17:50.652943 containerd[1586]: time="2025-05-27T03:17:50.652857626Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:50.734503 containerd[1586]: time="2025-05-27T03:17:50.734405257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:50.735778 containerd[1586]: time="2025-05-27T03:17:50.735704804Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 2.842672676s" May 27 03:17:50.735778 containerd[1586]: time="2025-05-27T03:17:50.735768724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 03:17:50.736507 
containerd[1586]: time="2025-05-27T03:17:50.736394507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 03:17:52.342285 containerd[1586]: time="2025-05-27T03:17:52.342201178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:52.343935 containerd[1586]: time="2025-05-27T03:17:52.343822408Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 03:17:52.346009 containerd[1586]: time="2025-05-27T03:17:52.345860320Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:52.351006 containerd[1586]: time="2025-05-27T03:17:52.350869724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:52.352064 containerd[1586]: time="2025-05-27T03:17:52.352019431Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.615572996s" May 27 03:17:52.352064 containerd[1586]: time="2025-05-27T03:17:52.352060558Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 03:17:52.352896 containerd[1586]: time="2025-05-27T03:17:52.352845720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 03:17:57.306187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154982735.mount: Deactivated successfully. May 27 03:17:57.307539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 03:17:57.309744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:17:57.611461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:17:57.636476 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:17:57.724918 kubelet[2122]: E0527 03:17:57.724851 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:17:57.730068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:17:57.730316 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:17:57.730836 systemd[1]: kubelet.service: Consumed 319ms CPU time, 111M memory peak. 
May 27 03:18:00.267810 containerd[1586]: time="2025-05-27T03:18:00.267720596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:00.299924 containerd[1586]: time="2025-05-27T03:18:00.299867372Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 03:18:00.315019 containerd[1586]: time="2025-05-27T03:18:00.314951390Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:00.330232 containerd[1586]: time="2025-05-27T03:18:00.330171503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:00.330944 containerd[1586]: time="2025-05-27T03:18:00.330884420Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 7.977999417s" May 27 03:18:00.330944 containerd[1586]: time="2025-05-27T03:18:00.330929685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 03:18:00.331582 containerd[1586]: time="2025-05-27T03:18:00.331546753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 03:18:02.405739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669955139.mount: Deactivated successfully. 
May 27 03:18:06.896807 containerd[1586]: time="2025-05-27T03:18:06.896690478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:06.898908 containerd[1586]: time="2025-05-27T03:18:06.898850121Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 03:18:06.903729 containerd[1586]: time="2025-05-27T03:18:06.903671088Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:06.907949 containerd[1586]: time="2025-05-27T03:18:06.907839354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:06.909258 containerd[1586]: time="2025-05-27T03:18:06.909175066Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 6.577592556s" May 27 03:18:06.909258 containerd[1586]: time="2025-05-27T03:18:06.909222578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 03:18:06.910010 containerd[1586]: time="2025-05-27T03:18:06.909958649Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:18:07.504643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382716539.mount: Deactivated successfully. 
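The containerd "Pulled image ... in <duration>" messages above carry both the reported image size and the wall-clock pull time, so rough pull throughput can be read straight from the journal. A small parsing sketch, assuming only the message format shown in these lines (size in bytes, Go-style duration such as 7.977999417s or 618.108892ms):

```python
# Illustrative sketch: extract image name, size and pull duration from
# containerd "Pulled image" journal lines like those above and report rough
# throughput. The regex assumes the exact message format in this log.
import re

PULLED_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
    r'size \\?"(?P<size>\d+)\\?" in (?P<duration>[\d.]+)(?P<unit>ms|s)'
)

def pull_stats(line: str):
    """Return (image, size_bytes, seconds, bytes_per_second) or None."""
    m = PULLED_RE.search(line)
    if not m:
        return None
    seconds = float(m.group("duration"))
    if m.group("unit") == "ms":
        seconds /= 1000.0
    size = int(m.group("size"))
    return m.group("image"), size, seconds, size / seconds

if __name__ == "__main__":
    # Trimmed sample in the same format as the kube-proxy pull above.
    sample = ('Pulled image "registry.k8s.io/kube-proxy:v1.33.1" with image id '
              '"sha256:b79c..." size "31888094" in 7.977999417s')
    print(pull_stats(sample))
```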
May 27 03:18:07.519785 containerd[1586]: time="2025-05-27T03:18:07.519705234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:18:07.521322 containerd[1586]: time="2025-05-27T03:18:07.521273267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:18:07.523674 containerd[1586]: time="2025-05-27T03:18:07.523609913Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:18:07.527286 containerd[1586]: time="2025-05-27T03:18:07.527236770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:18:07.528249 containerd[1586]: time="2025-05-27T03:18:07.528119501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 618.108892ms" May 27 03:18:07.528249 containerd[1586]: time="2025-05-27T03:18:07.528163425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:18:07.528870 containerd[1586]: time="2025-05-27T03:18:07.528818660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 03:18:07.980878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 03:18:07.983036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:18:08.282675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:18:08.300536 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:18:08.757032 kubelet[2196]: E0527 03:18:08.756890 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:18:08.761950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:18:08.762204 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:18:08.762747 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109M memory peak. 
May 27 03:18:11.377758 containerd[1586]: time="2025-05-27T03:18:11.377645128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:11.379345 containerd[1586]: time="2025-05-27T03:18:11.379293209Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 03:18:11.381701 containerd[1586]: time="2025-05-27T03:18:11.381657356Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:11.386403 containerd[1586]: time="2025-05-27T03:18:11.386334271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:11.387553 containerd[1586]: time="2025-05-27T03:18:11.387498170Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.858594808s" May 27 03:18:11.387553 containerd[1586]: time="2025-05-27T03:18:11.387542886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 03:18:16.842296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:18:16.842472 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109M memory peak. May 27 03:18:16.844952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:18:16.871486 systemd[1]: Reload requested from client PID 2247 ('systemctl') (unit session-7.scope)... May 27 03:18:16.871507 systemd[1]: Reloading... May 27 03:18:17.011137 zram_generator::config[2292]: No configuration found. May 27 03:18:17.443612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:18:17.587185 systemd[1]: Reloading finished in 715 ms. May 27 03:18:17.681310 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:18:17.681560 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:18:17.682197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:18:17.682307 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.3M memory peak. May 27 03:18:17.685044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:18:17.937088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:18:17.955628 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:18:17.998936 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:18:17.999472 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 27 03:18:17.999472 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:18:17.999673 kubelet[2337]: I0527 03:18:17.999496 2337 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:18:18.818829 kubelet[2337]: I0527 03:18:18.818765 2337 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:18:18.818829 kubelet[2337]: I0527 03:18:18.818803 2337 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:18:18.819115 kubelet[2337]: I0527 03:18:18.819087 2337 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:18:18.876708 kubelet[2337]: E0527 03:18:18.876642 2337 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 03:18:18.880713 kubelet[2337]: I0527 03:18:18.880662 2337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:18:18.885253 kubelet[2337]: I0527 03:18:18.885199 2337 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:18:18.894847 kubelet[2337]: I0527 03:18:18.894779 2337 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:18:18.895211 kubelet[2337]: I0527 03:18:18.895166 2337 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:18:18.895432 kubelet[2337]: I0527 03:18:18.895222 2337 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:18:18.895432 kubelet[2337]: I0527 03:18:18.895431 2337 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:18:18.895586 kubelet[2337]: I0527 03:18:18.895452 2337 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:18:18.903102 kubelet[2337]: I0527 03:18:18.902918 2337 state_mem.go:36] "Initialized new in-memory state store" May 27 03:18:18.905993 kubelet[2337]: I0527 03:18:18.905943 2337 kubelet.go:480] "Attempting to sync node with API server" May 27 03:18:18.906046 kubelet[2337]: I0527 03:18:18.906003 2337 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:18:18.906046 kubelet[2337]: I0527 03:18:18.906042 2337 kubelet.go:386] "Adding apiserver pod source" May 27 03:18:18.910414 kubelet[2337]: I0527 03:18:18.910373 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:18:18.915902 kubelet[2337]: E0527 03:18:18.915863 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:18:18.918016 kubelet[2337]: E0527 03:18:18.917968 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:18:18.921401 
kubelet[2337]: I0527 03:18:18.921376 2337 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:18:18.921879 kubelet[2337]: I0527 03:18:18.921857 2337 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:18:18.922523 kubelet[2337]: W0527 03:18:18.922501 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 03:18:18.927236 kubelet[2337]: I0527 03:18:18.927214 2337 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:18:18.927288 kubelet[2337]: I0527 03:18:18.927270 2337 server.go:1289] "Started kubelet" May 27 03:18:18.929361 kubelet[2337]: I0527 03:18:18.929337 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:18:18.930285 kubelet[2337]: I0527 03:18:18.930187 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:18:18.930860 kubelet[2337]: I0527 03:18:18.930819 2337 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:18:18.930919 kubelet[2337]: I0527 03:18:18.930891 2337 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:18:18.932065 kubelet[2337]: I0527 03:18:18.932037 2337 server.go:317] "Adding debug handlers to kubelet server" May 27 03:18:18.937333 kubelet[2337]: I0527 03:18:18.936153 2337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:18:18.937333 kubelet[2337]: I0527 03:18:18.937076 2337 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:18:18.938493 kubelet[2337]: E0527 03:18:18.938299 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:18.938493 kubelet[2337]: I0527 03:18:18.938344 2337 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:18:18.938587 kubelet[2337]: I0527 03:18:18.938511 2337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:18:18.938587 kubelet[2337]: I0527 03:18:18.938574 2337 reconciler.go:26] "Reconciler: start to sync state" May 27 03:18:18.938916 kubelet[2337]: E0527 03:18:18.938885 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:18:18.939188 kubelet[2337]: E0527 03:18:18.939140 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" May 27 03:18:18.939497 kubelet[2337]: I0527 03:18:18.939466 2337 factory.go:223] Registration of the systemd container factory successfully May 27 03:18:18.939558 kubelet[2337]: I0527 03:18:18.939548 2337 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:18:18.940242 kubelet[2337]: E0527 03:18:18.937900 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843440c0eb9c31f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:18:18.927235871 +0000 UTC m=+0.965918855,LastTimestamp:2025-05-27 03:18:18.927235871 +0000 UTC m=+0.965918855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:18:18.941130 kubelet[2337]: E0527 03:18:18.941106 2337 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:18:18.942226 kubelet[2337]: I0527 03:18:18.942193 2337 factory.go:223] Registration of the containerd container factory successfully May 27 03:18:18.960846 kubelet[2337]: I0527 03:18:18.960793 2337 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:18:18.960846 kubelet[2337]: I0527 03:18:18.960833 2337 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:18:18.960846 kubelet[2337]: I0527 03:18:18.960858 2337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:18:18.961078 kubelet[2337]: I0527 03:18:18.960865 2337 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:18:18.961078 kubelet[2337]: E0527 03:18:18.960912 2337 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:18:18.961727 kubelet[2337]: E0527 03:18:18.961683 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:18:18.969454 kubelet[2337]: I0527 03:18:18.969421 2337 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:18:18.969454 kubelet[2337]: I0527 03:18:18.969450 2337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:18:18.969558 kubelet[2337]: I0527 03:18:18.969472 2337 state_mem.go:36] "Initialized new in-memory state store" May 27 03:18:19.039255 kubelet[2337]: E0527 03:18:19.039183 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.061473 kubelet[2337]: E0527 03:18:19.061405 2337 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:18:19.140041 kubelet[2337]: E0527 03:18:19.139813 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.140490 kubelet[2337]: E0527 03:18:19.140405 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" May 27 03:18:19.240318 kubelet[2337]: E0527 03:18:19.240242 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.262572 kubelet[2337]: E0527 03:18:19.262492 2337 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:18:19.313275 update_engine[1567]: I20250527 03:18:19.313135 1567 update_attempter.cc:509] Updating boot flags... 
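The container manager dump earlier in this kubelet start lists the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A toy sketch of the comparison those values imply; this is only the arithmetic, not kubelet's actual eviction manager:

```python
# Toy illustration of the hard-eviction comparison implied by the thresholds
# in the container manager config above; not kubelet's real implementation.

HARD_EVICTION = {
    "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def signal_under_threshold(signal: str, available: float, capacity: float) -> bool:
    """True if the observed value is below its hard eviction threshold."""
    kind, value = HARD_EVICTION[signal]
    threshold = value if kind == "quantity" else value * capacity
    return available < threshold

if __name__ == "__main__":
    # Hypothetical node: 8 GiB of RAM with 90 MiB free trips memory.available<100Mi.
    print(signal_under_threshold("memory.available", 90 * 1024**2, 8 * 1024**3))
```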
May 27 03:18:19.341111 kubelet[2337]: E0527 03:18:19.341049 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.442063 kubelet[2337]: E0527 03:18:19.442015 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.541178 kubelet[2337]: E0527 03:18:19.541112 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" May 27 03:18:19.543187 kubelet[2337]: E0527 03:18:19.543129 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.643999 kubelet[2337]: E0527 03:18:19.643922 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.663342 kubelet[2337]: E0527 03:18:19.663251 2337 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:18:19.745131 kubelet[2337]: E0527 03:18:19.744946 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.779263 kubelet[2337]: E0527 03:18:19.779193 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:18:19.845420 kubelet[2337]: E0527 03:18:19.845330 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.941852 kubelet[2337]: E0527 03:18:19.941787 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:18:19.946250 kubelet[2337]: E0527 03:18:19.946206 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:19.954933 kubelet[2337]: E0527 03:18:19.954871 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:18:20.047104 kubelet[2337]: E0527 03:18:20.046929 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:20.104860 kubelet[2337]: I0527 03:18:20.104806 2337 policy_none.go:49] "None policy: Start" May 27 03:18:20.105012 kubelet[2337]: I0527 03:18:20.104893 2337 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:18:20.105012 kubelet[2337]: I0527 03:18:20.104911 2337 state_mem.go:35] "Initializing new in-memory state store" May 27 03:18:20.147722 kubelet[2337]: E0527 03:18:20.147600 2337 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" May 27 03:18:20.252109 kubelet[2337]: E0527 03:18:20.248052 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:20.280664 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:18:20.320889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:18:20.325915 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:18:20.342894 kubelet[2337]: E0527 03:18:20.342836 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" May 27 03:18:20.348960 kubelet[2337]: E0527 03:18:20.348868 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:20.352473 kubelet[2337]: E0527 03:18:20.352360 2337 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:18:20.352693 kubelet[2337]: I0527 03:18:20.352664 2337 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:18:20.352922 kubelet[2337]: I0527 03:18:20.352697 2337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:18:20.352957 kubelet[2337]: I0527 03:18:20.352927 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:18:20.353795 kubelet[2337]: E0527 03:18:20.353767 2337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:18:20.353865 kubelet[2337]: E0527 03:18:20.353809 2337 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 03:18:20.455100 kubelet[2337]: I0527 03:18:20.455020 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:18:20.455628 kubelet[2337]: E0527 03:18:20.455577 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 27 03:18:20.507278 kubelet[2337]: E0527 03:18:20.507211 2337 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:18:20.550176 kubelet[2337]: I0527 03:18:20.550095 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7946659c8201cdf14c2e177403b99ae0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7946659c8201cdf14c2e177403b99ae0\") " pod="kube-system/kube-apiserver-localhost" May 27 03:18:20.550176 kubelet[2337]: I0527 03:18:20.550140 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7946659c8201cdf14c2e177403b99ae0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7946659c8201cdf14c2e177403b99ae0\") " pod="kube-system/kube-apiserver-localhost" May 27 03:18:20.550176 kubelet[2337]: I0527 03:18:20.550167 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7946659c8201cdf14c2e177403b99ae0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7946659c8201cdf14c2e177403b99ae0\") " pod="kube-system/kube-apiserver-localhost" May 27 03:18:20.657668 kubelet[2337]: I0527 03:18:20.657512 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:18:20.658095 kubelet[2337]: E0527 03:18:20.658055 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 27 03:18:20.982727 kubelet[2337]: E0527 03:18:20.982595 2337 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 03:18:21.017037 systemd[1]: Created slice kubepods-burstable-pod7946659c8201cdf14c2e177403b99ae0.slice - libcontainer container kubepods-burstable-pod7946659c8201cdf14c2e177403b99ae0.slice. 
May 27 03:18:21.034523 kubelet[2337]: E0527 03:18:21.034461 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:21.035315 kubelet[2337]: E0527 03:18:21.034843 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:21.035701 containerd[1586]: time="2025-05-27T03:18:21.035646080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7946659c8201cdf14c2e177403b99ae0,Namespace:kube-system,Attempt:0,}" May 27 03:18:21.041064 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. May 27 03:18:21.043898 kubelet[2337]: E0527 03:18:21.043856 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:21.054093 kubelet[2337]: I0527 03:18:21.054021 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:21.054093 kubelet[2337]: I0527 03:18:21.054082 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:21.054093 kubelet[2337]: I0527 03:18:21.054099 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:21.054601 kubelet[2337]: I0527 03:18:21.054115 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:21.054601 kubelet[2337]: I0527 03:18:21.054133 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 03:18:21.054601 kubelet[2337]: I0527 03:18:21.054147 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:21.060003 
kubelet[2337]: I0527 03:18:21.059951 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:18:21.060545 kubelet[2337]: E0527 03:18:21.060474 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 27 03:18:21.073612 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. May 27 03:18:21.076496 kubelet[2337]: E0527 03:18:21.076454 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:21.159597 containerd[1586]: time="2025-05-27T03:18:21.159485286Z" level=info msg="connecting to shim 3cd2d8c20719673727eb6cefd9954103badc7b4438d95da9f9b886811c062e9f" address="unix:///run/containerd/s/ddf608a77e8730107d56905c9f9a02b490f59c653d36e580aaf10b3a86d653e0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:21.195268 systemd[1]: Started cri-containerd-3cd2d8c20719673727eb6cefd9954103badc7b4438d95da9f9b886811c062e9f.scope - libcontainer container 3cd2d8c20719673727eb6cefd9954103badc7b4438d95da9f9b886811c062e9f. May 27 03:18:21.253528 containerd[1586]: time="2025-05-27T03:18:21.253367444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7946659c8201cdf14c2e177403b99ae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cd2d8c20719673727eb6cefd9954103badc7b4438d95da9f9b886811c062e9f\"" May 27 03:18:21.255142 kubelet[2337]: E0527 03:18:21.255055 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:21.265066 containerd[1586]: time="2025-05-27T03:18:21.264942170Z" level=info msg="CreateContainer within sandbox \"3cd2d8c20719673727eb6cefd9954103badc7b4438d95da9f9b886811c062e9f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:18:21.299309 containerd[1586]: time="2025-05-27T03:18:21.299233535Z" level=info msg="Container 631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:21.344593 kubelet[2337]: E0527 03:18:21.344528 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:21.345237 containerd[1586]: time="2025-05-27T03:18:21.345178686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 27 03:18:21.377396 kubelet[2337]: E0527 03:18:21.377311 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:21.378440 containerd[1586]: time="2025-05-27T03:18:21.378373177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 27 03:18:21.445185 kubelet[2337]: E0527 03:18:21.444938 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: 
connection refused" event="&Event{ObjectMeta:{localhost.1843440c0eb9c31f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:18:18.927235871 +0000 UTC m=+0.965918855,LastTimestamp:2025-05-27 03:18:18.927235871 +0000 UTC m=+0.965918855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:18:21.534769 containerd[1586]: time="2025-05-27T03:18:21.534620537Z" level=info msg="CreateContainer within sandbox \"3cd2d8c20719673727eb6cefd9954103badc7b4438d95da9f9b886811c062e9f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272\"" May 27 03:18:21.535619 containerd[1586]: time="2025-05-27T03:18:21.535563069Z" level=info msg="StartContainer for \"631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272\"" May 27 03:18:21.537136 containerd[1586]: time="2025-05-27T03:18:21.537094917Z" level=info msg="connecting to shim 631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272" address="unix:///run/containerd/s/ddf608a77e8730107d56905c9f9a02b490f59c653d36e580aaf10b3a86d653e0" protocol=ttrpc version=3 May 27 03:18:21.564253 systemd[1]: Started cri-containerd-631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272.scope - libcontainer container 631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272. May 27 03:18:21.579838 containerd[1586]: time="2025-05-27T03:18:21.579775702Z" level=info msg="connecting to shim 4590d2b7c2f2b337e634832c1dab34cc45d7e3d6f71941adb91180f22e8042ab" address="unix:///run/containerd/s/42269187bcbb0b794f472ced57a76f7559d6787383458c733a275e9bfd5ce777" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:21.609448 systemd[1]: Started cri-containerd-4590d2b7c2f2b337e634832c1dab34cc45d7e3d6f71941adb91180f22e8042ab.scope - libcontainer container 4590d2b7c2f2b337e634832c1dab34cc45d7e3d6f71941adb91180f22e8042ab. May 27 03:18:21.631036 containerd[1586]: time="2025-05-27T03:18:21.627561195Z" level=info msg="connecting to shim 7868413d8db7c28c9f44419d766e96a0c7adc1147c53a39aafd69b6cc1bfa540" address="unix:///run/containerd/s/ae51ad8a2248b2a971d50b3c000075e0c50db7e57bb39ba58a4affbf1c4064c9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:21.661463 containerd[1586]: time="2025-05-27T03:18:21.661402780Z" level=info msg="StartContainer for \"631a00b3afd7994b23339b73bbc8395cb7fdbbe9680e62cff7d10497b433a272\" returns successfully" May 27 03:18:21.689149 systemd[1]: Started cri-containerd-7868413d8db7c28c9f44419d766e96a0c7adc1147c53a39aafd69b6cc1bfa540.scope - libcontainer container 7868413d8db7c28c9f44419d766e96a0c7adc1147c53a39aafd69b6cc1bfa540. 
May 27 03:18:21.697227 containerd[1586]: time="2025-05-27T03:18:21.697138758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"4590d2b7c2f2b337e634832c1dab34cc45d7e3d6f71941adb91180f22e8042ab\"" May 27 03:18:21.698376 kubelet[2337]: E0527 03:18:21.698175 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:21.705654 containerd[1586]: time="2025-05-27T03:18:21.705590797Z" level=info msg="CreateContainer within sandbox \"4590d2b7c2f2b337e634832c1dab34cc45d7e3d6f71941adb91180f22e8042ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:18:21.725193 containerd[1586]: time="2025-05-27T03:18:21.725137681Z" level=info msg="Container fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:21.862108 containerd[1586]: time="2025-05-27T03:18:21.861383015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7868413d8db7c28c9f44419d766e96a0c7adc1147c53a39aafd69b6cc1bfa540\"" May 27 03:18:21.862517 kubelet[2337]: I0527 03:18:21.862465 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:18:21.862721 kubelet[2337]: E0527 03:18:21.862692 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:21.895662 containerd[1586]: time="2025-05-27T03:18:21.895599720Z" level=info msg="CreateContainer within sandbox \"4590d2b7c2f2b337e634832c1dab34cc45d7e3d6f71941adb91180f22e8042ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2\"" May 27 03:18:21.896378 containerd[1586]: time="2025-05-27T03:18:21.896326145Z" level=info msg="StartContainer for \"fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2\"" May 27 03:18:21.897816 containerd[1586]: time="2025-05-27T03:18:21.897757702Z" level=info msg="connecting to shim fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2" address="unix:///run/containerd/s/42269187bcbb0b794f472ced57a76f7559d6787383458c733a275e9bfd5ce777" protocol=ttrpc version=3 May 27 03:18:21.934376 systemd[1]: Started cri-containerd-fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2.scope - libcontainer container fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2. 
May 27 03:18:22.019578 containerd[1586]: time="2025-05-27T03:18:22.019526012Z" level=info msg="CreateContainer within sandbox \"7868413d8db7c28c9f44419d766e96a0c7adc1147c53a39aafd69b6cc1bfa540\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:18:22.024739 kubelet[2337]: E0527 03:18:22.024212 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:22.024739 kubelet[2337]: E0527 03:18:22.024520 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:22.038550 containerd[1586]: time="2025-05-27T03:18:22.038479816Z" level=info msg="StartContainer for \"fc5196e899a4215f87f1b428f2e3ac69ec72cecba75dd52b4ee6dd56d39bf7c2\" returns successfully" May 27 03:18:22.049281 containerd[1586]: time="2025-05-27T03:18:22.049219170Z" level=info msg="Container c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:22.060102 containerd[1586]: time="2025-05-27T03:18:22.059942192Z" level=info msg="CreateContainer within sandbox \"7868413d8db7c28c9f44419d766e96a0c7adc1147c53a39aafd69b6cc1bfa540\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac\"" May 27 03:18:22.060706 containerd[1586]: time="2025-05-27T03:18:22.060681451Z" level=info msg="StartContainer for \"c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac\"" May 27 03:18:22.063106 containerd[1586]: time="2025-05-27T03:18:22.063071549Z" level=info msg="connecting to shim c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac" address="unix:///run/containerd/s/ae51ad8a2248b2a971d50b3c000075e0c50db7e57bb39ba58a4affbf1c4064c9" protocol=ttrpc version=3 May 27 03:18:22.106339 systemd[1]: Started cri-containerd-c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac.scope - libcontainer container c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac. 
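The containerd lines above record the static-pod start sequence (RunPodSandbox, then CreateContainer within that sandbox, then StartContainer), each returning an id. A sketch that recovers the sandbox-to-container mapping from journal text, assuming only the "CreateContainer within sandbox ... returns container id ..." message format shown here:

```python
# Sketch: recover the sandbox -> container mapping from containerd
# "CreateContainer within sandbox ... returns container id ..." lines like
# those above. The regex assumes the message format shown in this journal.
import re
import sys

CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]+)\\?" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:(?P<attempt>\d+),\} '
    r'returns container id \\?"(?P<container>[0-9a-f]+)\\?"'
)

def containers_by_sandbox(journal_text: str) -> dict:
    """Map sandbox id -> (container name, container id) found in journal text."""
    mapping = {}
    for m in CREATE_RE.finditer(journal_text):
        mapping[m.group("sandbox")] = (m.group("name"), m.group("container"))
    return mapping

if __name__ == "__main__":
    for sandbox, (name, container) in containers_by_sandbox(sys.stdin.read()).items():
        print(f"{name}: sandbox {sandbox} container {container}")
```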
May 27 03:18:22.283763 containerd[1586]: time="2025-05-27T03:18:22.283719907Z" level=info msg="StartContainer for \"c1908718796f4bd6d0eab3a859aec3ff6d2cac31aa15a87c640ab908f439efac\" returns successfully" May 27 03:18:22.982456 kubelet[2337]: E0527 03:18:22.982121 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:22.982456 kubelet[2337]: E0527 03:18:22.982315 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:22.987542 kubelet[2337]: E0527 03:18:22.987521 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:22.987761 kubelet[2337]: E0527 03:18:22.987738 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:22.988539 kubelet[2337]: E0527 03:18:22.988518 2337 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:18:22.988764 kubelet[2337]: E0527 03:18:22.988744 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:23.180671 kubelet[2337]: E0527 03:18:23.180603 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 03:18:23.377139 kubelet[2337]: I0527 03:18:23.376550 2337 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:18:23.377139 kubelet[2337]: E0527 03:18:23.376607 2337 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 03:18:23.439404 kubelet[2337]: I0527 03:18:23.438961 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:18:23.668337 kubelet[2337]: E0527 03:18:23.668164 2337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:18:23.668337 kubelet[2337]: I0527 03:18:23.668207 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:23.670697 kubelet[2337]: E0527 03:18:23.670652 2337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:23.670697 kubelet[2337]: I0527 03:18:23.670695 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:18:23.672288 kubelet[2337]: E0527 03:18:23.672249 2337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:18:23.915050 kubelet[2337]: I0527 03:18:23.914945 2337 apiserver.go:52] "Watching apiserver" May 27 
03:18:23.938785 kubelet[2337]: I0527 03:18:23.938726 2337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:18:23.988008 kubelet[2337]: I0527 03:18:23.987939 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:23.988479 kubelet[2337]: I0527 03:18:23.988064 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:18:23.990439 kubelet[2337]: E0527 03:18:23.990387 2337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:18:23.990623 kubelet[2337]: E0527 03:18:23.990598 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:23.991486 kubelet[2337]: E0527 03:18:23.991434 2337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:23.991596 kubelet[2337]: E0527 03:18:23.991571 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:24.989572 kubelet[2337]: I0527 03:18:24.989509 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:18:25.038359 kubelet[2337]: E0527 03:18:25.038289 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:25.231473 kubelet[2337]: I0527 03:18:25.231403 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:25.240141 kubelet[2337]: E0527 03:18:25.239990 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:25.307956 kubelet[2337]: I0527 03:18:25.307893 2337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:18:25.316484 kubelet[2337]: E0527 03:18:25.316448 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:25.426788 systemd[1]: Reload requested from client PID 2637 ('systemctl') (unit session-7.scope)... May 27 03:18:25.426805 systemd[1]: Reloading... May 27 03:18:25.524034 zram_generator::config[2683]: No configuration found. May 27 03:18:25.854718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
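During the systemd reloads above, docker.socket is flagged because its ListenStream= still points at /var/run/docker.sock, a path below the legacy /var/run directory, and systemd rewrites it to /run/docker.sock at load time. A small scan that flags the same kind of reference in a unit file; the unit path is the one reported in the warning, the rest is illustrative:

```python
# Sketch of the condition behind systemd's legacy /var/run warning seen during
# the reloads above: a unit option referencing a path below /var/run/.
from pathlib import Path

def legacy_var_run_references(unit_path: str):
    """Yield (line_number, line) pairs in a unit file that reference /var/run/."""
    try:
        text = Path(unit_path).read_text()
    except FileNotFoundError:
        return
    for number, line in enumerate(text.splitlines(), start=1):
        if "/var/run/" in line:
            yield number, line.strip()

if __name__ == "__main__":
    unit = "/usr/lib/systemd/system/docker.socket"  # path reported in the log
    for number, line in legacy_var_run_references(unit):
        print(f"{unit}:{number}: {line} (consider /run/ instead)")
```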
May 27 03:18:25.991250 kubelet[2337]: E0527 03:18:25.991207 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:25.991728 kubelet[2337]: E0527 03:18:25.991294 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:25.991728 kubelet[2337]: E0527 03:18:25.991596 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:25.993345 systemd[1]: Reloading finished in 566 ms. May 27 03:18:26.026854 kubelet[2337]: I0527 03:18:26.026802 2337 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:18:26.026911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:18:26.053502 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:18:26.053811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:18:26.053867 systemd[1]: kubelet.service: Consumed 1.573s CPU time, 132.4M memory peak. May 27 03:18:26.056088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:18:26.298082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:18:26.313508 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:18:26.362959 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:18:26.362959 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:18:26.362959 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:18:26.363501 kubelet[2725]: I0527 03:18:26.363026 2725 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:18:26.374741 kubelet[2725]: I0527 03:18:26.374672 2725 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:18:26.374741 kubelet[2725]: I0527 03:18:26.374720 2725 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:18:26.375079 kubelet[2725]: I0527 03:18:26.375051 2725 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:18:26.376715 kubelet[2725]: I0527 03:18:26.376675 2725 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 03:18:26.379719 kubelet[2725]: I0527 03:18:26.379677 2725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:18:26.383868 kubelet[2725]: I0527 03:18:26.383841 2725 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:18:26.388766 kubelet[2725]: I0527 03:18:26.388737 2725 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:18:26.389015 kubelet[2725]: I0527 03:18:26.388955 2725 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:18:26.389148 kubelet[2725]: I0527 03:18:26.389001 2725 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:18:26.389235 kubelet[2725]: I0527 03:18:26.389153 2725 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:18:26.389235 kubelet[2725]: I0527 03:18:26.389163 2725 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:18:26.389235 kubelet[2725]: I0527 03:18:26.389214 2725 state_mem.go:36] "Initialized new in-memory state store" May 27 03:18:26.389394 kubelet[2725]: I0527 
03:18:26.389369 2725 kubelet.go:480] "Attempting to sync node with API server" May 27 03:18:26.389394 kubelet[2725]: I0527 03:18:26.389393 2725 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:18:26.389441 kubelet[2725]: I0527 03:18:26.389415 2725 kubelet.go:386] "Adding apiserver pod source" May 27 03:18:26.389441 kubelet[2725]: I0527 03:18:26.389431 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:18:26.391581 kubelet[2725]: I0527 03:18:26.391554 2725 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:18:26.392994 kubelet[2725]: I0527 03:18:26.392847 2725 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:18:26.396389 kubelet[2725]: I0527 03:18:26.396350 2725 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:18:26.396474 kubelet[2725]: I0527 03:18:26.396428 2725 server.go:1289] "Started kubelet" May 27 03:18:26.399189 kubelet[2725]: I0527 03:18:26.399113 2725 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:18:26.400226 kubelet[2725]: I0527 03:18:26.400187 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:18:26.400473 kubelet[2725]: I0527 03:18:26.400442 2725 server.go:317] "Adding debug handlers to kubelet server" May 27 03:18:26.403458 kubelet[2725]: I0527 03:18:26.403162 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:18:26.404787 kubelet[2725]: I0527 03:18:26.404728 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:18:26.405007 kubelet[2725]: I0527 03:18:26.404966 2725 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:18:26.408135 kubelet[2725]: E0527 03:18:26.408072 2725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:18:26.408135 kubelet[2725]: I0527 03:18:26.408118 2725 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:18:26.408406 kubelet[2725]: I0527 03:18:26.408357 2725 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:18:26.408563 kubelet[2725]: I0527 03:18:26.408539 2725 reconciler.go:26] "Reconciler: start to sync state" May 27 03:18:26.412617 kubelet[2725]: I0527 03:18:26.412571 2725 factory.go:223] Registration of the containerd container factory successfully May 27 03:18:26.412617 kubelet[2725]: I0527 03:18:26.412594 2725 factory.go:223] Registration of the systemd container factory successfully May 27 03:18:26.412794 kubelet[2725]: I0527 03:18:26.412689 2725 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:18:26.420158 kubelet[2725]: I0527 03:18:26.419943 2725 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:18:26.421717 kubelet[2725]: I0527 03:18:26.421697 2725 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:18:26.421792 kubelet[2725]: I0527 03:18:26.421782 2725 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:18:26.421902 kubelet[2725]: I0527 03:18:26.421883 2725 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:18:26.421951 kubelet[2725]: I0527 03:18:26.421942 2725 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:18:26.422102 kubelet[2725]: E0527 03:18:26.422056 2725 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:18:26.460076 kubelet[2725]: I0527 03:18:26.460007 2725 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:18:26.460076 kubelet[2725]: I0527 03:18:26.460030 2725 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:18:26.460076 kubelet[2725]: I0527 03:18:26.460052 2725 state_mem.go:36] "Initialized new in-memory state store" May 27 03:18:26.460446 kubelet[2725]: I0527 03:18:26.460185 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:18:26.460446 kubelet[2725]: I0527 03:18:26.460195 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:18:26.460446 kubelet[2725]: I0527 03:18:26.460212 2725 policy_none.go:49] "None policy: Start" May 27 03:18:26.460446 kubelet[2725]: I0527 03:18:26.460221 2725 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:18:26.460446 kubelet[2725]: I0527 03:18:26.460230 2725 state_mem.go:35] "Initializing new in-memory state store" May 27 03:18:26.460446 kubelet[2725]: I0527 03:18:26.460308 2725 state_mem.go:75] "Updated machine memory state" May 27 03:18:26.466904 kubelet[2725]: E0527 03:18:26.466872 2725 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:18:26.467354 kubelet[2725]: I0527 03:18:26.467323 2725 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:18:26.467420 kubelet[2725]: I0527 03:18:26.467344 2725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:18:26.467642 kubelet[2725]: I0527 03:18:26.467619 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:18:26.468774 kubelet[2725]: E0527 03:18:26.468730 2725 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:18:26.524033 kubelet[2725]: I0527 03:18:26.523699 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.524033 kubelet[2725]: I0527 03:18:26.523908 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:18:26.524033 kubelet[2725]: I0527 03:18:26.524040 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:18:26.535583 kubelet[2725]: E0527 03:18:26.535523 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.535757 kubelet[2725]: E0527 03:18:26.535708 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:18:26.535913 kubelet[2725]: E0527 03:18:26.535831 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:18:26.547893 sudo[2765]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 03:18:26.548343 sudo[2765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 03:18:26.572471 kubelet[2725]: I0527 03:18:26.572369 2725 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:18:26.592904 kubelet[2725]: I0527 03:18:26.592856 2725 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 03:18:26.593080 kubelet[2725]: I0527 03:18:26.592963 2725 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:18:26.608917 kubelet[2725]: I0527 03:18:26.608860 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 03:18:26.608917 kubelet[2725]: I0527 03:18:26.608907 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7946659c8201cdf14c2e177403b99ae0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7946659c8201cdf14c2e177403b99ae0\") " pod="kube-system/kube-apiserver-localhost" May 27 03:18:26.609130 kubelet[2725]: I0527 03:18:26.608935 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7946659c8201cdf14c2e177403b99ae0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7946659c8201cdf14c2e177403b99ae0\") " pod="kube-system/kube-apiserver-localhost" May 27 03:18:26.609130 kubelet[2725]: I0527 03:18:26.608957 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.609130 kubelet[2725]: I0527 03:18:26.608999 2725 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.609130 kubelet[2725]: I0527 03:18:26.609020 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.609130 kubelet[2725]: I0527 03:18:26.609037 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7946659c8201cdf14c2e177403b99ae0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7946659c8201cdf14c2e177403b99ae0\") " pod="kube-system/kube-apiserver-localhost" May 27 03:18:26.609277 kubelet[2725]: I0527 03:18:26.609055 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.609277 kubelet[2725]: I0527 03:18:26.609074 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:18:26.836196 kubelet[2725]: E0527 03:18:26.836053 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:26.836328 kubelet[2725]: E0527 03:18:26.836231 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:26.836720 kubelet[2725]: E0527 03:18:26.836695 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:27.054881 sudo[2765]: pam_unix(sudo:session): session closed for user root May 27 03:18:27.390674 kubelet[2725]: I0527 03:18:27.390614 2725 apiserver.go:52] "Watching apiserver" May 27 03:18:27.441995 kubelet[2725]: I0527 03:18:27.441958 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:18:27.442461 kubelet[2725]: I0527 03:18:27.442446 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:18:27.730878 kubelet[2725]: E0527 03:18:27.730473 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:18:27.730878 kubelet[2725]: E0527 03:18:27.730583 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" May 27 03:18:27.730878 kubelet[2725]: E0527 03:18:27.730688 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:27.730878 kubelet[2725]: E0527 03:18:27.730809 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:28.200101 kubelet[2725]: E0527 03:18:28.200053 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:28.209380 kubelet[2725]: I0527 03:18:28.209316 2725 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:18:28.273923 kubelet[2725]: I0527 03:18:28.273715 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.2736980239999998 podStartE2EDuration="3.273698024s" podCreationTimestamp="2025-05-27 03:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:18:28.273313689 +0000 UTC m=+1.955289043" watchObservedRunningTime="2025-05-27 03:18:28.273698024 +0000 UTC m=+1.955673378" May 27 03:18:28.343513 kubelet[2725]: I0527 03:18:28.343434 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.34338849 podStartE2EDuration="4.34338849s" podCreationTimestamp="2025-05-27 03:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:18:28.287420301 +0000 UTC m=+1.969395675" watchObservedRunningTime="2025-05-27 03:18:28.34338849 +0000 UTC m=+2.025363844" May 27 03:18:28.443556 kubelet[2725]: I0527 03:18:28.443514 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:18:28.444861 kubelet[2725]: I0527 03:18:28.443589 2725 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:18:28.571818 kubelet[2725]: E0527 03:18:28.571655 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:18:28.571818 kubelet[2725]: E0527 03:18:28.571823 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:28.574003 kubelet[2725]: E0527 03:18:28.573952 2725 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:18:28.574725 kubelet[2725]: E0527 03:18:28.574708 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:29.366619 sudo[1795]: pam_unix(sudo:session): session closed for user root May 27 03:18:29.368461 sshd[1794]: Connection closed by 10.0.0.1 port 44736 May 27 03:18:29.373141 sshd-session[1792]: pam_unix(sshd:session): session closed for user core May 27 03:18:29.377819 
systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:44736.service: Deactivated successfully. May 27 03:18:29.380240 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:18:29.380476 systemd[1]: session-7.scope: Consumed 8.444s CPU time, 260.7M memory peak. May 27 03:18:29.381812 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. May 27 03:18:29.383171 systemd-logind[1564]: Removed session 7. May 27 03:18:29.445453 kubelet[2725]: E0527 03:18:29.445397 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:29.445453 kubelet[2725]: E0527 03:18:29.445442 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:30.447530 kubelet[2725]: E0527 03:18:30.447468 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:30.644885 kubelet[2725]: E0527 03:18:30.644818 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:32.989020 kubelet[2725]: I0527 03:18:32.988934 2725 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:18:32.989545 containerd[1586]: time="2025-05-27T03:18:32.989377317Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:18:32.989826 kubelet[2725]: I0527 03:18:32.989617 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:18:33.724110 kubelet[2725]: E0527 03:18:33.724070 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.002379 kubelet[2725]: I0527 03:18:34.002146 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.002128021 podStartE2EDuration="9.002128021s" podCreationTimestamp="2025-05-27 03:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:18:28.343628602 +0000 UTC m=+2.025603956" watchObservedRunningTime="2025-05-27 03:18:34.002128021 +0000 UTC m=+7.684103375" May 27 03:18:34.092457 systemd[1]: Created slice kubepods-besteffort-pod47fe2a85_3bb2_46c4_bd1a_f5f977fde580.slice - libcontainer container kubepods-besteffort-pod47fe2a85_3bb2_46c4_bd1a_f5f977fde580.slice. May 27 03:18:34.110717 systemd[1]: Created slice kubepods-burstable-pode464bfcb_84d6_4586_811f_f5524741755f.slice - libcontainer container kubepods-burstable-pode464bfcb_84d6_4586_811f_f5524741755f.slice. 
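The kubepods-besteffort-pod47fe2a85_... and kubepods-burstable-pode464bfcb_... slices created just above show how the systemd cgroup driver (cgroupDriver="systemd" earlier in this log) appears to derive slice names from a pod's QoS class and UID, with the UID's dashes replaced by underscores. The following small Go sketch reproduces that mapping as an illustration; it is not the kubelet's actual helper, and the guaranteed-QoS case is an assumption based on the usual layout.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName sketches how a pod UID and QoS class appear to map onto the
// systemd slice names seen in this log: dashes in the UID become
// underscores, and non-guaranteed pods nest under a per-QoS parent
// (kubepods-besteffort, kubepods-burstable).
func sliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		// Assumed: guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// UIDs taken from the kube-proxy and cilium pods in the entries above.
	fmt.Println(sliceName("besteffort", "47fe2a85-3bb2-46c4-bd1a-f5f977fde580"))
	fmt.Println(sliceName("burstable", "e464bfcb-84d6-4586-811f-f5524741755f"))
}
```

Both printed names match the slices systemd reports creating in this log.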
May 27 03:18:34.155374 kubelet[2725]: I0527 03:18:34.155296 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47fe2a85-3bb2-46c4-bd1a-f5f977fde580-lib-modules\") pod \"kube-proxy-mgk9d\" (UID: \"47fe2a85-3bb2-46c4-bd1a-f5f977fde580\") " pod="kube-system/kube-proxy-mgk9d" May 27 03:18:34.155374 kubelet[2725]: I0527 03:18:34.155349 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-bpf-maps\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155374 kubelet[2725]: I0527 03:18:34.155370 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-cgroup\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155647 kubelet[2725]: I0527 03:18:34.155395 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cni-path\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155647 kubelet[2725]: I0527 03:18:34.155417 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e464bfcb-84d6-4586-811f-f5524741755f-clustermesh-secrets\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155647 kubelet[2725]: I0527 03:18:34.155436 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-etc-cni-netd\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155647 kubelet[2725]: I0527 03:18:34.155451 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e464bfcb-84d6-4586-811f-f5524741755f-cilium-config-path\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155647 kubelet[2725]: I0527 03:18:34.155465 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-net\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155647 kubelet[2725]: I0527 03:18:34.155549 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-hubble-tls\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155867 kubelet[2725]: I0527 03:18:34.155609 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqg87\" 
(UniqueName: \"kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-kube-api-access-lqg87\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155867 kubelet[2725]: I0527 03:18:34.155632 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47fe2a85-3bb2-46c4-bd1a-f5f977fde580-kube-proxy\") pod \"kube-proxy-mgk9d\" (UID: \"47fe2a85-3bb2-46c4-bd1a-f5f977fde580\") " pod="kube-system/kube-proxy-mgk9d" May 27 03:18:34.155867 kubelet[2725]: I0527 03:18:34.155645 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47fe2a85-3bb2-46c4-bd1a-f5f977fde580-xtables-lock\") pod \"kube-proxy-mgk9d\" (UID: \"47fe2a85-3bb2-46c4-bd1a-f5f977fde580\") " pod="kube-system/kube-proxy-mgk9d" May 27 03:18:34.155867 kubelet[2725]: I0527 03:18:34.155662 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-run\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.155867 kubelet[2725]: I0527 03:18:34.155685 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-xtables-lock\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.156108 kubelet[2725]: I0527 03:18:34.155714 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-kernel\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.156108 kubelet[2725]: I0527 03:18:34.155728 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sswb\" (UniqueName: \"kubernetes.io/projected/47fe2a85-3bb2-46c4-bd1a-f5f977fde580-kube-api-access-4sswb\") pod \"kube-proxy-mgk9d\" (UID: \"47fe2a85-3bb2-46c4-bd1a-f5f977fde580\") " pod="kube-system/kube-proxy-mgk9d" May 27 03:18:34.156108 kubelet[2725]: I0527 03:18:34.155754 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-hostproc\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.156108 kubelet[2725]: I0527 03:18:34.155775 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-lib-modules\") pod \"cilium-2ptxk\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " pod="kube-system/cilium-2ptxk" May 27 03:18:34.491367 kubelet[2725]: E0527 03:18:34.491097 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.498693 systemd[1]: Created slice 
kubepods-besteffort-pod422199c7_1e48_4b96_9f11_fabed8cd678b.slice - libcontainer container kubepods-besteffort-pod422199c7_1e48_4b96_9f11_fabed8cd678b.slice. May 27 03:18:34.558739 kubelet[2725]: I0527 03:18:34.558660 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/422199c7-1e48-4b96-9f11-fabed8cd678b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jfkd7\" (UID: \"422199c7-1e48-4b96-9f11-fabed8cd678b\") " pod="kube-system/cilium-operator-6c4d7847fc-jfkd7" May 27 03:18:34.558739 kubelet[2725]: I0527 03:18:34.558720 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vm2\" (UniqueName: \"kubernetes.io/projected/422199c7-1e48-4b96-9f11-fabed8cd678b-kube-api-access-k4vm2\") pod \"cilium-operator-6c4d7847fc-jfkd7\" (UID: \"422199c7-1e48-4b96-9f11-fabed8cd678b\") " pod="kube-system/cilium-operator-6c4d7847fc-jfkd7" May 27 03:18:34.705332 kubelet[2725]: E0527 03:18:34.705237 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.706726 containerd[1586]: time="2025-05-27T03:18:34.706648930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgk9d,Uid:47fe2a85-3bb2-46c4-bd1a-f5f977fde580,Namespace:kube-system,Attempt:0,}" May 27 03:18:34.716041 kubelet[2725]: E0527 03:18:34.715965 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.716748 containerd[1586]: time="2025-05-27T03:18:34.716599010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2ptxk,Uid:e464bfcb-84d6-4586-811f-f5524741755f,Namespace:kube-system,Attempt:0,}" May 27 03:18:34.769404 containerd[1586]: time="2025-05-27T03:18:34.769238998Z" level=info msg="connecting to shim 171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1" address="unix:///run/containerd/s/5236abd6f427c82a34a88371697ba220091784927f15dc8287135644eed38e34" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:34.770430 containerd[1586]: time="2025-05-27T03:18:34.770373784Z" level=info msg="connecting to shim e6d78b07717f5ee4a6b5ad66ce47438b4da3beaf21084fb8ad6b4124965e49fc" address="unix:///run/containerd/s/93f2246e22f23d7c2583f308c80abb42d918166e26ba94994abf347b59bbf17e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:34.802426 kubelet[2725]: E0527 03:18:34.802126 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.803080 containerd[1586]: time="2025-05-27T03:18:34.802804711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jfkd7,Uid:422199c7-1e48-4b96-9f11-fabed8cd678b,Namespace:kube-system,Attempt:0,}" May 27 03:18:34.828279 systemd[1]: Started cri-containerd-171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1.scope - libcontainer container 171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1. 
May 27 03:18:34.831238 containerd[1586]: time="2025-05-27T03:18:34.831114101Z" level=info msg="connecting to shim e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96" address="unix:///run/containerd/s/ea1a577e56eee3951daf4c4d22b4e3f5bf59474d2fd68bfaae87e11c8b2f138b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:34.831293 systemd[1]: Started cri-containerd-e6d78b07717f5ee4a6b5ad66ce47438b4da3beaf21084fb8ad6b4124965e49fc.scope - libcontainer container e6d78b07717f5ee4a6b5ad66ce47438b4da3beaf21084fb8ad6b4124965e49fc. May 27 03:18:34.870398 systemd[1]: Started cri-containerd-e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96.scope - libcontainer container e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96. May 27 03:18:34.880213 containerd[1586]: time="2025-05-27T03:18:34.880158621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2ptxk,Uid:e464bfcb-84d6-4586-811f-f5524741755f,Namespace:kube-system,Attempt:0,} returns sandbox id \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\"" May 27 03:18:34.880993 kubelet[2725]: E0527 03:18:34.880948 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.883375 containerd[1586]: time="2025-05-27T03:18:34.883327817Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 03:18:34.885574 containerd[1586]: time="2025-05-27T03:18:34.885521376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgk9d,Uid:47fe2a85-3bb2-46c4-bd1a-f5f977fde580,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6d78b07717f5ee4a6b5ad66ce47438b4da3beaf21084fb8ad6b4124965e49fc\"" May 27 03:18:34.886490 kubelet[2725]: E0527 03:18:34.886444 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:34.926725 containerd[1586]: time="2025-05-27T03:18:34.926671807Z" level=info msg="CreateContainer within sandbox \"e6d78b07717f5ee4a6b5ad66ce47438b4da3beaf21084fb8ad6b4124965e49fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:18:34.945099 containerd[1586]: time="2025-05-27T03:18:34.945031798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jfkd7,Uid:422199c7-1e48-4b96-9f11-fabed8cd678b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\"" May 27 03:18:34.945825 kubelet[2725]: E0527 03:18:34.945781 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:35.045948 containerd[1586]: time="2025-05-27T03:18:35.045840620Z" level=info msg="Container e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:35.138487 containerd[1586]: time="2025-05-27T03:18:35.138411594Z" level=info msg="CreateContainer within sandbox \"e6d78b07717f5ee4a6b5ad66ce47438b4da3beaf21084fb8ad6b4124965e49fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6\"" May 27 03:18:35.142024 containerd[1586]: time="2025-05-27T03:18:35.139381098Z" level=info 
msg="StartContainer for \"e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6\"" May 27 03:18:35.145729 containerd[1586]: time="2025-05-27T03:18:35.145678288Z" level=info msg="connecting to shim e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6" address="unix:///run/containerd/s/93f2246e22f23d7c2583f308c80abb42d918166e26ba94994abf347b59bbf17e" protocol=ttrpc version=3 May 27 03:18:35.184291 systemd[1]: Started cri-containerd-e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6.scope - libcontainer container e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6. May 27 03:18:35.249010 containerd[1586]: time="2025-05-27T03:18:35.248925955Z" level=info msg="StartContainer for \"e4ecec4f28a3dda6d486068bf999950fa6ab009446759bfce719a024a55f10f6\" returns successfully" May 27 03:18:35.461540 kubelet[2725]: E0527 03:18:35.461497 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:35.474429 kubelet[2725]: I0527 03:18:35.474318 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mgk9d" podStartSLOduration=2.474298032 podStartE2EDuration="2.474298032s" podCreationTimestamp="2025-05-27 03:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:18:35.474029326 +0000 UTC m=+9.156004711" watchObservedRunningTime="2025-05-27 03:18:35.474298032 +0000 UTC m=+9.156273386" May 27 03:18:38.958505 kubelet[2725]: E0527 03:18:38.958102 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:39.467914 kubelet[2725]: E0527 03:18:39.467807 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:40.651258 kubelet[2725]: E0527 03:18:40.651157 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:42.074099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758379174.mount: Deactivated successfully. 
May 27 03:18:44.983590 containerd[1586]: time="2025-05-27T03:18:44.983493469Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:44.985568 containerd[1586]: time="2025-05-27T03:18:44.985525969Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 03:18:44.987110 containerd[1586]: time="2025-05-27T03:18:44.987066624Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:44.989034 containerd[1586]: time="2025-05-27T03:18:44.988960172Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.105593232s" May 27 03:18:44.989034 containerd[1586]: time="2025-05-27T03:18:44.989031245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 03:18:44.996201 containerd[1586]: time="2025-05-27T03:18:44.996074309Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 03:18:45.001536 containerd[1586]: time="2025-05-27T03:18:45.001472773Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:18:45.015556 containerd[1586]: time="2025-05-27T03:18:45.015471133Z" level=info msg="Container 7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:45.019944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810399353.mount: Deactivated successfully. May 27 03:18:45.031106 containerd[1586]: time="2025-05-27T03:18:45.031030467Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\"" May 27 03:18:45.032008 containerd[1586]: time="2025-05-27T03:18:45.031507814Z" level=info msg="StartContainer for \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\"" May 27 03:18:45.032813 containerd[1586]: time="2025-05-27T03:18:45.032785394Z" level=info msg="connecting to shim 7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657" address="unix:///run/containerd/s/5236abd6f427c82a34a88371697ba220091784927f15dc8287135644eed38e34" protocol=ttrpc version=3 May 27 03:18:45.062531 systemd[1]: Started cri-containerd-7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657.scope - libcontainer container 7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657. 
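The pull record above reports both the bytes read for the cilium image (166,730,503) and the elapsed time (10.105593232s), which is enough for a rough average transfer rate. The short Go calculation below uses only those two logged numbers; it ignores decompression and registry round trips, so it is a back-of-the-envelope figure rather than a measured bandwidth.

```go
package main

import "fmt"

func main() {
	// Values taken from the cilium image pull entries above.
	const bytesRead = 166730503      // "bytes read" reported by containerd
	const pullSeconds = 10.105593232 // pull duration reported by containerd

	rate := float64(bytesRead) / pullSeconds
	// Prints roughly 16.5 MB/s (about 15.7 MiB/s) for this pull.
	fmt.Printf("average pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1<<20))
}
```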
May 27 03:18:45.102208 containerd[1586]: time="2025-05-27T03:18:45.102147191Z" level=info msg="StartContainer for \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" returns successfully" May 27 03:18:45.114244 systemd[1]: cri-containerd-7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657.scope: Deactivated successfully. May 27 03:18:45.115086 systemd[1]: cri-containerd-7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657.scope: Consumed 31ms CPU time, 6.9M memory peak, 4K read from disk, 2.6M written to disk. May 27 03:18:45.115625 containerd[1586]: time="2025-05-27T03:18:45.115580991Z" level=info msg="received exit event container_id:\"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" id:\"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" pid:3159 exited_at:{seconds:1748315925 nanos:115125756}" May 27 03:18:45.115687 containerd[1586]: time="2025-05-27T03:18:45.115639982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" id:\"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" pid:3159 exited_at:{seconds:1748315925 nanos:115125756}" May 27 03:18:45.140441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657-rootfs.mount: Deactivated successfully. May 27 03:18:45.483701 kubelet[2725]: E0527 03:18:45.483642 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:46.485462 kubelet[2725]: E0527 03:18:46.485402 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:47.488897 kubelet[2725]: E0527 03:18:47.488851 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:47.687950 containerd[1586]: time="2025-05-27T03:18:47.687884437Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:18:48.008553 containerd[1586]: time="2025-05-27T03:18:48.008128960Z" level=info msg="Container 075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:48.031077 containerd[1586]: time="2025-05-27T03:18:48.031007109Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\"" May 27 03:18:48.031940 containerd[1586]: time="2025-05-27T03:18:48.031730920Z" level=info msg="StartContainer for \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\"" May 27 03:18:48.035010 containerd[1586]: time="2025-05-27T03:18:48.034229112Z" level=info msg="connecting to shim 075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14" address="unix:///run/containerd/s/5236abd6f427c82a34a88371697ba220091784927f15dc8287135644eed38e34" protocol=ttrpc version=3 May 27 03:18:48.062310 systemd[1]: Started 
cri-containerd-075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14.scope - libcontainer container 075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14. May 27 03:18:48.100400 containerd[1586]: time="2025-05-27T03:18:48.100336765Z" level=info msg="StartContainer for \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" returns successfully" May 27 03:18:48.117907 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:18:48.118854 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:18:48.119917 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 03:18:48.123007 containerd[1586]: time="2025-05-27T03:18:48.122853275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" id:\"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" pid:3206 exited_at:{seconds:1748315928 nanos:122024218}" May 27 03:18:48.123007 containerd[1586]: time="2025-05-27T03:18:48.122677165Z" level=info msg="received exit event container_id:\"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" id:\"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" pid:3206 exited_at:{seconds:1748315928 nanos:122024218}" May 27 03:18:48.123775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:18:48.126574 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:18:48.127527 systemd[1]: cri-containerd-075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14.scope: Deactivated successfully. May 27 03:18:48.148023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14-rootfs.mount: Deactivated successfully. May 27 03:18:48.164275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
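The TaskExit events report exited_at as a Unix seconds/nanos pair, and converting that pair back to wall-clock time lines up with the surrounding journal timestamps (1748315925.115125756 corresponds to 03:18:45.115 UTC on May 27 2025). The brief Go check below uses the values quoted in the two exit events above.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the two TaskExit events above.
	exits := []struct {
		container string
		sec, nsec int64
	}{
		{"mount-cgroup (7708aa24...)", 1748315925, 115125756},
		{"apply-sysctl-overwrites (075c6456...)", 1748315928, 122024218},
	}

	for _, e := range exits {
		t := time.Unix(e.sec, e.nsec).UTC()
		// Expected output: 2025-05-27T03:18:45.115125756Z and
		// 2025-05-27T03:18:48.122024218Z, matching the journal lines
		// around the corresponding exit events.
		fmt.Printf("%s exited at %s\n", e.container, t.Format(time.RFC3339Nano))
	}
}
```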
May 27 03:18:48.494071 kubelet[2725]: E0527 03:18:48.494026 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:48.510685 containerd[1586]: time="2025-05-27T03:18:48.510614891Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 03:18:48.544355 containerd[1586]: time="2025-05-27T03:18:48.544274260Z" level=info msg="Container 2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:48.563357 containerd[1586]: time="2025-05-27T03:18:48.563206188Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\"" May 27 03:18:48.564802 containerd[1586]: time="2025-05-27T03:18:48.564444654Z" level=info msg="StartContainer for \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\"" May 27 03:18:48.566464 containerd[1586]: time="2025-05-27T03:18:48.566409234Z" level=info msg="connecting to shim 2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3" address="unix:///run/containerd/s/5236abd6f427c82a34a88371697ba220091784927f15dc8287135644eed38e34" protocol=ttrpc version=3 May 27 03:18:48.615405 systemd[1]: Started cri-containerd-2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3.scope - libcontainer container 2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3. May 27 03:18:48.669103 systemd[1]: cri-containerd-2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3.scope: Deactivated successfully. 
May 27 03:18:48.671710 containerd[1586]: time="2025-05-27T03:18:48.671664805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" id:\"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" pid:3256 exited_at:{seconds:1748315928 nanos:671204641}" May 27 03:18:48.682515 containerd[1586]: time="2025-05-27T03:18:48.682429022Z" level=info msg="received exit event container_id:\"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" id:\"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" pid:3256 exited_at:{seconds:1748315928 nanos:671204641}" May 27 03:18:48.694539 containerd[1586]: time="2025-05-27T03:18:48.694490527Z" level=info msg="StartContainer for \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" returns successfully" May 27 03:18:49.266359 containerd[1586]: time="2025-05-27T03:18:49.266278854Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:49.267166 containerd[1586]: time="2025-05-27T03:18:49.267095728Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 27 03:18:49.268591 containerd[1586]: time="2025-05-27T03:18:49.268521897Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:18:49.269948 containerd[1586]: time="2025-05-27T03:18:49.269887081Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.273755895s" May 27 03:18:49.269948 containerd[1586]: time="2025-05-27T03:18:49.269930613Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 03:18:49.276547 containerd[1586]: time="2025-05-27T03:18:49.276483278Z" level=info msg="CreateContainer within sandbox \"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 03:18:49.285849 containerd[1586]: time="2025-05-27T03:18:49.285791740Z" level=info msg="Container 09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:49.297830 containerd[1586]: time="2025-05-27T03:18:49.297752042Z" level=info msg="CreateContainer within sandbox \"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\"" May 27 03:18:49.298498 containerd[1586]: time="2025-05-27T03:18:49.298459841Z" level=info msg="StartContainer for \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\"" May 27 03:18:49.299570 containerd[1586]: 
time="2025-05-27T03:18:49.299535581Z" level=info msg="connecting to shim 09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f" address="unix:///run/containerd/s/ea1a577e56eee3951daf4c4d22b4e3f5bf59474d2fd68bfaae87e11c8b2f138b" protocol=ttrpc version=3 May 27 03:18:49.331300 systemd[1]: Started cri-containerd-09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f.scope - libcontainer container 09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f. May 27 03:18:49.369148 containerd[1586]: time="2025-05-27T03:18:49.369093002Z" level=info msg="StartContainer for \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" returns successfully" May 27 03:18:49.501795 kubelet[2725]: E0527 03:18:49.501747 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:49.505444 kubelet[2725]: E0527 03:18:49.505405 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:49.510224 containerd[1586]: time="2025-05-27T03:18:49.510174657Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:18:49.532923 containerd[1586]: time="2025-05-27T03:18:49.531085708Z" level=info msg="Container 6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:49.540627 containerd[1586]: time="2025-05-27T03:18:49.540560602Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\"" May 27 03:18:49.542292 containerd[1586]: time="2025-05-27T03:18:49.542238183Z" level=info msg="StartContainer for \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\"" May 27 03:18:49.553428 containerd[1586]: time="2025-05-27T03:18:49.550376127Z" level=info msg="connecting to shim 6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146" address="unix:///run/containerd/s/5236abd6f427c82a34a88371697ba220091784927f15dc8287135644eed38e34" protocol=ttrpc version=3 May 27 03:18:49.579256 systemd[1]: Started cri-containerd-6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146.scope - libcontainer container 6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146. May 27 03:18:49.618139 systemd[1]: cri-containerd-6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146.scope: Deactivated successfully. 
May 27 03:18:49.619282 containerd[1586]: time="2025-05-27T03:18:49.619208566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" id:\"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" pid:3345 exited_at:{seconds:1748315929 nanos:618541743}" May 27 03:18:49.620897 containerd[1586]: time="2025-05-27T03:18:49.620829009Z" level=info msg="received exit event container_id:\"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" id:\"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" pid:3345 exited_at:{seconds:1748315929 nanos:618541743}" May 27 03:18:49.633372 containerd[1586]: time="2025-05-27T03:18:49.633319215Z" level=info msg="StartContainer for \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" returns successfully" May 27 03:18:50.551171 kubelet[2725]: E0527 03:18:50.550992 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:50.551171 kubelet[2725]: E0527 03:18:50.551074 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:50.634033 containerd[1586]: time="2025-05-27T03:18:50.633956893Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:18:50.660926 kubelet[2725]: I0527 03:18:50.660853 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jfkd7" podStartSLOduration=2.336245619 podStartE2EDuration="16.660834856s" podCreationTimestamp="2025-05-27 03:18:34 +0000 UTC" firstStartedPulling="2025-05-27 03:18:34.946493199 +0000 UTC m=+8.628468553" lastFinishedPulling="2025-05-27 03:18:49.271082436 +0000 UTC m=+22.953057790" observedRunningTime="2025-05-27 03:18:49.546890129 +0000 UTC m=+23.228865483" watchObservedRunningTime="2025-05-27 03:18:50.660834856 +0000 UTC m=+24.342810210" May 27 03:18:51.057690 containerd[1586]: time="2025-05-27T03:18:51.057004387Z" level=info msg="Container 4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3: CDI devices from CRI Config.CDIDevices: []" May 27 03:18:51.060884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008125912.mount: Deactivated successfully. 
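The startup-latency entry for cilium-operator-6c4d7847fc-jfkd7 above ties the two reported durations together: podStartE2EDuration equals the observed running time minus the pod creation time, and podStartSLOduration appears to subtract the image pull window (lastFinishedPulling minus firstStartedPulling) from that. The Go snippet below recomputes both from the timestamps printed in that entry, as a worked check on the arithmetic rather than a statement of kubelet's exact bookkeeping.

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	// Layout matching the timestamps printed by the latency tracker above.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cilium-operator startup-duration entry.
	created := mustParse("2025-05-27 03:18:34 +0000 UTC")
	firstPull := mustParse("2025-05-27 03:18:34.946493199 +0000 UTC")
	lastPull := mustParse("2025-05-27 03:18:49.271082436 +0000 UTC")
	running := mustParse("2025-05-27 03:18:50.660834856 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)

	// Expect 16.660834856s and 2.336245619s, matching the logged values.
	fmt.Printf("podStartE2EDuration ~ %s\n", e2e)
	fmt.Printf("podStartSLOduration ~ %s\n", slo)
}
```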
May 27 03:18:51.289299 containerd[1586]: time="2025-05-27T03:18:51.289228185Z" level=info msg="CreateContainer within sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\"" May 27 03:18:51.289827 containerd[1586]: time="2025-05-27T03:18:51.289792714Z" level=info msg="StartContainer for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\"" May 27 03:18:51.290842 containerd[1586]: time="2025-05-27T03:18:51.290765381Z" level=info msg="connecting to shim 4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3" address="unix:///run/containerd/s/5236abd6f427c82a34a88371697ba220091784927f15dc8287135644eed38e34" protocol=ttrpc version=3 May 27 03:18:51.317189 systemd[1]: Started cri-containerd-4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3.scope - libcontainer container 4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3. May 27 03:18:51.609034 containerd[1586]: time="2025-05-27T03:18:51.608852594Z" level=info msg="StartContainer for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" returns successfully" May 27 03:18:51.723735 containerd[1586]: time="2025-05-27T03:18:51.723686238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" id:\"bd0df5a787da67dbcc93e6cedbc965dbf07b8c24a93f9db3e5790c63fbae6dcf\" pid:3426 exited_at:{seconds:1748315931 nanos:723377217}" May 27 03:18:51.822203 kubelet[2725]: I0527 03:18:51.822168 2725 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:18:52.390334 systemd[1]: Created slice kubepods-burstable-podd5471488_5260_476c_bc70_a91cfcf5748a.slice - libcontainer container kubepods-burstable-podd5471488_5260_476c_bc70_a91cfcf5748a.slice. May 27 03:18:52.454512 systemd[1]: Created slice kubepods-burstable-pod4741fe46_8656_4d73_808b_6eb0281dd736.slice - libcontainer container kubepods-burstable-pod4741fe46_8656_4d73_808b_6eb0281dd736.slice. 
May 27 03:18:52.483366 kubelet[2725]: I0527 03:18:52.483290 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4741fe46-8656-4d73-808b-6eb0281dd736-config-volume\") pod \"coredns-674b8bbfcf-cpd8x\" (UID: \"4741fe46-8656-4d73-808b-6eb0281dd736\") " pod="kube-system/coredns-674b8bbfcf-cpd8x" May 27 03:18:52.483366 kubelet[2725]: I0527 03:18:52.483343 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgrr\" (UniqueName: \"kubernetes.io/projected/4741fe46-8656-4d73-808b-6eb0281dd736-kube-api-access-8fgrr\") pod \"coredns-674b8bbfcf-cpd8x\" (UID: \"4741fe46-8656-4d73-808b-6eb0281dd736\") " pod="kube-system/coredns-674b8bbfcf-cpd8x" May 27 03:18:52.483366 kubelet[2725]: I0527 03:18:52.483366 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5471488-5260-476c-bc70-a91cfcf5748a-config-volume\") pod \"coredns-674b8bbfcf-g8rfl\" (UID: \"d5471488-5260-476c-bc70-a91cfcf5748a\") " pod="kube-system/coredns-674b8bbfcf-g8rfl" May 27 03:18:52.483366 kubelet[2725]: I0527 03:18:52.483386 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2bl\" (UniqueName: \"kubernetes.io/projected/d5471488-5260-476c-bc70-a91cfcf5748a-kube-api-access-bc2bl\") pod \"coredns-674b8bbfcf-g8rfl\" (UID: \"d5471488-5260-476c-bc70-a91cfcf5748a\") " pod="kube-system/coredns-674b8bbfcf-g8rfl" May 27 03:18:52.601307 kubelet[2725]: E0527 03:18:52.601224 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:52.696610 kubelet[2725]: E0527 03:18:52.696562 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:52.697506 containerd[1586]: time="2025-05-27T03:18:52.697460242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g8rfl,Uid:d5471488-5260-476c-bc70-a91cfcf5748a,Namespace:kube-system,Attempt:0,}" May 27 03:18:52.758119 kubelet[2725]: E0527 03:18:52.758072 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:52.758746 containerd[1586]: time="2025-05-27T03:18:52.758697439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpd8x,Uid:4741fe46-8656-4d73-808b-6eb0281dd736,Namespace:kube-system,Attempt:0,}" May 27 03:18:53.609153 kubelet[2725]: E0527 03:18:53.609082 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:18:53.912309 systemd-networkd[1502]: cilium_host: Link UP May 27 03:18:53.912531 systemd-networkd[1502]: cilium_net: Link UP May 27 03:18:53.912757 systemd-networkd[1502]: cilium_net: Gained carrier May 27 03:18:53.913020 systemd-networkd[1502]: cilium_host: Gained carrier May 27 03:18:54.045964 systemd-networkd[1502]: cilium_vxlan: Link UP May 27 03:18:54.046228 systemd-networkd[1502]: cilium_vxlan: Gained carrier May 27 03:18:54.113266 systemd-networkd[1502]: cilium_host: Gained IPv6LL
May 27 03:18:54.316100 kernel: NET: Registered PF_ALG protocol family May 27 03:18:54.665439 systemd-networkd[1502]: cilium_net: Gained IPv6LL May 27 03:18:55.076537 systemd-networkd[1502]: lxc_health: Link UP May 27 03:18:55.083792 systemd-networkd[1502]: lxc_health: Gained carrier May 27 03:18:55.252411 kernel: eth0: renamed from tmp48bda May 27 03:18:55.251694 systemd-networkd[1502]: lxc036148eb0365: Link UP May 27 03:18:55.253652 systemd-networkd[1502]: lxc036148eb0365: Gained carrier May 27 03:18:55.299654 systemd-networkd[1502]: lxc29128b417761: Link UP May 27 03:18:55.312003 kernel: eth0: renamed from tmpd6aaf May 27 03:18:55.318059 systemd-networkd[1502]: lxc29128b417761: Gained carrier May 27 03:18:55.881178 systemd-networkd[1502]: cilium_vxlan: Gained IPv6LL May 27 03:18:56.632940 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610). May 27 03:18:56.649241 systemd-networkd[1502]: lxc_health: Gained IPv6LL May 27 03:18:56.695875 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:18:56.698257 sshd-session[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:18:56.703924 systemd-logind[1564]: New session 8 of user core. May 27 03:18:56.714242 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:18:56.737822 kubelet[2725]: I0527 03:18:56.737685 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2ptxk" podStartSLOduration=13.624715608 podStartE2EDuration="23.737560156s" podCreationTimestamp="2025-05-27 03:18:33 +0000 UTC" firstStartedPulling="2025-05-27 03:18:34.88292568 +0000 UTC m=+8.564901034" lastFinishedPulling="2025-05-27 03:18:44.995770228 +0000 UTC m=+18.677745582" observedRunningTime="2025-05-27 03:18:52.652850035 +0000 UTC m=+26.334825419" watchObservedRunningTime="2025-05-27 03:18:56.737560156 +0000 UTC m=+30.419535510" May 27 03:18:56.884702 sshd[3883]: Connection closed by 10.0.0.1 port 42610 May 27 03:18:56.885235 sshd-session[3881]: pam_unix(sshd:session): session closed for user core May 27 03:18:56.892929 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. May 27 03:18:56.893616 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:42610.service: Deactivated successfully. May 27 03:18:56.896889 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:18:56.900029 systemd-logind[1564]: Removed session 8.
May 27 03:18:56.970257 systemd-networkd[1502]: lxc29128b417761: Gained IPv6LL May 27 03:18:57.225213 systemd-networkd[1502]: lxc036148eb0365: Gained IPv6LL May 27 03:18:59.904292 containerd[1586]: time="2025-05-27T03:18:59.904200975Z" level=info msg="connecting to shim d6aafaf6f51310c8ac8339f231336a1d5f50863275f69b712163d34ac1615ad7" address="unix:///run/containerd/s/e007eaba22edd1724f5ecefb8b05ad67467446519ef3a6673ca79b8dbdc0fb84" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:59.908570 containerd[1586]: time="2025-05-27T03:18:59.908509391Z" level=info msg="connecting to shim 48bda4022594b6c1c4b75726de7fbd7e72f2aaa90b71819ef9a2d40c2e051501" address="unix:///run/containerd/s/207c109f8a2e843390c9b26616bbef746aaf493c023d6ca9ab9eb2ec49bd8b65" namespace=k8s.io protocol=ttrpc version=3 May 27 03:18:59.941311 systemd[1]: Started cri-containerd-d6aafaf6f51310c8ac8339f231336a1d5f50863275f69b712163d34ac1615ad7.scope - libcontainer container d6aafaf6f51310c8ac8339f231336a1d5f50863275f69b712163d34ac1615ad7. May 27 03:18:59.945404 systemd[1]: Started cri-containerd-48bda4022594b6c1c4b75726de7fbd7e72f2aaa90b71819ef9a2d40c2e051501.scope - libcontainer container 48bda4022594b6c1c4b75726de7fbd7e72f2aaa90b71819ef9a2d40c2e051501. May 27 03:18:59.958491 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:18:59.962129 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:19:00.002847 containerd[1586]: time="2025-05-27T03:19:00.002796475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g8rfl,Uid:d5471488-5260-476c-bc70-a91cfcf5748a,Namespace:kube-system,Attempt:0,} returns sandbox id \"48bda4022594b6c1c4b75726de7fbd7e72f2aaa90b71819ef9a2d40c2e051501\"" May 27 03:19:00.013169 containerd[1586]: time="2025-05-27T03:19:00.013119598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpd8x,Uid:4741fe46-8656-4d73-808b-6eb0281dd736,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6aafaf6f51310c8ac8339f231336a1d5f50863275f69b712163d34ac1615ad7\"" May 27 03:19:00.015381 containerd[1586]: time="2025-05-27T03:19:00.015340155Z" level=info msg="CreateContainer within sandbox \"48bda4022594b6c1c4b75726de7fbd7e72f2aaa90b71819ef9a2d40c2e051501\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:19:00.019549 containerd[1586]: time="2025-05-27T03:19:00.019491245Z" level=info msg="CreateContainer within sandbox \"d6aafaf6f51310c8ac8339f231336a1d5f50863275f69b712163d34ac1615ad7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:19:00.041819 containerd[1586]: time="2025-05-27T03:19:00.041761429Z" level=info msg="Container 56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766: CDI devices from CRI Config.CDIDevices: []" May 27 03:19:00.078431 containerd[1586]: time="2025-05-27T03:19:00.078358439Z" level=info msg="Container f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b: CDI devices from CRI Config.CDIDevices: []" May 27 03:19:00.085632 containerd[1586]: time="2025-05-27T03:19:00.085581084Z" level=info msg="CreateContainer within sandbox \"48bda4022594b6c1c4b75726de7fbd7e72f2aaa90b71819ef9a2d40c2e051501\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766\""
May 27 03:19:00.086164 containerd[1586]: time="2025-05-27T03:19:00.086144552Z" level=info msg="StartContainer for \"56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766\"" May 27 03:19:00.088240 containerd[1586]: time="2025-05-27T03:19:00.088114869Z" level=info msg="connecting to shim 56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766" address="unix:///run/containerd/s/207c109f8a2e843390c9b26616bbef746aaf493c023d6ca9ab9eb2ec49bd8b65" protocol=ttrpc version=3 May 27 03:19:00.093416 containerd[1586]: time="2025-05-27T03:19:00.093113170Z" level=info msg="CreateContainer within sandbox \"d6aafaf6f51310c8ac8339f231336a1d5f50863275f69b712163d34ac1615ad7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b\"" May 27 03:19:00.094364 containerd[1586]: time="2025-05-27T03:19:00.094315746Z" level=info msg="StartContainer for \"f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b\"" May 27 03:19:00.096005 containerd[1586]: time="2025-05-27T03:19:00.095937018Z" level=info msg="connecting to shim f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b" address="unix:///run/containerd/s/e007eaba22edd1724f5ecefb8b05ad67467446519ef3a6673ca79b8dbdc0fb84" protocol=ttrpc version=3 May 27 03:19:00.110184 systemd[1]: Started cri-containerd-56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766.scope - libcontainer container 56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766. May 27 03:19:00.123180 systemd[1]: Started cri-containerd-f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b.scope - libcontainer container f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b. May 27 03:19:00.167168 containerd[1586]: time="2025-05-27T03:19:00.167002565Z" level=info msg="StartContainer for \"56d5228995e92b2c8a19b8351510f3856244c75a34b864038f4a6f9e9784b766\" returns successfully" May 27 03:19:00.170523 containerd[1586]: time="2025-05-27T03:19:00.170459462Z" level=info msg="StartContainer for \"f86a29e74e0bd6e50d820dd7d68c59f09c3991975e029c11325da695e0476c4b\" returns successfully" May 27 03:19:00.662985 kubelet[2725]: I0527 03:19:00.662877 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g8rfl" podStartSLOduration=26.662856389 podStartE2EDuration="26.662856389s" podCreationTimestamp="2025-05-27 03:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:19:00.648318376 +0000 UTC m=+34.330293750" watchObservedRunningTime="2025-05-27 03:19:00.662856389 +0000 UTC m=+34.344831733" May 27 03:19:00.663763 kubelet[2725]: I0527 03:19:00.663006 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cpd8x" podStartSLOduration=26.66300099 podStartE2EDuration="26.66300099s" podCreationTimestamp="2025-05-27 03:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:19:00.661003611 +0000 UTC m=+34.342978996" watchObservedRunningTime="2025-05-27 03:19:00.66300099 +0000 UTC m=+34.344976344" May 27 03:19:01.901060 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:42618.service - OpenSSH per-connection server daemon (10.0.0.1:42618).
May 27 03:19:01.966128 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 42618 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:01.968051 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:01.973341 systemd-logind[1564]: New session 9 of user core. May 27 03:19:01.983165 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:19:02.144025 sshd[4084]: Connection closed by 10.0.0.1 port 42618 May 27 03:19:02.144410 sshd-session[4082]: pam_unix(sshd:session): session closed for user core May 27 03:19:02.150756 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:42618.service: Deactivated successfully. May 27 03:19:02.153736 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:19:02.155094 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. May 27 03:19:02.157121 systemd-logind[1564]: Removed session 9. May 27 03:19:07.162789 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:33798.service - OpenSSH per-connection server daemon (10.0.0.1:33798). May 27 03:19:07.224309 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 33798 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:07.226512 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:07.232164 systemd-logind[1564]: New session 10 of user core. May 27 03:19:07.239373 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:19:07.371390 sshd[4103]: Connection closed by 10.0.0.1 port 33798 May 27 03:19:07.371754 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 27 03:19:07.377004 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:33798.service: Deactivated successfully. May 27 03:19:07.379311 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:19:07.380153 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. May 27 03:19:07.381617 systemd-logind[1564]: Removed session 10. May 27 03:19:12.391521 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:33814.service - OpenSSH per-connection server daemon (10.0.0.1:33814). May 27 03:19:12.449385 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 33814 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:12.451424 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:12.456813 systemd-logind[1564]: New session 11 of user core. May 27 03:19:12.467235 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:19:12.608183 sshd[4119]: Connection closed by 10.0.0.1 port 33814 May 27 03:19:12.608838 sshd-session[4117]: pam_unix(sshd:session): session closed for user core May 27 03:19:12.618315 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:33814.service: Deactivated successfully. May 27 03:19:12.620429 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:19:12.621414 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. May 27 03:19:12.625496 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:33822.service - OpenSSH per-connection server daemon (10.0.0.1:33822). May 27 03:19:12.626450 systemd-logind[1564]: Removed session 11. 
May 27 03:19:12.688279 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 33822 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:12.690302 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:12.696105 systemd-logind[1564]: New session 12 of user core. May 27 03:19:12.706199 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:19:12.947001 sshd[4135]: Connection closed by 10.0.0.1 port 33822 May 27 03:19:12.947652 sshd-session[4133]: pam_unix(sshd:session): session closed for user core May 27 03:19:12.962225 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:33822.service: Deactivated successfully. May 27 03:19:12.964526 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:19:12.965511 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. May 27 03:19:12.969241 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:33826.service - OpenSSH per-connection server daemon (10.0.0.1:33826). May 27 03:19:12.970191 systemd-logind[1564]: Removed session 12. May 27 03:19:13.027304 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 33826 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:13.029162 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:13.035040 systemd-logind[1564]: New session 13 of user core. May 27 03:19:13.048245 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:19:13.179792 sshd[4149]: Connection closed by 10.0.0.1 port 33826 May 27 03:19:13.180220 sshd-session[4147]: pam_unix(sshd:session): session closed for user core May 27 03:19:13.186218 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:33826.service: Deactivated successfully. May 27 03:19:13.189149 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:19:13.190093 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. May 27 03:19:13.192022 systemd-logind[1564]: Removed session 13. May 27 03:19:18.206514 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:54740.service - OpenSSH per-connection server daemon (10.0.0.1:54740). May 27 03:19:18.270130 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 54740 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:18.272132 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:18.277455 systemd-logind[1564]: New session 14 of user core. May 27 03:19:18.287167 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:19:18.419196 sshd[4164]: Connection closed by 10.0.0.1 port 54740 May 27 03:19:18.419611 sshd-session[4162]: pam_unix(sshd:session): session closed for user core May 27 03:19:18.423658 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:54740.service: Deactivated successfully. May 27 03:19:18.427107 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:19:18.429296 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. May 27 03:19:18.431252 systemd-logind[1564]: Removed session 14. May 27 03:19:23.440042 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:57488.service - OpenSSH per-connection server daemon (10.0.0.1:57488). 
May 27 03:19:23.497341 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 57488 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:23.499554 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:23.505354 systemd-logind[1564]: New session 15 of user core. May 27 03:19:23.515335 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:19:23.659509 sshd[4179]: Connection closed by 10.0.0.1 port 57488 May 27 03:19:23.660057 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 27 03:19:23.671825 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:57488.service: Deactivated successfully. May 27 03:19:23.674230 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:19:23.675514 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. May 27 03:19:23.679439 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:57492.service - OpenSSH per-connection server daemon (10.0.0.1:57492). May 27 03:19:23.680882 systemd-logind[1564]: Removed session 15. May 27 03:19:23.734303 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 57492 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:23.736444 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:23.743533 systemd-logind[1564]: New session 16 of user core. May 27 03:19:23.758306 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:19:24.134851 sshd[4195]: Connection closed by 10.0.0.1 port 57492 May 27 03:19:24.136912 sshd-session[4193]: pam_unix(sshd:session): session closed for user core May 27 03:19:24.148867 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:57492.service: Deactivated successfully. May 27 03:19:24.151707 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:19:24.152671 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. May 27 03:19:24.157682 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:57504.service - OpenSSH per-connection server daemon (10.0.0.1:57504). May 27 03:19:24.158614 systemd-logind[1564]: Removed session 16. May 27 03:19:24.219075 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 57504 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:24.220762 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:24.225652 systemd-logind[1564]: New session 17 of user core. May 27 03:19:24.236216 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:19:25.117437 sshd[4209]: Connection closed by 10.0.0.1 port 57504 May 27 03:19:25.118197 sshd-session[4207]: pam_unix(sshd:session): session closed for user core May 27 03:19:25.127730 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:57504.service: Deactivated successfully. May 27 03:19:25.130058 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:19:25.131565 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. May 27 03:19:25.136239 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:57506.service - OpenSSH per-connection server daemon (10.0.0.1:57506). May 27 03:19:25.137279 systemd-logind[1564]: Removed session 17. 
May 27 03:19:25.202279 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 57506 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:25.205120 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:25.210717 systemd-logind[1564]: New session 18 of user core. May 27 03:19:25.221142 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:19:25.548649 sshd[4232]: Connection closed by 10.0.0.1 port 57506 May 27 03:19:25.549159 sshd-session[4230]: pam_unix(sshd:session): session closed for user core May 27 03:19:25.562905 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:57506.service: Deactivated successfully. May 27 03:19:25.566248 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:19:25.569059 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. May 27 03:19:25.572385 systemd-logind[1564]: Removed session 18. May 27 03:19:25.574647 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:57520.service - OpenSSH per-connection server daemon (10.0.0.1:57520). May 27 03:19:25.636220 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 57520 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:25.638150 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:25.643799 systemd-logind[1564]: New session 19 of user core. May 27 03:19:25.653251 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:19:25.778105 sshd[4245]: Connection closed by 10.0.0.1 port 57520 May 27 03:19:25.778507 sshd-session[4243]: pam_unix(sshd:session): session closed for user core May 27 03:19:25.781950 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:57520.service: Deactivated successfully. May 27 03:19:25.784241 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:19:25.786155 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. May 27 03:19:25.787572 systemd-logind[1564]: Removed session 19. May 27 03:19:30.792725 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536). May 27 03:19:30.859839 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:30.862315 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:30.867823 systemd-logind[1564]: New session 20 of user core. May 27 03:19:30.878226 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:19:31.011766 sshd[4263]: Connection closed by 10.0.0.1 port 57536 May 27 03:19:31.012165 sshd-session[4261]: pam_unix(sshd:session): session closed for user core May 27 03:19:31.016814 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:57536.service: Deactivated successfully. May 27 03:19:31.019283 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:19:31.020251 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. May 27 03:19:31.021646 systemd-logind[1564]: Removed session 20. May 27 03:19:36.030636 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:37884.service - OpenSSH per-connection server daemon (10.0.0.1:37884). 
May 27 03:19:36.099388 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 37884 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:36.101695 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:36.107661 systemd-logind[1564]: New session 21 of user core. May 27 03:19:36.117182 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:19:36.237685 sshd[4282]: Connection closed by 10.0.0.1 port 37884 May 27 03:19:36.238068 sshd-session[4280]: pam_unix(sshd:session): session closed for user core May 27 03:19:36.241413 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:37884.service: Deactivated successfully. May 27 03:19:36.243640 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:19:36.245557 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. May 27 03:19:36.247689 systemd-logind[1564]: Removed session 21. May 27 03:19:41.251280 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:37888.service - OpenSSH per-connection server daemon (10.0.0.1:37888). May 27 03:19:41.298806 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 37888 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:41.300662 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:41.305863 systemd-logind[1564]: New session 22 of user core. May 27 03:19:41.316142 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:19:41.427998 sshd[4297]: Connection closed by 10.0.0.1 port 37888 May 27 03:19:41.428315 sshd-session[4295]: pam_unix(sshd:session): session closed for user core May 27 03:19:41.433083 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:37888.service: Deactivated successfully. May 27 03:19:41.435232 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:19:41.436292 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. May 27 03:19:41.437780 systemd-logind[1564]: Removed session 22. May 27 03:19:46.446188 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:46224.service - OpenSSH per-connection server daemon (10.0.0.1:46224). May 27 03:19:46.501809 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 46224 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:46.503250 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:46.507966 systemd-logind[1564]: New session 23 of user core. May 27 03:19:46.519138 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:19:46.636093 sshd[4312]: Connection closed by 10.0.0.1 port 46224 May 27 03:19:46.636386 sshd-session[4310]: pam_unix(sshd:session): session closed for user core May 27 03:19:46.640686 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:46224.service: Deactivated successfully. May 27 03:19:46.642746 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:19:46.643659 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. May 27 03:19:46.644880 systemd-logind[1564]: Removed session 23. May 27 03:19:51.654073 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:46230.service - OpenSSH per-connection server daemon (10.0.0.1:46230). 
May 27 03:19:51.709522 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 46230 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:51.711373 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:51.716419 systemd-logind[1564]: New session 24 of user core. May 27 03:19:51.726122 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 03:19:51.849480 sshd[4330]: Connection closed by 10.0.0.1 port 46230 May 27 03:19:51.849867 sshd-session[4328]: pam_unix(sshd:session): session closed for user core May 27 03:19:51.854915 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:46230.service: Deactivated successfully. May 27 03:19:51.857314 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:19:51.858330 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. May 27 03:19:51.859781 systemd-logind[1564]: Removed session 24. May 27 03:19:56.868320 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:41700.service - OpenSSH per-connection server daemon (10.0.0.1:41700). May 27 03:19:56.915569 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 41700 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:56.917367 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:56.922953 systemd-logind[1564]: New session 25 of user core. May 27 03:19:56.930161 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 03:19:57.047483 sshd[4345]: Connection closed by 10.0.0.1 port 41700 May 27 03:19:57.047943 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 27 03:19:57.052805 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:41700.service: Deactivated successfully. May 27 03:19:57.055169 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:19:57.057066 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. May 27 03:19:57.058726 systemd-logind[1564]: Removed session 25. May 27 03:19:59.297420 update_engine[1567]: I20250527 03:19:59.297322 1567 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 03:19:59.297420 update_engine[1567]: I20250527 03:19:59.297408 1567 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 27 03:19:59.297908 update_engine[1567]: I20250527 03:19:59.297730 1567 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 27 03:19:59.298459 update_engine[1567]: I20250527 03:19:59.298431 1567 omaha_request_params.cc:62] Current group set to alpha May 27 03:19:59.299297 update_engine[1567]: I20250527 03:19:59.299252 1567 update_attempter.cc:499] Already updated boot flags. Skipping. May 27 03:19:59.299297 update_engine[1567]: I20250527 03:19:59.299274 1567 update_attempter.cc:643] Scheduling an action processor start. 
May 27 03:19:59.299297 update_engine[1567]: I20250527 03:19:59.299293 1567 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 03:19:59.299423 update_engine[1567]: I20250527 03:19:59.299362 1567 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 27 03:19:59.299455 update_engine[1567]: I20250527 03:19:59.299440 1567 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 03:19:59.299480 update_engine[1567]: I20250527 03:19:59.299450 1567 omaha_request_action.cc:272] Request: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: May 27 03:19:59.299480 update_engine[1567]: I20250527 03:19:59.299458 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 03:19:59.304505 locksmithd[1610]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 27 03:19:59.304905 update_engine[1567]: I20250527 03:19:59.304735 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 03:19:59.305258 update_engine[1567]: I20250527 03:19:59.305220 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 03:19:59.317795 update_engine[1567]: E20250527 03:19:59.317740 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 03:19:59.317854 update_engine[1567]: I20250527 03:19:59.317826 1567 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 27 03:20:02.062473 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:41716.service - OpenSSH per-connection server daemon (10.0.0.1:41716). May 27 03:20:02.126164 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 41716 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:02.127600 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:02.132949 systemd-logind[1564]: New session 26 of user core. May 27 03:20:02.142135 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 03:20:02.269222 sshd[4360]: Connection closed by 10.0.0.1 port 41716 May 27 03:20:02.269612 sshd-session[4358]: pam_unix(sshd:session): session closed for user core May 27 03:20:02.275463 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:41716.service: Deactivated successfully. May 27 03:20:02.278307 systemd[1]: session-26.scope: Deactivated successfully. May 27 03:20:02.279771 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit. May 27 03:20:02.281728 systemd-logind[1564]: Removed session 26. May 27 03:20:07.287355 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:51672.service - OpenSSH per-connection server daemon (10.0.0.1:51672). May 27 03:20:07.347046 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 51672 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:07.349025 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:07.353794 systemd-logind[1564]: New session 27 of user core. May 27 03:20:07.365136 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 27 03:20:07.565244 sshd[4377]: Connection closed by 10.0.0.1 port 51672 May 27 03:20:07.565537 sshd-session[4375]: pam_unix(sshd:session): session closed for user core May 27 03:20:07.569601 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:51672.service: Deactivated successfully. May 27 03:20:07.571966 systemd[1]: session-27.scope: Deactivated successfully. May 27 03:20:07.572784 systemd-logind[1564]: Session 27 logged out. Waiting for processes to exit. May 27 03:20:07.574119 systemd-logind[1564]: Removed session 27. May 27 03:20:09.297855 update_engine[1567]: I20250527 03:20:09.297722 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 03:20:09.298379 update_engine[1567]: I20250527 03:20:09.298135 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 03:20:09.298493 update_engine[1567]: I20250527 03:20:09.298460 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 03:20:09.307263 update_engine[1567]: E20250527 03:20:09.307199 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 03:20:09.307336 update_engine[1567]: I20250527 03:20:09.307291 1567 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 27 03:20:12.590376 systemd[1]: Started sshd@27-10.0.0.71:22-10.0.0.1:51688.service - OpenSSH per-connection server daemon (10.0.0.1:51688). May 27 03:20:12.646516 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 51688 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:12.648140 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:12.653627 systemd-logind[1564]: New session 28 of user core. May 27 03:20:12.664107 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 03:20:12.800460 sshd[4394]: Connection closed by 10.0.0.1 port 51688 May 27 03:20:12.800834 sshd-session[4392]: pam_unix(sshd:session): session closed for user core May 27 03:20:12.805677 systemd[1]: sshd@27-10.0.0.71:22-10.0.0.1:51688.service: Deactivated successfully. May 27 03:20:12.807797 systemd[1]: session-28.scope: Deactivated successfully. May 27 03:20:12.808807 systemd-logind[1564]: Session 28 logged out. Waiting for processes to exit. May 27 03:20:12.810336 systemd-logind[1564]: Removed session 28. May 27 03:20:17.819801 systemd[1]: Started sshd@28-10.0.0.71:22-10.0.0.1:46554.service - OpenSSH per-connection server daemon (10.0.0.1:46554). May 27 03:20:17.882277 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 46554 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:17.884497 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:17.892284 systemd-logind[1564]: New session 29 of user core. May 27 03:20:17.908449 systemd[1]: Started session-29.scope - Session 29 of User core. May 27 03:20:18.034451 sshd[4411]: Connection closed by 10.0.0.1 port 46554 May 27 03:20:18.034907 sshd-session[4409]: pam_unix(sshd:session): session closed for user core May 27 03:20:18.039470 systemd[1]: sshd@28-10.0.0.71:22-10.0.0.1:46554.service: Deactivated successfully. May 27 03:20:18.041636 systemd[1]: session-29.scope: Deactivated successfully. May 27 03:20:18.042469 systemd-logind[1564]: Session 29 logged out. Waiting for processes to exit. May 27 03:20:18.043893 systemd-logind[1564]: Removed session 29. 
May 27 03:20:19.297428 update_engine[1567]: I20250527 03:20:19.297312 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 03:20:19.298170 update_engine[1567]: I20250527 03:20:19.297654 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 03:20:19.298170 update_engine[1567]: I20250527 03:20:19.298004 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 03:20:19.333325 update_engine[1567]: E20250527 03:20:19.333249 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 03:20:19.333325 update_engine[1567]: I20250527 03:20:19.333333 1567 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 27 03:20:23.054801 systemd[1]: Started sshd@29-10.0.0.71:22-10.0.0.1:33582.service - OpenSSH per-connection server daemon (10.0.0.1:33582). May 27 03:20:23.127158 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 33582 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:23.128990 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:23.134165 systemd-logind[1564]: New session 30 of user core. May 27 03:20:23.144183 systemd[1]: Started session-30.scope - Session 30 of User core. May 27 03:20:23.274923 sshd[4426]: Connection closed by 10.0.0.1 port 33582 May 27 03:20:23.275348 sshd-session[4424]: pam_unix(sshd:session): session closed for user core May 27 03:20:23.280262 systemd[1]: sshd@29-10.0.0.71:22-10.0.0.1:33582.service: Deactivated successfully. May 27 03:20:23.282554 systemd[1]: session-30.scope: Deactivated successfully. May 27 03:20:23.283380 systemd-logind[1564]: Session 30 logged out. Waiting for processes to exit. May 27 03:20:23.285176 systemd-logind[1564]: Removed session 30. May 27 03:20:28.291851 systemd[1]: Started sshd@30-10.0.0.71:22-10.0.0.1:33598.service - OpenSSH per-connection server daemon (10.0.0.1:33598). May 27 03:20:28.342480 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 33598 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:28.344415 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:28.349150 systemd-logind[1564]: New session 31 of user core. May 27 03:20:28.357201 systemd[1]: Started session-31.scope - Session 31 of User core. May 27 03:20:28.476630 sshd[4444]: Connection closed by 10.0.0.1 port 33598 May 27 03:20:28.476990 sshd-session[4442]: pam_unix(sshd:session): session closed for user core May 27 03:20:28.481695 systemd[1]: sshd@30-10.0.0.71:22-10.0.0.1:33598.service: Deactivated successfully. May 27 03:20:28.484560 systemd[1]: session-31.scope: Deactivated successfully. May 27 03:20:28.485707 systemd-logind[1564]: Session 31 logged out. Waiting for processes to exit. May 27 03:20:28.487937 systemd-logind[1564]: Removed session 31. May 27 03:20:29.297105 update_engine[1567]: I20250527 03:20:29.296948 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 03:20:29.297631 update_engine[1567]: I20250527 03:20:29.297343 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 03:20:29.297738 update_engine[1567]: I20250527 03:20:29.297699 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 03:20:29.309160 update_engine[1567]: E20250527 03:20:29.309072 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 03:20:29.309273 update_engine[1567]: I20250527 03:20:29.309180 1567 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 03:20:29.309273 update_engine[1567]: I20250527 03:20:29.309191 1567 omaha_request_action.cc:617] Omaha request response: May 27 03:20:29.310538 update_engine[1567]: E20250527 03:20:29.310453 1567 omaha_request_action.cc:636] Omaha request network transfer failed. May 27 03:20:29.310594 update_engine[1567]: I20250527 03:20:29.310571 1567 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 27 03:20:29.310594 update_engine[1567]: I20250527 03:20:29.310585 1567 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 03:20:29.310647 update_engine[1567]: I20250527 03:20:29.310593 1567 update_attempter.cc:306] Processing Done. May 27 03:20:29.310647 update_engine[1567]: E20250527 03:20:29.310616 1567 update_attempter.cc:619] Update failed. May 27 03:20:29.310647 update_engine[1567]: I20250527 03:20:29.310629 1567 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 27 03:20:29.310647 update_engine[1567]: I20250527 03:20:29.310637 1567 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 27 03:20:29.310741 update_engine[1567]: I20250527 03:20:29.310645 1567 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 27 03:20:29.310763 update_engine[1567]: I20250527 03:20:29.310736 1567 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 03:20:29.310786 update_engine[1567]: I20250527 03:20:29.310773 1567 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 03:20:29.310786 update_engine[1567]: I20250527 03:20:29.310781 1567 omaha_request_action.cc:272] Request: May 27 03:20:29.310786 update_engine[1567]: May 27 03:20:29.310786 update_engine[1567]: May 27 03:20:29.310786 update_engine[1567]: May 27 03:20:29.310786 update_engine[1567]: May 27 03:20:29.310786 update_engine[1567]: May 27 03:20:29.310786 update_engine[1567]: May 27 03:20:29.310961 update_engine[1567]: I20250527 03:20:29.310788 1567 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 03:20:29.311054 update_engine[1567]: I20250527 03:20:29.311026 1567 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 03:20:29.311228 locksmithd[1610]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 27 03:20:29.311563 update_engine[1567]: I20250527 03:20:29.311355 1567 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 03:20:29.318199 update_engine[1567]: E20250527 03:20:29.318152 1567 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 03:20:29.318199 update_engine[1567]: I20250527 03:20:29.318197 1567 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 03:20:29.318199 update_engine[1567]: I20250527 03:20:29.318204 1567 omaha_request_action.cc:617] Omaha request response: May 27 03:20:29.318199 update_engine[1567]: I20250527 03:20:29.318210 1567 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 03:20:29.318422 update_engine[1567]: I20250527 03:20:29.318217 1567 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 03:20:29.318422 update_engine[1567]: I20250527 03:20:29.318222 1567 update_attempter.cc:306] Processing Done. May 27 03:20:29.318422 update_engine[1567]: I20250527 03:20:29.318228 1567 update_attempter.cc:310] Error event sent. May 27 03:20:29.318422 update_engine[1567]: I20250527 03:20:29.318243 1567 update_check_scheduler.cc:74] Next update check in 40m55s May 27 03:20:29.318639 locksmithd[1610]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 27 03:20:33.490531 systemd[1]: Started sshd@31-10.0.0.71:22-10.0.0.1:57062.service - OpenSSH per-connection server daemon (10.0.0.1:57062). May 27 03:20:33.549794 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 57062 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:33.552057 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:33.557551 systemd-logind[1564]: New session 32 of user core. May 27 03:20:33.566136 systemd[1]: Started session-32.scope - Session 32 of User core. May 27 03:20:33.685685 sshd[4460]: Connection closed by 10.0.0.1 port 57062 May 27 03:20:33.685693 sshd-session[4458]: pam_unix(sshd:session): session closed for user core May 27 03:20:33.692677 systemd[1]: sshd@31-10.0.0.71:22-10.0.0.1:57062.service: Deactivated successfully. May 27 03:20:33.695314 systemd[1]: session-32.scope: Deactivated successfully. May 27 03:20:33.696659 systemd-logind[1564]: Session 32 logged out. Waiting for processes to exit. May 27 03:20:33.698942 systemd-logind[1564]: Removed session 32. May 27 03:20:38.702652 systemd[1]: Started sshd@32-10.0.0.71:22-10.0.0.1:57076.service - OpenSSH per-connection server daemon (10.0.0.1:57076). May 27 03:20:38.762315 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 57076 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:38.764182 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:38.770404 systemd-logind[1564]: New session 33 of user core. May 27 03:20:38.781250 systemd[1]: Started session-33.scope - Session 33 of User core. May 27 03:20:38.904551 sshd[4477]: Connection closed by 10.0.0.1 port 57076 May 27 03:20:38.904921 sshd-session[4475]: pam_unix(sshd:session): session closed for user core May 27 03:20:38.909911 systemd[1]: sshd@32-10.0.0.71:22-10.0.0.1:57076.service: Deactivated successfully. May 27 03:20:38.912263 systemd[1]: session-33.scope: Deactivated successfully. May 27 03:20:38.913047 systemd-logind[1564]: Session 33 logged out. Waiting for processes to exit. May 27 03:20:38.914542 systemd-logind[1564]: Removed session 33. 
May 27 03:20:43.919123 systemd[1]: Started sshd@33-10.0.0.71:22-10.0.0.1:40160.service - OpenSSH per-connection server daemon (10.0.0.1:40160). May 27 03:20:43.980084 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:43.981861 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:43.987286 systemd-logind[1564]: New session 34 of user core. May 27 03:20:44.002184 systemd[1]: Started session-34.scope - Session 34 of User core. May 27 03:20:44.117155 sshd[4493]: Connection closed by 10.0.0.1 port 40160 May 27 03:20:44.117541 sshd-session[4491]: pam_unix(sshd:session): session closed for user core May 27 03:20:44.122373 systemd[1]: sshd@33-10.0.0.71:22-10.0.0.1:40160.service: Deactivated successfully. May 27 03:20:44.124597 systemd[1]: session-34.scope: Deactivated successfully. May 27 03:20:44.125361 systemd-logind[1564]: Session 34 logged out. Waiting for processes to exit. May 27 03:20:44.126664 systemd-logind[1564]: Removed session 34. May 27 03:20:49.136387 systemd[1]: Started sshd@34-10.0.0.71:22-10.0.0.1:40174.service - OpenSSH per-connection server daemon (10.0.0.1:40174). May 27 03:20:49.196435 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 40174 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:49.197876 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:49.202677 systemd-logind[1564]: New session 35 of user core. May 27 03:20:49.212193 systemd[1]: Started session-35.scope - Session 35 of User core. May 27 03:20:49.320082 sshd[4508]: Connection closed by 10.0.0.1 port 40174 May 27 03:20:49.320438 sshd-session[4506]: pam_unix(sshd:session): session closed for user core May 27 03:20:49.324872 systemd[1]: sshd@34-10.0.0.71:22-10.0.0.1:40174.service: Deactivated successfully. May 27 03:20:49.327160 systemd[1]: session-35.scope: Deactivated successfully. May 27 03:20:49.328917 systemd-logind[1564]: Session 35 logged out. Waiting for processes to exit. May 27 03:20:49.330486 systemd-logind[1564]: Removed session 35. May 27 03:20:54.337207 systemd[1]: Started sshd@35-10.0.0.71:22-10.0.0.1:59080.service - OpenSSH per-connection server daemon (10.0.0.1:59080). May 27 03:20:54.393372 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 59080 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:54.395782 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:54.400484 systemd-logind[1564]: New session 36 of user core. May 27 03:20:54.411104 systemd[1]: Started session-36.scope - Session 36 of User core. May 27 03:20:54.567516 sshd[4523]: Connection closed by 10.0.0.1 port 59080 May 27 03:20:54.567877 sshd-session[4521]: pam_unix(sshd:session): session closed for user core May 27 03:20:54.572968 systemd[1]: sshd@35-10.0.0.71:22-10.0.0.1:59080.service: Deactivated successfully. May 27 03:20:54.575390 systemd[1]: session-36.scope: Deactivated successfully. May 27 03:20:54.576415 systemd-logind[1564]: Session 36 logged out. Waiting for processes to exit. May 27 03:20:54.577891 systemd-logind[1564]: Removed session 36. May 27 03:20:59.584056 systemd[1]: Started sshd@36-10.0.0.71:22-10.0.0.1:59096.service - OpenSSH per-connection server daemon (10.0.0.1:59096). 
May 27 03:20:59.656253 sshd[4536]: Accepted publickey for core from 10.0.0.1 port 59096 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:59.658256 sshd-session[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:59.663644 systemd-logind[1564]: New session 37 of user core. May 27 03:20:59.678173 systemd[1]: Started session-37.scope - Session 37 of User core. May 27 03:20:59.815597 sshd[4538]: Connection closed by 10.0.0.1 port 59096 May 27 03:20:59.815948 sshd-session[4536]: pam_unix(sshd:session): session closed for user core May 27 03:20:59.820391 systemd[1]: sshd@36-10.0.0.71:22-10.0.0.1:59096.service: Deactivated successfully. May 27 03:20:59.822579 systemd[1]: session-37.scope: Deactivated successfully. May 27 03:20:59.823527 systemd-logind[1564]: Session 37 logged out. Waiting for processes to exit. May 27 03:20:59.824928 systemd-logind[1564]: Removed session 37. May 27 03:21:04.840281 systemd[1]: Started sshd@37-10.0.0.71:22-10.0.0.1:51074.service - OpenSSH per-connection server daemon (10.0.0.1:51074). May 27 03:21:04.898405 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 51074 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:04.900425 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:04.905399 systemd-logind[1564]: New session 38 of user core. May 27 03:21:04.915125 systemd[1]: Started session-38.scope - Session 38 of User core. May 27 03:21:05.029325 sshd[4554]: Connection closed by 10.0.0.1 port 51074 May 27 03:21:05.029687 sshd-session[4552]: pam_unix(sshd:session): session closed for user core May 27 03:21:05.034653 systemd[1]: sshd@37-10.0.0.71:22-10.0.0.1:51074.service: Deactivated successfully. May 27 03:21:05.037094 systemd[1]: session-38.scope: Deactivated successfully. May 27 03:21:05.037988 systemd-logind[1564]: Session 38 logged out. Waiting for processes to exit. May 27 03:21:05.039998 systemd-logind[1564]: Removed session 38. May 27 03:21:10.050641 systemd[1]: Started sshd@38-10.0.0.71:22-10.0.0.1:51076.service - OpenSSH per-connection server daemon (10.0.0.1:51076). May 27 03:21:10.110163 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 51076 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:10.112044 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:10.117759 systemd-logind[1564]: New session 39 of user core. May 27 03:21:10.128223 systemd[1]: Started session-39.scope - Session 39 of User core. May 27 03:21:10.257038 sshd[4571]: Connection closed by 10.0.0.1 port 51076 May 27 03:21:10.257491 sshd-session[4569]: pam_unix(sshd:session): session closed for user core May 27 03:21:10.264796 systemd[1]: sshd@38-10.0.0.71:22-10.0.0.1:51076.service: Deactivated successfully. May 27 03:21:10.267698 systemd[1]: session-39.scope: Deactivated successfully. May 27 03:21:10.268723 systemd-logind[1564]: Session 39 logged out. Waiting for processes to exit. May 27 03:21:10.270528 systemd-logind[1564]: Removed session 39. May 27 03:21:15.274730 systemd[1]: Started sshd@39-10.0.0.71:22-10.0.0.1:49588.service - OpenSSH per-connection server daemon (10.0.0.1:49588). 
May 27 03:21:15.329621 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:15.331453 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:15.336647 systemd-logind[1564]: New session 40 of user core. May 27 03:21:15.347146 systemd[1]: Started session-40.scope - Session 40 of User core. May 27 03:21:15.494710 sshd[4588]: Connection closed by 10.0.0.1 port 49588 May 27 03:21:15.495099 sshd-session[4586]: pam_unix(sshd:session): session closed for user core May 27 03:21:15.499311 systemd[1]: sshd@39-10.0.0.71:22-10.0.0.1:49588.service: Deactivated successfully. May 27 03:21:15.501929 systemd[1]: session-40.scope: Deactivated successfully. May 27 03:21:15.505168 systemd-logind[1564]: Session 40 logged out. Waiting for processes to exit. May 27 03:21:15.506317 systemd-logind[1564]: Removed session 40. May 27 03:21:20.519011 systemd[1]: Started sshd@40-10.0.0.71:22-10.0.0.1:49600.service - OpenSSH per-connection server daemon (10.0.0.1:49600). May 27 03:21:20.586694 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 49600 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:20.588610 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:20.594799 systemd-logind[1564]: New session 41 of user core. May 27 03:21:20.608173 systemd[1]: Started session-41.scope - Session 41 of User core. May 27 03:21:20.730291 sshd[4603]: Connection closed by 10.0.0.1 port 49600 May 27 03:21:20.730744 sshd-session[4601]: pam_unix(sshd:session): session closed for user core May 27 03:21:20.736299 systemd[1]: sshd@40-10.0.0.71:22-10.0.0.1:49600.service: Deactivated successfully. May 27 03:21:20.738660 systemd[1]: session-41.scope: Deactivated successfully. May 27 03:21:20.739646 systemd-logind[1564]: Session 41 logged out. Waiting for processes to exit. May 27 03:21:20.741597 systemd-logind[1564]: Removed session 41. May 27 03:21:25.743846 systemd[1]: Started sshd@41-10.0.0.71:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). May 27 03:21:25.793054 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:25.794462 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:25.799715 systemd-logind[1564]: New session 42 of user core. May 27 03:21:25.811217 systemd[1]: Started session-42.scope - Session 42 of User core. May 27 03:21:25.920565 sshd[4619]: Connection closed by 10.0.0.1 port 36230 May 27 03:21:25.920897 sshd-session[4617]: pam_unix(sshd:session): session closed for user core May 27 03:21:25.925094 systemd[1]: sshd@41-10.0.0.71:22-10.0.0.1:36230.service: Deactivated successfully. May 27 03:21:25.927751 systemd[1]: session-42.scope: Deactivated successfully. May 27 03:21:25.928677 systemd-logind[1564]: Session 42 logged out. Waiting for processes to exit. May 27 03:21:25.930229 systemd-logind[1564]: Removed session 42. May 27 03:21:30.934615 systemd[1]: Started sshd@42-10.0.0.71:22-10.0.0.1:36234.service - OpenSSH per-connection server daemon (10.0.0.1:36234). 
May 27 03:21:30.997100 sshd[4635]: Accepted publickey for core from 10.0.0.1 port 36234 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:30.999167 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:31.004465 systemd-logind[1564]: New session 43 of user core. May 27 03:21:31.019243 systemd[1]: Started session-43.scope - Session 43 of User core. May 27 03:21:31.143457 sshd[4637]: Connection closed by 10.0.0.1 port 36234 May 27 03:21:31.143832 sshd-session[4635]: pam_unix(sshd:session): session closed for user core May 27 03:21:31.147555 systemd[1]: sshd@42-10.0.0.71:22-10.0.0.1:36234.service: Deactivated successfully. May 27 03:21:31.150154 systemd[1]: session-43.scope: Deactivated successfully. May 27 03:21:31.152072 systemd-logind[1564]: Session 43 logged out. Waiting for processes to exit. May 27 03:21:31.153989 systemd-logind[1564]: Removed session 43. May 27 03:21:36.157168 systemd[1]: Started sshd@43-10.0.0.71:22-10.0.0.1:48588.service - OpenSSH per-connection server daemon (10.0.0.1:48588). May 27 03:21:36.200762 sshd[4652]: Accepted publickey for core from 10.0.0.1 port 48588 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:36.202638 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:36.207326 systemd-logind[1564]: New session 44 of user core. May 27 03:21:36.218115 systemd[1]: Started session-44.scope - Session 44 of User core. May 27 03:21:36.337274 sshd[4654]: Connection closed by 10.0.0.1 port 48588 May 27 03:21:36.337712 sshd-session[4652]: pam_unix(sshd:session): session closed for user core May 27 03:21:36.343436 systemd[1]: sshd@43-10.0.0.71:22-10.0.0.1:48588.service: Deactivated successfully. May 27 03:21:36.345610 systemd[1]: session-44.scope: Deactivated successfully. May 27 03:21:36.346717 systemd-logind[1564]: Session 44 logged out. Waiting for processes to exit. May 27 03:21:36.348544 systemd-logind[1564]: Removed session 44. May 27 03:21:41.353535 systemd[1]: Started sshd@44-10.0.0.71:22-10.0.0.1:48596.service - OpenSSH per-connection server daemon (10.0.0.1:48596). May 27 03:21:41.410711 sshd[4668]: Accepted publickey for core from 10.0.0.1 port 48596 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:41.412681 sshd-session[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:41.418204 systemd-logind[1564]: New session 45 of user core. May 27 03:21:41.427227 systemd[1]: Started session-45.scope - Session 45 of User core. May 27 03:21:41.557076 sshd[4670]: Connection closed by 10.0.0.1 port 48596 May 27 03:21:41.557486 sshd-session[4668]: pam_unix(sshd:session): session closed for user core May 27 03:21:41.562769 systemd[1]: sshd@44-10.0.0.71:22-10.0.0.1:48596.service: Deactivated successfully. May 27 03:21:41.565296 systemd[1]: session-45.scope: Deactivated successfully. May 27 03:21:41.566227 systemd-logind[1564]: Session 45 logged out. Waiting for processes to exit. May 27 03:21:41.567871 systemd-logind[1564]: Removed session 45. May 27 03:21:46.575341 systemd[1]: Started sshd@45-10.0.0.71:22-10.0.0.1:43818.service - OpenSSH per-connection server daemon (10.0.0.1:43818). 
May 27 03:21:46.645686 sshd[4683]: Accepted publickey for core from 10.0.0.1 port 43818 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:46.647817 sshd-session[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:46.652529 systemd-logind[1564]: New session 46 of user core. May 27 03:21:46.661162 systemd[1]: Started session-46.scope - Session 46 of User core. May 27 03:21:46.810728 sshd[4685]: Connection closed by 10.0.0.1 port 43818 May 27 03:21:46.811282 sshd-session[4683]: pam_unix(sshd:session): session closed for user core May 27 03:21:46.817020 systemd[1]: sshd@45-10.0.0.71:22-10.0.0.1:43818.service: Deactivated successfully. May 27 03:21:46.820026 systemd[1]: session-46.scope: Deactivated successfully. May 27 03:21:46.821039 systemd-logind[1564]: Session 46 logged out. Waiting for processes to exit. May 27 03:21:46.822724 systemd-logind[1564]: Removed session 46. May 27 03:21:51.825466 systemd[1]: Started sshd@46-10.0.0.71:22-10.0.0.1:43828.service - OpenSSH per-connection server daemon (10.0.0.1:43828). May 27 03:21:51.869550 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 43828 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:51.871141 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:51.876028 systemd-logind[1564]: New session 47 of user core. May 27 03:21:51.886267 systemd[1]: Started session-47.scope - Session 47 of User core. May 27 03:21:52.029926 sshd[4700]: Connection closed by 10.0.0.1 port 43828 May 27 03:21:52.030365 sshd-session[4698]: pam_unix(sshd:session): session closed for user core May 27 03:21:52.034522 systemd[1]: sshd@46-10.0.0.71:22-10.0.0.1:43828.service: Deactivated successfully. May 27 03:21:52.036517 systemd[1]: session-47.scope: Deactivated successfully. May 27 03:21:52.037581 systemd-logind[1564]: Session 47 logged out. Waiting for processes to exit. May 27 03:21:52.038967 systemd-logind[1564]: Removed session 47. May 27 03:21:57.047721 systemd[1]: Started sshd@47-10.0.0.71:22-10.0.0.1:59172.service - OpenSSH per-connection server daemon (10.0.0.1:59172). May 27 03:21:57.098340 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 59172 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:57.100561 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:57.106159 systemd-logind[1564]: New session 48 of user core. May 27 03:21:57.118306 systemd[1]: Started session-48.scope - Session 48 of User core. May 27 03:21:57.244944 sshd[4716]: Connection closed by 10.0.0.1 port 59172 May 27 03:21:57.245411 sshd-session[4714]: pam_unix(sshd:session): session closed for user core May 27 03:21:57.250809 systemd[1]: sshd@47-10.0.0.71:22-10.0.0.1:59172.service: Deactivated successfully. May 27 03:21:57.253141 systemd[1]: session-48.scope: Deactivated successfully. May 27 03:21:57.254153 systemd-logind[1564]: Session 48 logged out. Waiting for processes to exit. May 27 03:21:57.256064 systemd-logind[1564]: Removed session 48. May 27 03:22:02.258396 systemd[1]: Started sshd@48-10.0.0.71:22-10.0.0.1:59182.service - OpenSSH per-connection server daemon (10.0.0.1:59182). 
May 27 03:22:02.314283 sshd[4729]: Accepted publickey for core from 10.0.0.1 port 59182 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:02.315932 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:02.320755 systemd-logind[1564]: New session 49 of user core. May 27 03:22:02.331406 systemd[1]: Started session-49.scope - Session 49 of User core. May 27 03:22:02.446795 sshd[4731]: Connection closed by 10.0.0.1 port 59182 May 27 03:22:02.447219 sshd-session[4729]: pam_unix(sshd:session): session closed for user core May 27 03:22:02.450572 systemd[1]: sshd@48-10.0.0.71:22-10.0.0.1:59182.service: Deactivated successfully. May 27 03:22:02.452656 systemd[1]: session-49.scope: Deactivated successfully. May 27 03:22:02.454406 systemd-logind[1564]: Session 49 logged out. Waiting for processes to exit. May 27 03:22:02.456035 systemd-logind[1564]: Removed session 49. May 27 03:22:07.463796 systemd[1]: Started sshd@49-10.0.0.71:22-10.0.0.1:41884.service - OpenSSH per-connection server daemon (10.0.0.1:41884). May 27 03:22:07.512557 sshd[4746]: Accepted publickey for core from 10.0.0.1 port 41884 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:07.514155 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:07.518544 systemd-logind[1564]: New session 50 of user core. May 27 03:22:07.528108 systemd[1]: Started session-50.scope - Session 50 of User core. May 27 03:22:07.640050 sshd[4748]: Connection closed by 10.0.0.1 port 41884 May 27 03:22:07.640341 sshd-session[4746]: pam_unix(sshd:session): session closed for user core May 27 03:22:07.643631 systemd[1]: sshd@49-10.0.0.71:22-10.0.0.1:41884.service: Deactivated successfully. May 27 03:22:07.645819 systemd[1]: session-50.scope: Deactivated successfully. May 27 03:22:07.646738 systemd-logind[1564]: Session 50 logged out. Waiting for processes to exit. May 27 03:22:07.649333 systemd-logind[1564]: Removed session 50. May 27 03:22:12.652917 systemd[1]: Started sshd@50-10.0.0.71:22-10.0.0.1:41900.service - OpenSSH per-connection server daemon (10.0.0.1:41900). May 27 03:22:12.706084 sshd[4761]: Accepted publickey for core from 10.0.0.1 port 41900 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:12.707586 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:12.712150 systemd-logind[1564]: New session 51 of user core. May 27 03:22:12.722135 systemd[1]: Started session-51.scope - Session 51 of User core. May 27 03:22:12.845708 sshd[4763]: Connection closed by 10.0.0.1 port 41900 May 27 03:22:12.846050 sshd-session[4761]: pam_unix(sshd:session): session closed for user core May 27 03:22:12.850870 systemd[1]: sshd@50-10.0.0.71:22-10.0.0.1:41900.service: Deactivated successfully. May 27 03:22:12.853198 systemd[1]: session-51.scope: Deactivated successfully. May 27 03:22:12.853996 systemd-logind[1564]: Session 51 logged out. Waiting for processes to exit. May 27 03:22:12.855349 systemd-logind[1564]: Removed session 51. May 27 03:22:17.864440 systemd[1]: Started sshd@51-10.0.0.71:22-10.0.0.1:41578.service - OpenSSH per-connection server daemon (10.0.0.1:41578). 
May 27 03:22:17.917424 sshd[4776]: Accepted publickey for core from 10.0.0.1 port 41578 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:17.919116 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:17.923896 systemd-logind[1564]: New session 52 of user core. May 27 03:22:17.931166 systemd[1]: Started session-52.scope - Session 52 of User core. May 27 03:22:18.037602 sshd[4778]: Connection closed by 10.0.0.1 port 41578 May 27 03:22:18.037947 sshd-session[4776]: pam_unix(sshd:session): session closed for user core May 27 03:22:18.042108 systemd[1]: sshd@51-10.0.0.71:22-10.0.0.1:41578.service: Deactivated successfully. May 27 03:22:18.044368 systemd[1]: session-52.scope: Deactivated successfully. May 27 03:22:18.045313 systemd-logind[1564]: Session 52 logged out. Waiting for processes to exit. May 27 03:22:18.046763 systemd-logind[1564]: Removed session 52. May 27 03:22:23.055600 systemd[1]: Started sshd@52-10.0.0.71:22-10.0.0.1:43046.service - OpenSSH per-connection server daemon (10.0.0.1:43046). May 27 03:22:23.116741 sshd[4791]: Accepted publickey for core from 10.0.0.1 port 43046 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:23.118575 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:23.123612 systemd-logind[1564]: New session 53 of user core. May 27 03:22:23.134218 systemd[1]: Started session-53.scope - Session 53 of User core. May 27 03:22:23.251325 sshd[4793]: Connection closed by 10.0.0.1 port 43046 May 27 03:22:23.251705 sshd-session[4791]: pam_unix(sshd:session): session closed for user core May 27 03:22:23.256667 systemd[1]: sshd@52-10.0.0.71:22-10.0.0.1:43046.service: Deactivated successfully. May 27 03:22:23.258836 systemd[1]: session-53.scope: Deactivated successfully. May 27 03:22:23.259661 systemd-logind[1564]: Session 53 logged out. Waiting for processes to exit. May 27 03:22:23.261084 systemd-logind[1564]: Removed session 53. May 27 03:22:28.267109 systemd[1]: Started sshd@53-10.0.0.71:22-10.0.0.1:43050.service - OpenSSH per-connection server daemon (10.0.0.1:43050). May 27 03:22:28.329918 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 43050 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:28.331964 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:28.337338 systemd-logind[1564]: New session 54 of user core. May 27 03:22:28.347116 systemd[1]: Started session-54.scope - Session 54 of User core. May 27 03:22:28.462316 sshd[4810]: Connection closed by 10.0.0.1 port 43050 May 27 03:22:28.462654 sshd-session[4808]: pam_unix(sshd:session): session closed for user core May 27 03:22:28.477046 systemd[1]: sshd@53-10.0.0.71:22-10.0.0.1:43050.service: Deactivated successfully. May 27 03:22:28.479094 systemd[1]: session-54.scope: Deactivated successfully. May 27 03:22:28.479962 systemd-logind[1564]: Session 54 logged out. Waiting for processes to exit. May 27 03:22:28.483065 systemd[1]: Started sshd@54-10.0.0.71:22-10.0.0.1:43062.service - OpenSSH per-connection server daemon (10.0.0.1:43062). May 27 03:22:28.483747 systemd-logind[1564]: Removed session 54. 
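The stretch above is an unbroken run of short-lived SSH sessions that all follow the same pattern: publickey accepted, pam session opened, session scope started, connection closed, scope and service deactivated. A minimal sketch of how one might pair the "Accepted publickey ... port N" and "Connection closed by ... port N" lines to measure each session's duration; the journal.txt file name and the year 2025 are assumptions (the syslog-style timestamps omit the year), not something taken from the log itself.

```python
import re
from datetime import datetime

TS = r"(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6})"
ACCEPT = re.compile(TS + r" sshd\[\d+\]: Accepted publickey for (\S+) from (\S+) port (\d+)")
CLOSE = re.compile(TS + r" sshd\[\d+\]: Connection closed by \S+ port (\d+)")

def parse_ts(text: str) -> datetime:
    # The journal omits the year; 2025 is assumed (see note above).
    return datetime.strptime("2025 " + text, "%Y %b %d %H:%M:%S.%f")

open_sessions = {}                       # port -> (opened_at, user, peer)
with open("journal.txt") as fh:          # hypothetical plain-text journal export
    for line in fh:
        if m := ACCEPT.search(line):
            ts, user, peer, port = m.groups()
            open_sessions[port] = (parse_ts(ts), user, peer)
        elif m := CLOSE.search(line):
            ts, port = m.groups()
            if port in open_sessions:
                opened_at, user, peer = open_sessions.pop(port)
                duration = (parse_ts(ts) - opened_at).total_seconds()
                print(f"{user}@{peer}:{port} lasted {duration:.3f}s")
```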
May 27 03:22:28.546002 sshd[4823]: Accepted publickey for core from 10.0.0.1 port 43062 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:28.547450 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:28.551895 systemd-logind[1564]: New session 55 of user core. May 27 03:22:28.566140 systemd[1]: Started session-55.scope - Session 55 of User core. May 27 03:22:29.936281 containerd[1586]: time="2025-05-27T03:22:29.936214312Z" level=info msg="StopContainer for \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" with timeout 30 (s)" May 27 03:22:29.944237 containerd[1586]: time="2025-05-27T03:22:29.944188286Z" level=info msg="Stop container \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" with signal terminated" May 27 03:22:29.959462 systemd[1]: cri-containerd-09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f.scope: Deactivated successfully. May 27 03:22:29.961870 containerd[1586]: time="2025-05-27T03:22:29.961642371Z" level=info msg="received exit event container_id:\"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" id:\"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" pid:3310 exited_at:{seconds:1748316149 nanos:961139667}" May 27 03:22:29.962064 containerd[1586]: time="2025-05-27T03:22:29.961926851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" id:\"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" pid:3310 exited_at:{seconds:1748316149 nanos:961139667}" May 27 03:22:30.001584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f-rootfs.mount: Deactivated successfully. May 27 03:22:30.012494 containerd[1586]: time="2025-05-27T03:22:30.012439083Z" level=info msg="StopContainer for \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" returns successfully" May 27 03:22:30.013290 containerd[1586]: time="2025-05-27T03:22:30.013243971Z" level=info msg="StopPodSandbox for \"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\"" May 27 03:22:30.019842 containerd[1586]: time="2025-05-27T03:22:30.019784983Z" level=info msg="Container to stop \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:22:30.027736 systemd[1]: cri-containerd-e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96.scope: Deactivated successfully. May 27 03:22:30.028796 containerd[1586]: time="2025-05-27T03:22:30.028761737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" id:\"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" pid:2941 exit_status:137 exited_at:{seconds:1748316150 nanos:28445718}" May 27 03:22:30.062778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96-rootfs.mount: Deactivated successfully. 
May 27 03:22:30.070678 containerd[1586]: time="2025-05-27T03:22:30.070613037Z" level=info msg="shim disconnected" id=e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96 namespace=k8s.io May 27 03:22:30.070678 containerd[1586]: time="2025-05-27T03:22:30.070666780Z" level=warning msg="cleaning up after shim disconnected" id=e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96 namespace=k8s.io May 27 03:22:30.088247 containerd[1586]: time="2025-05-27T03:22:30.070676989Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:22:30.103281 containerd[1586]: time="2025-05-27T03:22:30.103116338Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:22:30.113544 containerd[1586]: time="2025-05-27T03:22:30.113494752Z" level=info msg="StopContainer for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" with timeout 2 (s)" May 27 03:22:30.113843 containerd[1586]: time="2025-05-27T03:22:30.113798027Z" level=info msg="Stop container \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" with signal terminated" May 27 03:22:30.117054 containerd[1586]: time="2025-05-27T03:22:30.117016184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" id:\"8ea2b1600b6bb9c731ffcc3e8af7ca63e0d1352cd299eb7c105edb08c27fdfb0\" pid:4890 exited_at:{seconds:1748316150 nanos:109735098}" May 27 03:22:30.120586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96-shm.mount: Deactivated successfully. May 27 03:22:30.123696 systemd-networkd[1502]: lxc_health: Link DOWN May 27 03:22:30.124018 systemd-networkd[1502]: lxc_health: Lost carrier May 27 03:22:30.127446 containerd[1586]: time="2025-05-27T03:22:30.127383759Z" level=info msg="received exit event sandbox_id:\"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" exit_status:137 exited_at:{seconds:1748316150 nanos:28445718}" May 27 03:22:30.140704 containerd[1586]: time="2025-05-27T03:22:30.140637339Z" level=info msg="TearDown network for sandbox \"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" successfully" May 27 03:22:30.140704 containerd[1586]: time="2025-05-27T03:22:30.140689007Z" level=info msg="StopPodSandbox for \"e175dda86ba402cab8d051e139dbdcdda87f30ed525db19c195ee33d6704cf96\" returns successfully" May 27 03:22:30.146439 systemd[1]: cri-containerd-4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3.scope: Deactivated successfully. May 27 03:22:30.146838 systemd[1]: cri-containerd-4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3.scope: Consumed 7.806s CPU time, 123.3M memory peak, 352K read from disk, 13.3M written to disk. 
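The TaskExit and "received exit event" messages above carry protobuf-style exited_at:{seconds:... nanos:...} values rather than formatted times. A small sketch that converts one of them to UTC; the numbers are copied from the sandbox exit event above (exit_status 137), the helper name is illustrative.

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    # containerd reports task exit times as an epoch-seconds/nanoseconds pair
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# exit event of the e175dda86ba4... sandbox task above (exit_status:137)
print(exited_at_to_utc(1748316150, 28445718))
# -> 2025-05-27 03:22:30.028446+00:00, in line with the surrounding journal timestamps
```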
May 27 03:22:30.148402 containerd[1586]: time="2025-05-27T03:22:30.148256949Z" level=info msg="received exit event container_id:\"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" id:\"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" pid:3381 exited_at:{seconds:1748316150 nanos:148051980}" May 27 03:22:30.148538 containerd[1586]: time="2025-05-27T03:22:30.148471987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" id:\"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" pid:3381 exited_at:{seconds:1748316150 nanos:148051980}" May 27 03:22:30.174500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3-rootfs.mount: Deactivated successfully. May 27 03:22:30.181063 kubelet[2725]: I0527 03:22:30.180964 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4vm2\" (UniqueName: \"kubernetes.io/projected/422199c7-1e48-4b96-9f11-fabed8cd678b-kube-api-access-k4vm2\") pod \"422199c7-1e48-4b96-9f11-fabed8cd678b\" (UID: \"422199c7-1e48-4b96-9f11-fabed8cd678b\") " May 27 03:22:30.181515 kubelet[2725]: I0527 03:22:30.181104 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/422199c7-1e48-4b96-9f11-fabed8cd678b-cilium-config-path\") pod \"422199c7-1e48-4b96-9f11-fabed8cd678b\" (UID: \"422199c7-1e48-4b96-9f11-fabed8cd678b\") " May 27 03:22:30.185643 kubelet[2725]: I0527 03:22:30.185589 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/422199c7-1e48-4b96-9f11-fabed8cd678b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "422199c7-1e48-4b96-9f11-fabed8cd678b" (UID: "422199c7-1e48-4b96-9f11-fabed8cd678b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:22:30.186640 kubelet[2725]: I0527 03:22:30.186559 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422199c7-1e48-4b96-9f11-fabed8cd678b-kube-api-access-k4vm2" (OuterVolumeSpecName: "kube-api-access-k4vm2") pod "422199c7-1e48-4b96-9f11-fabed8cd678b" (UID: "422199c7-1e48-4b96-9f11-fabed8cd678b"). InnerVolumeSpecName "kube-api-access-k4vm2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:22:30.188347 systemd[1]: var-lib-kubelet-pods-422199c7\x2d1e48\x2d4b96\x2d9f11\x2dfabed8cd678b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk4vm2.mount: Deactivated successfully. 
May 27 03:22:30.190371 containerd[1586]: time="2025-05-27T03:22:30.190316083Z" level=info msg="StopContainer for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" returns successfully" May 27 03:22:30.196321 containerd[1586]: time="2025-05-27T03:22:30.196270041Z" level=info msg="StopPodSandbox for \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\"" May 27 03:22:30.196502 containerd[1586]: time="2025-05-27T03:22:30.196350685Z" level=info msg="Container to stop \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:22:30.196502 containerd[1586]: time="2025-05-27T03:22:30.196370021Z" level=info msg="Container to stop \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:22:30.196502 containerd[1586]: time="2025-05-27T03:22:30.196382835Z" level=info msg="Container to stop \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:22:30.196502 containerd[1586]: time="2025-05-27T03:22:30.196401801Z" level=info msg="Container to stop \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:22:30.196502 containerd[1586]: time="2025-05-27T03:22:30.196413443Z" level=info msg="Container to stop \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:22:30.204641 systemd[1]: cri-containerd-171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1.scope: Deactivated successfully. 
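When containerd stops a pod sandbox it logs one "Container to stop ..." line per container that belonged to it, as in the 171b88b80ad1... teardown above. A sketch that groups those container IDs under the sandbox whose StopPodSandbox line precedes them; journal.txt is the same hypothetical export as before, and the regexes only aim at the exact message shapes visible in this journal.

```python
import re

SANDBOX = re.compile(r'msg="StopPodSandbox for \\"([0-9a-f]{64})\\""')
MEMBER = re.compile(r'msg="Container to stop \\"([0-9a-f]{64})\\"')

members, current = {}, None
with open("journal.txt") as fh:          # hypothetical plain-text journal export
    for line in fh:
        if m := SANDBOX.search(line):
            current = m.group(1)
            members.setdefault(current, [])
        elif current and (m := MEMBER.search(line)):
            members[current].append(m.group(1))

for sandbox, ids in members.items():
    print(f"{sandbox[:12]}: {len(ids)} containers -> {[i[:12] for i in ids]}")
```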
May 27 03:22:30.205842 containerd[1586]: time="2025-05-27T03:22:30.205794977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" id:\"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" pid:2893 exit_status:137 exited_at:{seconds:1748316150 nanos:205002272}" May 27 03:22:30.253129 containerd[1586]: time="2025-05-27T03:22:30.253066812Z" level=info msg="shim disconnected" id=171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1 namespace=k8s.io May 27 03:22:30.253129 containerd[1586]: time="2025-05-27T03:22:30.253113871Z" level=warning msg="cleaning up after shim disconnected" id=171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1 namespace=k8s.io May 27 03:22:30.253129 containerd[1586]: time="2025-05-27T03:22:30.253125533Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:22:30.269484 containerd[1586]: time="2025-05-27T03:22:30.269414304Z" level=info msg="received exit event sandbox_id:\"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" exit_status:137 exited_at:{seconds:1748316150 nanos:205002272}" May 27 03:22:30.269693 containerd[1586]: time="2025-05-27T03:22:30.269651164Z" level=info msg="TearDown network for sandbox \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" successfully" May 27 03:22:30.269693 containerd[1586]: time="2025-05-27T03:22:30.269690768Z" level=info msg="StopPodSandbox for \"171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1\" returns successfully" May 27 03:22:30.281912 kubelet[2725]: I0527 03:22:30.281826 2725 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4vm2\" (UniqueName: \"kubernetes.io/projected/422199c7-1e48-4b96-9f11-fabed8cd678b-kube-api-access-k4vm2\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.281912 kubelet[2725]: I0527 03:22:30.281877 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/422199c7-1e48-4b96-9f11-fabed8cd678b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.382900 kubelet[2725]: I0527 03:22:30.382845 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-cgroup\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.382900 kubelet[2725]: I0527 03:22:30.382895 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-etc-cni-netd\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.382900 kubelet[2725]: I0527 03:22:30.382915 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-bpf-maps\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383203 kubelet[2725]: I0527 03:22:30.382933 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-hostproc\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383203 
kubelet[2725]: I0527 03:22:30.382954 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-xtables-lock\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383203 kubelet[2725]: I0527 03:22:30.383007 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqg87\" (UniqueName: \"kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-kube-api-access-lqg87\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383203 kubelet[2725]: I0527 03:22:30.383038 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e464bfcb-84d6-4586-811f-f5524741755f-clustermesh-secrets\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383203 kubelet[2725]: I0527 03:22:30.383006 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383203 kubelet[2725]: I0527 03:22:30.383065 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e464bfcb-84d6-4586-811f-f5524741755f-cilium-config-path\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383417 kubelet[2725]: I0527 03:22:30.383012 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383417 kubelet[2725]: I0527 03:22:30.383027 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-hostproc" (OuterVolumeSpecName: "hostproc") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383417 kubelet[2725]: I0527 03:22:30.383084 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-lib-modules\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383417 kubelet[2725]: I0527 03:22:30.383103 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cni-path\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383417 kubelet[2725]: I0527 03:22:30.383121 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-net\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383417 kubelet[2725]: I0527 03:22:30.383144 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-hubble-tls\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383612 kubelet[2725]: I0527 03:22:30.383161 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-run\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383612 kubelet[2725]: I0527 03:22:30.383180 2725 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-kernel\") pod \"e464bfcb-84d6-4586-811f-f5524741755f\" (UID: \"e464bfcb-84d6-4586-811f-f5524741755f\") " May 27 03:22:30.383612 kubelet[2725]: I0527 03:22:30.383221 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.383612 kubelet[2725]: I0527 03:22:30.383235 2725 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.383612 kubelet[2725]: I0527 03:22:30.383247 2725 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-hostproc\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.383995 kubelet[2725]: I0527 03:22:30.383036 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383995 kubelet[2725]: I0527 03:22:30.383039 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383995 kubelet[2725]: I0527 03:22:30.383276 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383995 kubelet[2725]: I0527 03:22:30.383832 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cni-path" (OuterVolumeSpecName: "cni-path") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.383995 kubelet[2725]: I0527 03:22:30.383868 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.384312 kubelet[2725]: I0527 03:22:30.384236 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.384312 kubelet[2725]: I0527 03:22:30.384285 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:22:30.386556 kubelet[2725]: I0527 03:22:30.386517 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-kube-api-access-lqg87" (OuterVolumeSpecName: "kube-api-access-lqg87") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "kube-api-access-lqg87". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:22:30.387076 kubelet[2725]: I0527 03:22:30.387038 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e464bfcb-84d6-4586-811f-f5524741755f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:22:30.387360 kubelet[2725]: I0527 03:22:30.387323 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e464bfcb-84d6-4586-811f-f5524741755f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:22:30.387988 kubelet[2725]: I0527 03:22:30.387940 2725 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e464bfcb-84d6-4586-811f-f5524741755f" (UID: "e464bfcb-84d6-4586-811f-f5524741755f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:22:30.431135 systemd[1]: Removed slice kubepods-burstable-pode464bfcb_84d6_4586_811f_f5524741755f.slice - libcontainer container kubepods-burstable-pode464bfcb_84d6_4586_811f_f5524741755f.slice. May 27 03:22:30.431248 systemd[1]: kubepods-burstable-pode464bfcb_84d6_4586_811f_f5524741755f.slice: Consumed 7.939s CPU time, 123.6M memory peak, 368K read from disk, 15.9M written to disk. May 27 03:22:30.432395 systemd[1]: Removed slice kubepods-besteffort-pod422199c7_1e48_4b96_9f11_fabed8cd678b.slice - libcontainer container kubepods-besteffort-pod422199c7_1e48_4b96_9f11_fabed8cd678b.slice. May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483769 2725 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e464bfcb-84d6-4586-811f-f5524741755f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483811 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e464bfcb-84d6-4586-811f-f5524741755f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483820 2725 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483830 2725 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483839 2725 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483847 2725 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483854 2725 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.483905 kubelet[2725]: I0527 03:22:30.483862 2725 reconciler_common.go:299] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.484263 kubelet[2725]: I0527 03:22:30.483869 2725 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.484263 kubelet[2725]: I0527 03:22:30.483878 2725 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e464bfcb-84d6-4586-811f-f5524741755f-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 27 03:22:30.484263 kubelet[2725]: I0527 03:22:30.483886 2725 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqg87\" (UniqueName: \"kubernetes.io/projected/e464bfcb-84d6-4586-811f-f5524741755f-kube-api-access-lqg87\") on node \"localhost\" DevicePath \"\"" May 27 03:22:31.001406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1-rootfs.mount: Deactivated successfully. May 27 03:22:31.001548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-171b88b80ad1d619b0d90dd6694762a3f4146ef79e8b793477c5455f6419b9b1-shm.mount: Deactivated successfully. May 27 03:22:31.001653 systemd[1]: var-lib-kubelet-pods-e464bfcb\x2d84d6\x2d4586\x2d811f\x2df5524741755f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqg87.mount: Deactivated successfully. May 27 03:22:31.001753 systemd[1]: var-lib-kubelet-pods-e464bfcb\x2d84d6\x2d4586\x2d811f\x2df5524741755f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 03:22:31.001857 systemd[1]: var-lib-kubelet-pods-e464bfcb\x2d84d6\x2d4586\x2d811f\x2df5524741755f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 27 03:22:31.068479 kubelet[2725]: I0527 03:22:31.068430 2725 scope.go:117] "RemoveContainer" containerID="09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f" May 27 03:22:31.070189 containerd[1586]: time="2025-05-27T03:22:31.070144314Z" level=info msg="RemoveContainer for \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\"" May 27 03:22:31.085650 containerd[1586]: time="2025-05-27T03:22:31.085601323Z" level=info msg="RemoveContainer for \"09f7e2312598301e320fcb8c63a99469b23870897709b716821961fc430a445f\" returns successfully" May 27 03:22:31.085885 kubelet[2725]: I0527 03:22:31.085848 2725 scope.go:117] "RemoveContainer" containerID="4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3" May 27 03:22:31.087461 containerd[1586]: time="2025-05-27T03:22:31.087426416Z" level=info msg="RemoveContainer for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\"" May 27 03:22:31.097361 containerd[1586]: time="2025-05-27T03:22:31.097321922Z" level=info msg="RemoveContainer for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" returns successfully" May 27 03:22:31.097622 kubelet[2725]: I0527 03:22:31.097568 2725 scope.go:117] "RemoveContainer" containerID="6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146" May 27 03:22:31.099405 containerd[1586]: time="2025-05-27T03:22:31.099367624Z" level=info msg="RemoveContainer for \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\"" May 27 03:22:31.106691 containerd[1586]: time="2025-05-27T03:22:31.106638701Z" level=info msg="RemoveContainer for \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" returns successfully" May 27 03:22:31.106870 kubelet[2725]: I0527 03:22:31.106830 2725 scope.go:117] "RemoveContainer" containerID="2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3" May 27 03:22:31.109455 containerd[1586]: time="2025-05-27T03:22:31.109415941Z" level=info msg="RemoveContainer for \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\"" May 27 03:22:31.118685 containerd[1586]: time="2025-05-27T03:22:31.118636327Z" level=info msg="RemoveContainer for \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" returns successfully" May 27 03:22:31.118864 kubelet[2725]: I0527 03:22:31.118811 2725 scope.go:117] "RemoveContainer" containerID="075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14" May 27 03:22:31.120287 containerd[1586]: time="2025-05-27T03:22:31.120250138Z" level=info msg="RemoveContainer for \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\"" May 27 03:22:31.126935 containerd[1586]: time="2025-05-27T03:22:31.126897993Z" level=info msg="RemoveContainer for \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" returns successfully" May 27 03:22:31.127185 kubelet[2725]: I0527 03:22:31.127104 2725 scope.go:117] "RemoveContainer" containerID="7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657" May 27 03:22:31.128718 containerd[1586]: time="2025-05-27T03:22:31.128685985Z" level=info msg="RemoveContainer for \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\"" May 27 03:22:31.132489 containerd[1586]: time="2025-05-27T03:22:31.132462511Z" level=info msg="RemoveContainer for \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" returns successfully" May 27 03:22:31.132643 kubelet[2725]: I0527 03:22:31.132612 2725 scope.go:117] "RemoveContainer" 
containerID="4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3" May 27 03:22:31.132876 containerd[1586]: time="2025-05-27T03:22:31.132826672Z" level=error msg="ContainerStatus for \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\": not found" May 27 03:22:31.133037 kubelet[2725]: E0527 03:22:31.133013 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\": not found" containerID="4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3" May 27 03:22:31.133081 kubelet[2725]: I0527 03:22:31.133044 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3"} err="failed to get container status \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d9cfac9459c36bd4423f83cd1639cf2d65b1e9f3f6c6a0a0e56ae46871b57f3\": not found" May 27 03:22:31.133114 kubelet[2725]: I0527 03:22:31.133084 2725 scope.go:117] "RemoveContainer" containerID="6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146" May 27 03:22:31.133287 containerd[1586]: time="2025-05-27T03:22:31.133251758Z" level=error msg="ContainerStatus for \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\": not found" May 27 03:22:31.133430 kubelet[2725]: E0527 03:22:31.133398 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\": not found" containerID="6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146" May 27 03:22:31.133468 kubelet[2725]: I0527 03:22:31.133428 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146"} err="failed to get container status \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dbced1d437878d587ecbef33fbf35d326ed812a2caed1c8cc3a1f153c9f1146\": not found" May 27 03:22:31.133468 kubelet[2725]: I0527 03:22:31.133450 2725 scope.go:117] "RemoveContainer" containerID="2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3" May 27 03:22:31.133618 containerd[1586]: time="2025-05-27T03:22:31.133589149Z" level=error msg="ContainerStatus for \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\": not found" May 27 03:22:31.133714 kubelet[2725]: E0527 03:22:31.133688 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\": not found" 
containerID="2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3" May 27 03:22:31.133769 kubelet[2725]: I0527 03:22:31.133711 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3"} err="failed to get container status \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a802e36b742d3779893d19f1a5d85f6930b13d0b953e585ce34a48d390434f3\": not found" May 27 03:22:31.133769 kubelet[2725]: I0527 03:22:31.133728 2725 scope.go:117] "RemoveContainer" containerID="075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14" May 27 03:22:31.133909 containerd[1586]: time="2025-05-27T03:22:31.133870142Z" level=error msg="ContainerStatus for \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\": not found" May 27 03:22:31.134018 kubelet[2725]: E0527 03:22:31.133996 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\": not found" containerID="075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14" May 27 03:22:31.134018 kubelet[2725]: I0527 03:22:31.134015 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14"} err="failed to get container status \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\": rpc error: code = NotFound desc = an error occurred when try to find container \"075c645673dae00f4970c0bb0a93b73b39451f03435f89023df23b36da0a8e14\": not found" May 27 03:22:31.134088 kubelet[2725]: I0527 03:22:31.134026 2725 scope.go:117] "RemoveContainer" containerID="7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657" May 27 03:22:31.134198 containerd[1586]: time="2025-05-27T03:22:31.134158690Z" level=error msg="ContainerStatus for \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\": not found" May 27 03:22:31.134351 kubelet[2725]: E0527 03:22:31.134278 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\": not found" containerID="7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657" May 27 03:22:31.134351 kubelet[2725]: I0527 03:22:31.134302 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657"} err="failed to get container status \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\": rpc error: code = NotFound desc = an error occurred when try to find container \"7708aa2484161530289b6ef04fb7a7f659aec94bbd31fbdcaaa15fbfbb472657\": not found" May 27 03:22:31.545935 kubelet[2725]: E0527 03:22:31.545720 2725 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 03:22:31.896660 sshd[4825]: Connection closed by 10.0.0.1 port 43062 May 27 03:22:31.897334 sshd-session[4823]: pam_unix(sshd:session): session closed for user core May 27 03:22:31.909003 systemd[1]: sshd@54-10.0.0.71:22-10.0.0.1:43062.service: Deactivated successfully. May 27 03:22:31.910954 systemd[1]: session-55.scope: Deactivated successfully. May 27 03:22:31.911818 systemd-logind[1564]: Session 55 logged out. Waiting for processes to exit. May 27 03:22:31.914755 systemd[1]: Started sshd@55-10.0.0.71:22-10.0.0.1:43078.service - OpenSSH per-connection server daemon (10.0.0.1:43078). May 27 03:22:31.915679 systemd-logind[1564]: Removed session 55. May 27 03:22:31.975711 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 43078 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:31.977541 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:31.982154 systemd-logind[1564]: New session 56 of user core. May 27 03:22:31.989084 systemd[1]: Started session-56.scope - Session 56 of User core. May 27 03:22:32.426138 kubelet[2725]: I0527 03:22:32.426077 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422199c7-1e48-4b96-9f11-fabed8cd678b" path="/var/lib/kubelet/pods/422199c7-1e48-4b96-9f11-fabed8cd678b/volumes" May 27 03:22:32.426878 kubelet[2725]: I0527 03:22:32.426839 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e464bfcb-84d6-4586-811f-f5524741755f" path="/var/lib/kubelet/pods/e464bfcb-84d6-4586-811f-f5524741755f/volumes" May 27 03:22:32.573218 sshd[4985]: Connection closed by 10.0.0.1 port 43078 May 27 03:22:32.575207 sshd-session[4983]: pam_unix(sshd:session): session closed for user core May 27 03:22:32.586115 systemd[1]: sshd@55-10.0.0.71:22-10.0.0.1:43078.service: Deactivated successfully. May 27 03:22:32.590504 systemd[1]: session-56.scope: Deactivated successfully. May 27 03:22:32.591719 systemd-logind[1564]: Session 56 logged out. Waiting for processes to exit. May 27 03:22:32.603268 systemd[1]: Started sshd@56-10.0.0.71:22-10.0.0.1:43088.service - OpenSSH per-connection server daemon (10.0.0.1:43088). May 27 03:22:32.605438 systemd-logind[1564]: Removed session 56. May 27 03:22:32.629934 systemd[1]: Created slice kubepods-burstable-pod1a952612_c495_41af_a299_c149e1382b40.slice - libcontainer container kubepods-burstable-pod1a952612_c495_41af_a299_c149e1382b40.slice. May 27 03:22:32.661745 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 43088 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:22:32.663239 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:32.667639 systemd-logind[1564]: New session 57 of user core. May 27 03:22:32.677178 systemd[1]: Started session-57.scope - Session 57 of User core. 
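The "Removed slice" lines above for the old pods and the "Created slice" line for pod 1a952612-c495-41af-a299-c149e1382b40 show the naming rule for the per-pod slices: the QoS class is folded into the name and the dashes of the pod UID become underscores. Two small helpers sketched from that pattern alone; QoS classes other than burstable/besteffort are assumed to follow the same shape.

```python
def slice_for(uid: str, qos: str = "burstable") -> str:
    # pod UID dashes become underscores inside the slice name
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

def uid_from_slice(slice_name: str) -> str:
    stem = slice_name.removesuffix(".slice")
    return stem.split("-pod", 1)[1].replace("_", "-")

assert slice_for("1a952612-c495-41af-a299-c149e1382b40") == \
    "kubepods-burstable-pod1a952612_c495_41af_a299_c149e1382b40.slice"
assert uid_from_slice(
    "kubepods-besteffort-pod422199c7_1e48_4b96_9f11_fabed8cd678b.slice"
) == "422199c7-1e48-4b96-9f11-fabed8cd678b"
```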
May 27 03:22:32.695668 kubelet[2725]: I0527 03:22:32.695617 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a952612-c495-41af-a299-c149e1382b40-clustermesh-secrets\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.695668 kubelet[2725]: I0527 03:22:32.695659 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-bpf-maps\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696126 kubelet[2725]: I0527 03:22:32.695679 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-cni-path\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696126 kubelet[2725]: I0527 03:22:32.695693 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-xtables-lock\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696126 kubelet[2725]: I0527 03:22:32.695708 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a952612-c495-41af-a299-c149e1382b40-cilium-config-path\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696126 kubelet[2725]: I0527 03:22:32.695735 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-host-proc-sys-net\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696126 kubelet[2725]: I0527 03:22:32.695753 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7n95\" (UniqueName: \"kubernetes.io/projected/1a952612-c495-41af-a299-c149e1382b40-kube-api-access-m7n95\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696126 kubelet[2725]: I0527 03:22:32.695784 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a952612-c495-41af-a299-c149e1382b40-hubble-tls\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696325 kubelet[2725]: I0527 03:22:32.695805 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-etc-cni-netd\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696325 kubelet[2725]: I0527 03:22:32.695847 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-lib-modules\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696325 kubelet[2725]: I0527 03:22:32.695886 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-host-proc-sys-kernel\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696325 kubelet[2725]: I0527 03:22:32.695918 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-cilium-cgroup\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696325 kubelet[2725]: I0527 03:22:32.695944 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-hostproc\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696325 kubelet[2725]: I0527 03:22:32.695964 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a952612-c495-41af-a299-c149e1382b40-cilium-run\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.696501 kubelet[2725]: I0527 03:22:32.696011 2725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a952612-c495-41af-a299-c149e1382b40-cilium-ipsec-secrets\") pod \"cilium-zlbgx\" (UID: \"1a952612-c495-41af-a299-c149e1382b40\") " pod="kube-system/cilium-zlbgx"
May 27 03:22:32.728767 sshd[4999]: Connection closed by 10.0.0.1 port 43088
May 27 03:22:32.729096 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
May 27 03:22:32.741778 systemd[1]: sshd@56-10.0.0.71:22-10.0.0.1:43088.service: Deactivated successfully.
May 27 03:22:32.743641 systemd[1]: session-57.scope: Deactivated successfully.
May 27 03:22:32.744427 systemd-logind[1564]: Session 57 logged out. Waiting for processes to exit.
May 27 03:22:32.747190 systemd[1]: Started sshd@57-10.0.0.71:22-10.0.0.1:43104.service - OpenSSH per-connection server daemon (10.0.0.1:43104).
May 27 03:22:32.747774 systemd-logind[1564]: Removed session 57.
May 27 03:22:32.797294 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 43104 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:32.799867 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:32.822723 systemd-logind[1564]: New session 58 of user core.
May 27 03:22:32.833147 systemd[1]: Started session-58.scope - Session 58 of User core.
May 27 03:22:32.936149 containerd[1586]: time="2025-05-27T03:22:32.936102722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlbgx,Uid:1a952612-c495-41af-a299-c149e1382b40,Namespace:kube-system,Attempt:0,}"
May 27 03:22:32.965681 containerd[1586]: time="2025-05-27T03:22:32.965549741Z" level=info msg="connecting to shim 43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab" address="unix:///run/containerd/s/3f7125ed517e4f2b433190fac306e871007b9cb0c5a393dc0c53e6db09cb9f14" namespace=k8s.io protocol=ttrpc version=3
May 27 03:22:33.002169 systemd[1]: Started cri-containerd-43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab.scope - libcontainer container 43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab.
May 27 03:22:33.033829 containerd[1586]: time="2025-05-27T03:22:33.033784787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlbgx,Uid:1a952612-c495-41af-a299-c149e1382b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\""
May 27 03:22:33.045733 containerd[1586]: time="2025-05-27T03:22:33.045695414Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 03:22:33.055966 containerd[1586]: time="2025-05-27T03:22:33.055931132Z" level=info msg="Container 5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:33.064653 containerd[1586]: time="2025-05-27T03:22:33.064508575Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\""
May 27 03:22:33.065329 containerd[1586]: time="2025-05-27T03:22:33.065301139Z" level=info msg="StartContainer for \"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\""
May 27 03:22:33.068315 containerd[1586]: time="2025-05-27T03:22:33.068224494Z" level=info msg="connecting to shim 5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a" address="unix:///run/containerd/s/3f7125ed517e4f2b433190fac306e871007b9cb0c5a393dc0c53e6db09cb9f14" protocol=ttrpc version=3
May 27 03:22:33.099178 systemd[1]: Started cri-containerd-5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a.scope - libcontainer container 5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a.
May 27 03:22:33.143025 containerd[1586]: time="2025-05-27T03:22:33.142947233Z" level=info msg="StartContainer for \"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\" returns successfully"
May 27 03:22:33.152520 systemd[1]: cri-containerd-5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a.scope: Deactivated successfully.
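The RunPodSandbox, CreateContainer, and StartContainer messages above are containerd's CRI plugin logging requests from the kubelet. The Go sketch below shows what a RunPodSandbox call issued directly against the CRI endpoint could look like; the socket path assumes a stock containerd install, only the pod metadata mirrors the log, and the snippet is illustrative rather than the kubelet's actual code path.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Stock containerd CRI endpoint; the per-container shim sockets under
	// /run/containerd/s/ seen in the log are internal to containerd.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata mirrors the &PodSandboxMetadata{...} printed above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-zlbgx",
				Uid:       "1a952612-c495-41af-a299-c149e1382b40",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// For the kubelet's own call the log reports sandbox id 43d9429a19d7...
	fmt.Println("sandbox id:", resp.PodSandboxId)
}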
May 27 03:22:33.153730 containerd[1586]: time="2025-05-27T03:22:33.153670957Z" level=info msg="received exit event container_id:\"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\" id:\"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\" pid:5078 exited_at:{seconds:1748316153 nanos:153304882}"
May 27 03:22:33.153859 containerd[1586]: time="2025-05-27T03:22:33.153781146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\" id:\"5768e4d22f0c1a30e6458a2842d9285272f157dca61449e9945cbc64ed49039a\" pid:5078 exited_at:{seconds:1748316153 nanos:153304882}"
May 27 03:22:34.094820 containerd[1586]: time="2025-05-27T03:22:34.094753934Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 03:22:34.113347 containerd[1586]: time="2025-05-27T03:22:34.113281750Z" level=info msg="Container e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:34.123115 containerd[1586]: time="2025-05-27T03:22:34.123055180Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\""
May 27 03:22:34.123819 containerd[1586]: time="2025-05-27T03:22:34.123760848Z" level=info msg="StartContainer for \"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\""
May 27 03:22:34.125246 containerd[1586]: time="2025-05-27T03:22:34.125210429Z" level=info msg="connecting to shim e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551" address="unix:///run/containerd/s/3f7125ed517e4f2b433190fac306e871007b9cb0c5a393dc0c53e6db09cb9f14" protocol=ttrpc version=3
May 27 03:22:34.148317 systemd[1]: Started cri-containerd-e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551.scope - libcontainer container e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551.
May 27 03:22:34.192292 containerd[1586]: time="2025-05-27T03:22:34.192230803Z" level=info msg="StartContainer for \"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\" returns successfully"
May 27 03:22:34.198476 systemd[1]: cri-containerd-e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551.scope: Deactivated successfully.
May 27 03:22:34.198874 containerd[1586]: time="2025-05-27T03:22:34.198799113Z" level=info msg="received exit event container_id:\"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\" id:\"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\" pid:5123 exited_at:{seconds:1748316154 nanos:198357985}"
May 27 03:22:34.198874 containerd[1586]: time="2025-05-27T03:22:34.198850590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\" id:\"e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551\" pid:5123 exited_at:{seconds:1748316154 nanos:198357985}"
May 27 03:22:34.807068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e73d8666fc9f0357680ba6e01fa4db5ff38ed36767405f76f25732ea87b0b551-rootfs.mount: Deactivated successfully.
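The "received exit event" and "TaskExit event in podsandbox handler" lines above and below come from containerd's event stream, which the CRI plugin subscribes to in order to notice short-lived init containers exiting. A hedged Go sketch of subscribing to the same /tasks/exit topic with the containerd client follows; the socket path and the k8s.io namespace match a default install, everything else is illustrative.

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The sandboxes above run in the k8s.io namespace (namespace=k8s.io in the log).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ch, errs := client.EventService().Subscribe(ctx)
	for {
		select {
		case e := <-ch:
			if e.Topic == "/tasks/exit" { // same topic behind the TaskExit lines above
				fmt.Printf("%s %s %s\n", e.Timestamp.Format("15:04:05"), e.Namespace, e.Topic)
			}
		case err := <-errs:
			panic(err)
		}
	}
}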
May 27 03:22:35.096594 containerd[1586]: time="2025-05-27T03:22:35.096461159Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 03:22:35.109360 containerd[1586]: time="2025-05-27T03:22:35.109295183Z" level=info msg="Container bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:35.125793 containerd[1586]: time="2025-05-27T03:22:35.125721038Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\""
May 27 03:22:35.126679 containerd[1586]: time="2025-05-27T03:22:35.126551915Z" level=info msg="StartContainer for \"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\""
May 27 03:22:35.128478 containerd[1586]: time="2025-05-27T03:22:35.128441418Z" level=info msg="connecting to shim bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a" address="unix:///run/containerd/s/3f7125ed517e4f2b433190fac306e871007b9cb0c5a393dc0c53e6db09cb9f14" protocol=ttrpc version=3
May 27 03:22:35.175261 systemd[1]: Started cri-containerd-bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a.scope - libcontainer container bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a.
May 27 03:22:35.225669 containerd[1586]: time="2025-05-27T03:22:35.225621709Z" level=info msg="StartContainer for \"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\" returns successfully"
May 27 03:22:35.229192 systemd[1]: cri-containerd-bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a.scope: Deactivated successfully.
May 27 03:22:35.229934 containerd[1586]: time="2025-05-27T03:22:35.229908159Z" level=info msg="received exit event container_id:\"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\" id:\"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\" pid:5166 exited_at:{seconds:1748316155 nanos:229710684}"
May 27 03:22:35.230318 containerd[1586]: time="2025-05-27T03:22:35.230272890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\" id:\"bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a\" pid:5166 exited_at:{seconds:1748316155 nanos:229710684}"
May 27 03:22:35.254638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bae27865949099322cd5a2192c3e0666857a42591cd7931b45c1016b645bae2a-rootfs.mount: Deactivated successfully.
May 27 03:22:36.106544 containerd[1586]: time="2025-05-27T03:22:36.106486992Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 03:22:36.132225 containerd[1586]: time="2025-05-27T03:22:36.131954414Z" level=info msg="Container ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:36.135880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773729082.mount: Deactivated successfully.
May 27 03:22:36.156360 containerd[1586]: time="2025-05-27T03:22:36.156294007Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\""
May 27 03:22:36.157098 containerd[1586]: time="2025-05-27T03:22:36.157050642Z" level=info msg="StartContainer for \"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\""
May 27 03:22:36.158222 containerd[1586]: time="2025-05-27T03:22:36.158194461Z" level=info msg="connecting to shim ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee" address="unix:///run/containerd/s/3f7125ed517e4f2b433190fac306e871007b9cb0c5a393dc0c53e6db09cb9f14" protocol=ttrpc version=3
May 27 03:22:36.188170 systemd[1]: Started cri-containerd-ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee.scope - libcontainer container ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee.
May 27 03:22:36.219796 systemd[1]: cri-containerd-ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee.scope: Deactivated successfully.
May 27 03:22:36.222514 containerd[1586]: time="2025-05-27T03:22:36.220463693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\" id:\"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\" pid:5207 exited_at:{seconds:1748316156 nanos:220147703}"
May 27 03:22:36.243932 containerd[1586]: time="2025-05-27T03:22:36.243871067Z" level=info msg="received exit event container_id:\"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\" id:\"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\" pid:5207 exited_at:{seconds:1748316156 nanos:220147703}"
May 27 03:22:36.253229 containerd[1586]: time="2025-05-27T03:22:36.253187708Z" level=info msg="StartContainer for \"ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee\" returns successfully"
May 27 03:22:36.267472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1ab8c91bcdcfd1ca30bf01617e334e1e29eca47f19f00c90b41f3c4e7166ee-rootfs.mount: Deactivated successfully.
May 27 03:22:36.547524 kubelet[2725]: E0527 03:22:36.547472 2725 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 03:22:37.123478 containerd[1586]: time="2025-05-27T03:22:37.123407229Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 03:22:37.157580 containerd[1586]: time="2025-05-27T03:22:37.157485858Z" level=info msg="Container 88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:37.175009 containerd[1586]: time="2025-05-27T03:22:37.174909261Z" level=info msg="CreateContainer within sandbox \"43d9429a19d75c069240eb8348ebf7ef39df80b8572031ee9b10dbc267f917ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\""
May 27 03:22:37.175613 containerd[1586]: time="2025-05-27T03:22:37.175581916Z" level=info msg="StartContainer for \"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\""
May 27 03:22:37.176621 containerd[1586]: time="2025-05-27T03:22:37.176579789Z" level=info msg="connecting to shim 88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268" address="unix:///run/containerd/s/3f7125ed517e4f2b433190fac306e871007b9cb0c5a393dc0c53e6db09cb9f14" protocol=ttrpc version=3
May 27 03:22:37.202179 systemd[1]: Started cri-containerd-88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268.scope - libcontainer container 88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268.
May 27 03:22:37.250456 containerd[1586]: time="2025-05-27T03:22:37.250395947Z" level=info msg="StartContainer for \"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" returns successfully"
May 27 03:22:37.326345 containerd[1586]: time="2025-05-27T03:22:37.326301358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" id:\"ebb9495a8e6ea22b6d4748d74cd0c4a235e8a9cc103910b12079e3e7dead7836\" pid:5275 exited_at:{seconds:1748316157 nanos:325956825}"
May 27 03:22:37.736997 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 03:22:39.313433 containerd[1586]: time="2025-05-27T03:22:39.313355428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" id:\"92dbd11f8cc1fad94bed596872aa6612e996c39a171c4c6bce95d08650045975\" pid:5392 exit_status:1 exited_at:{seconds:1748316159 nanos:312642657}"
May 27 03:22:39.422922 kubelet[2725]: E0527 03:22:39.422839 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-cpd8x" podUID="4741fe46-8656-4d73-808b-6eb0281dd736"
May 27 03:22:41.022542 systemd-networkd[1502]: lxc_health: Link UP
May 27 03:22:41.031617 systemd-networkd[1502]: lxc_health: Gained carrier
May 27 03:22:41.249009 kubelet[2725]: I0527 03:22:41.248429 2725 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T03:22:41Z","lastTransitionTime":"2025-05-27T03:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 03:22:41.423890 kubelet[2725]: E0527 03:22:41.423195 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-cpd8x" podUID="4741fe46-8656-4d73-808b-6eb0281dd736"
May 27 03:22:41.441867 containerd[1586]: time="2025-05-27T03:22:41.441780216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" id:\"1f27c115ecb38ab8ffdf4bb9a0bc8e9dec1802a2b44d24459367a58dc4de8a0b\" pid:5802 exited_at:{seconds:1748316161 nanos:440774359}"
May 27 03:22:42.059252 systemd-networkd[1502]: lxc_health: Gained IPv6LL
May 27 03:22:42.957179 kubelet[2725]: I0527 03:22:42.957079 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zlbgx" podStartSLOduration=10.957036223 podStartE2EDuration="10.957036223s" podCreationTimestamp="2025-05-27 03:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:22:38.129276852 +0000 UTC m=+251.811252216" watchObservedRunningTime="2025-05-27 03:22:42.957036223 +0000 UTC m=+256.639011577"
May 27 03:22:43.560632 containerd[1586]: time="2025-05-27T03:22:43.560573456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" id:\"987ddf6b11a8e6d1a1a872c82cd5650cb582719bd0949f6a15e53a365bf6d3b8\" pid:5839 exited_at:{seconds:1748316163 nanos:560128052}"
May 27 03:22:45.668524 containerd[1586]: time="2025-05-27T03:22:45.668460292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" id:\"d79ef52383e00c388dcefca4c6055513d58c60305dbb520a78040647c78197e6\" pid:5870 exited_at:{seconds:1748316165 nanos:668076534}"
May 27 03:22:47.764770 containerd[1586]: time="2025-05-27T03:22:47.764707456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88613035850e6f6b3afabcdd78e2db3c0012c5111b7934cd6a349ce14dd47268\" id:\"d40445e6ac50042e282dae678e1e6c63529d3d9f89ae78abf200fb4475ea7162\" pid:5896 exited_at:{seconds:1748316167 nanos:764123650}"
May 27 03:22:47.778402 sshd[5013]: Connection closed by 10.0.0.1 port 43104
May 27 03:22:47.778907 sshd-session[5006]: pam_unix(sshd:session): session closed for user core
May 27 03:22:47.783251 systemd[1]: sshd@57-10.0.0.71:22-10.0.0.1:43104.service: Deactivated successfully.
May 27 03:22:47.785501 systemd[1]: session-58.scope: Deactivated successfully.
May 27 03:22:47.786461 systemd-logind[1564]: Session 58 logged out. Waiting for processes to exit.
May 27 03:22:47.787876 systemd-logind[1564]: Removed session 58.